{
"paper_id": "K15-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:42.631590Z"
},
"title": "Task-Oriented Learning of Word Embeddings for Semantic Relation Classification",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": "",
"affiliation": {},
"email": "pontus@stenetorp.se"
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": "",
"affiliation": {},
"email": "makoto-miwa@toyota-ti.ac.jp"
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": "",
"affiliation": {},
"email": "tsuruoka@logos.t.u-tokyo.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel learning method for word embeddings designed for relation classification. Our word embeddings are trained by predicting words between noun pairs using lexical relation-specific features on a large unlabeled corpus. This allows us to explicitly incorporate relationspecific information into the word embeddings. The learned word embeddings are then used to construct feature vectors for a relation classification model. On a wellestablished semantic relation classification task, our method significantly outperforms a baseline based on a previously introduced word embedding method, and compares favorably to previous state-of-the-art models that use syntactic information or manually constructed external resources.",
"pdf_parse": {
"paper_id": "K15-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel learning method for word embeddings designed for relation classification. Our word embeddings are trained by predicting words between noun pairs using lexical relation-specific features on a large unlabeled corpus. This allows us to explicitly incorporate relationspecific information into the word embeddings. The learned word embeddings are then used to construct feature vectors for a relation classification model. On a wellestablished semantic relation classification task, our method significantly outperforms a baseline based on a previously introduced word embedding method, and compares favorably to previous state-of-the-art models that use syntactic information or manually constructed external resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic classification of semantic relations has a variety of applications, such as information extraction and the construction of semantic networks (Girju et al., 2007; Hendrickx et al., 2010) . A traditional approach to relation classification is to train classifiers using various kinds of features with class labels annotated by humans. Carefully crafted features derived from lexical, syntactic, and semantic resources play a significant role in achieving high accuracy for semantic relation classification (Rink and Harabagiu, 2010) .",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "(Girju et al., 2007;",
"ref_id": "BIBREF11"
},
{
"start": 172,
"end": 195,
"text": "Hendrickx et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 514,
"end": 540,
"text": "(Rink and Harabagiu, 2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years there has been an increasing interest in using word embeddings as an alternative to traditional hand-crafted features. Word embeddings are represented as real-valued vectors and capture syntactic and semantic similarity between words. For example, word2vec 1 (Mikolov et al., 2013b ) is a well-established tool for learning word embeddings. Although word2vec has successfully been used to learn word embeddings, these kinds of word embeddings capture only co-occurrence relationships between words (Levy and Goldberg, 2014) . While simply adding word embeddings trained using window-based contexts as additional features to existing systems has proven valuable (Turian et al., 2010) , more recent studies have focused on how to tune and enhance word embeddings for specific tasks (Bansal et al., 2014; Boros et al., 2014; Chen et al., 2014; Guo et al., 2014 ; Nguyen and Grishman, 2014) and we continue this line of research for the task of relation classification.",
"cite_spans": [
{
"start": 275,
"end": 297,
"text": "(Mikolov et al., 2013b",
"ref_id": "BIBREF23"
},
{
"start": 514,
"end": 539,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF21"
},
{
"start": 677,
"end": 698,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF32"
},
{
"start": 796,
"end": 817,
"text": "(Bansal et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 818,
"end": 837,
"text": "Boros et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 838,
"end": 856,
"text": "Chen et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 857,
"end": 873,
"text": "Guo et al., 2014",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we present a learning method for word embeddings specifically designed to be useful for relation classification. The overview of our system and the embedding learning process are shown in Figure 1 . First we train word embeddings by predicting each of the words between noun pairs using lexical relation-specific features on a large unlabeled corpus. We then use the word embeddings to construct lexical feature vectors for relation classification. Lastly, the feature vectors are used to train a relation classification model. We evaluate our method on a well-established semantic relation classification task and compare it to a baseline based on word2vec embeddings and previous state-of-the-art models that rely on either manually crafted features, syntactic parses or external semantic resources. Our method significantly outperforms the word2vec-based baseline, and compares favorably with previous stateof-the-art models, despite relying only on lexi- Figure 1 : The overview of our system (a) and the embedding learning method (b). In the example sentence, each of are, caused, and by is treated as a target word to be predicted during training.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 1",
"ref_id": null
},
{
"start": 972,
"end": 980,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "cal level features and no external annotated resources. Furthermore, our qualitative analysis of the learned embeddings shows that n-grams of our embeddings capture salient syntactic patterns similar to semantic relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A traditional approach to relation classification is to train classifiers in a supervised fashion using a variety of features. These features include lexical bag-of-words features and features based on syntactic parse trees. For syntactic parse trees, the paths between the target entities on constituency and dependency trees have been demonstrated to be useful (Bunescu and Mooney, 2005; Zhang et al., 2006 ). On the shared task introduced by Hendrickx et al. (2010), Rink and Harabagiu (2010) achieved the best score using a variety of handcrafted features which were then used to train a Support Vector Machine (SVM).",
"cite_spans": [
{
"start": 363,
"end": 389,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 390,
"end": 408,
"text": "Zhang et al., 2006",
"ref_id": "BIBREF36"
},
{
"start": 470,
"end": 495,
"text": "Rink and Harabagiu (2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, word embeddings have become popular as an alternative to hand-crafted features (Collobert et al., 2011) . However, one of the limitations is that word embeddings are usually learned by predicting a target word in its context, leading to only local co-occurrence information being captured (Levy and Goldberg, 2014) . Thus, several recent studies have focused on overcoming this limitation. Le and Mikolov (2014) integrated paragraph information into a word2vec-based model, which allowed them to capture paragraph-level information. For dependency parsing, Bansal et al. (2014) and Chen et al. (2014) found ways to improve performance by integrating dependencybased context information into their embeddings. Bansal et al. (2014) trained embeddings by defining parent and child nodes in dependency trees as contexts. Chen et al. (2014) introduced the concept of feature embeddings induced by parsing a large unannotated corpus and then learning embeddings for the manually crafted features. For information extraction, Boros et al. (2014) trained word embeddings relevant for event role extraction, and Nguyen and Grishman (2014) employed word embeddings for domain adaptation of relation extraction. Another kind of task-specific word embeddings was proposed by Tang et al. (2014) , which used sentiment labels on tweets to adapt word embeddings for a sentiment analysis tasks. However, such an approach is only feasible when a large amount of labeled data is available.",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 324,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF21"
},
{
"start": 400,
"end": 421,
"text": "Le and Mikolov (2014)",
"ref_id": "BIBREF20"
},
{
"start": 567,
"end": 587,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF0"
},
{
"start": 592,
"end": 610,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 719,
"end": 739,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF0"
},
{
"start": 827,
"end": 845,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 1029,
"end": 1048,
"text": "Boros et al. (2014)",
"ref_id": "BIBREF2"
},
{
"start": 1273,
"end": 1291,
"text": "Tang et al. (2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose a novel method for learning word embeddings designed for relation classification. The word embeddings are trained by predicting each word between noun pairs, given the corresponding low-level features for relation classification. In general, to classify relations between pairs of nouns the most important features come from the pairs themselves and the words between and around the pairs (Hendrickx et al., 2010) . For example, in the sentence in Figure 1 (b) there is a cause-effect relationship between the two nouns conflicts and players. To classify the relation, the most common features are the noun pair (conflicts, players), the words between the noun pair (are, caused, by), the words before the pair (the, external), and the words after the pair (playing, tiles, to, ...). As shown by Rink and Harabagiu (2010) , the words between the noun pairs are the most effective among these features. Our main idea is to treat the most important features (the words between the noun pairs) as the targets to be predicted and other lexical features (noun pairs, words outside them) as their contexts. Due to this, we expect our embeddings to capture relevant features for relation classification better than previous models which only use window-based contexts. In this section we first describe the learning process for the word embeddings, focusing on lexical features for relation classification (Figure 1 (b) ). We then propose a simple and powerful technique to construct features which serve as input for a softmax classifier. The overview of our proposed system is shown in Figure 1 (a).",
"cite_spans": [
{
"start": 400,
"end": 424,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 807,
"end": 832,
"text": "Rink and Harabagiu (2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 459,
"end": 467,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1410,
"end": 1423,
"text": "(Figure 1 (b)",
"ref_id": null
},
{
"start": 1592,
"end": 1600,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation Classification Using Word Embedding-based Features",
"sec_num": "3"
},
{
"text": "Assume that there is a noun pair n = (n 1 , n 2 ) in a sentence with M in words between the pair and M out words before and after the pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "\u2022 w in = (w in 1 , . . . , w in M in ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "\u2022 w bef = (w bef 1 , . . . , w bef Mout ) , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "\u2022 w af t = (w af t 1 , . . . , w af t Mout ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "Our method predicts each target word w in i \u2208 w in using three kinds of information: n, words around w in i in w in , and words in w bef and w af t . Words are embedded in a d-dimensional vector space and we refer to these vectors as word embeddings. To discriminate between words in n from those in w in , w bef , and w af t , we have two sets of word embeddings: N \u2208 R d\u00d7|N | and W \u2208 R d\u00d7|W| . W is a set of words and N is also a set of words but contains only nouns. Hence, the word cause has two embeddings: one in N and another in W. In general cause is used as a noun and a verb, and thus we expect the noun embeddings to capture the meanings focusing on their noun usage. This is inspired by some recent work on word representations that explicitly assigns an independent representation for each word usage according to its part-of-speech tag (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Hashimoto et al., 2013; Hashimoto et al., 2014; Kartsaklis and Sadrzadeh, 2013) .",
"cite_spans": [
{
"start": 850,
"end": 879,
"text": "(Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 880,
"end": 913,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF12"
},
{
"start": 914,
"end": 937,
"text": "Hashimoto et al., 2013;",
"ref_id": "BIBREF14"
},
{
"start": 938,
"end": 961,
"text": "Hashimoto et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 962,
"end": 993,
"text": "Kartsaklis and Sadrzadeh, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "A feature vector f \u2208 R 2d(2+c)\u00d71 is constructed to predict w in i by concatenating word embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "f = [N(n 1 ); N(n 2 ); W(w in i\u22121 ); . . . ; W(w in i\u2212c ); W(w in i+1 ); . . . ; W(w in i+c ); 1 M out Mout \u2211 j=1 W(w bef j ); 1 M out Mout \u2211 j=1 W(w af t j )] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "(1) N(\u2022) and W(\u2022) \u2208 R d\u00d71 corresponds to each word and c is the context size. A special NULL token is used if i \u2212 j is smaller than 1 or i + j is larger than M in for each j \u2208 {1, 2, . . . , c}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "Our method then estimates a conditional probability p(w|f ) that the target word is a word w given the feature vector f , using a logistic regression model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|f ) = \u03c3(W(w) \u2022 f + b(w)) ,",
"eq_num": "(2)"
}
],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "whereW(w) \u2208 R 2d(2+c)\u00d71 is a weight vector for w, b(w) \u2208 R is a bias for w, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "\u03c3(x) = 1 1+e \u2212x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "is the logistic function. Each column vector i\u00f1 W \u2208 R 2d(c+1)\u00d7|W| corresponds to a word. That is, we assign a logistic regression model for each word, and we can train the embeddings using the one-versus-rest approach to make p(w in i |f ) larger than p(w \u2032 |f ) for w \u2032 \u0338 = w in i . However, naively optimizing the parameters of those logistic regression models would lead to prohibitive computational cost since it grows linearly with the size of the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "When training we employ several procedures introduced by Mikolov et al. (2013b), namely, negative sampling, a modified unigram noise distribution and subsampling. For negative sampling the model parameters N, W,W, and b are learned by maximizing the objective function J unlabeled :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 n M in \u2211 i=1 \uf8eb \uf8ed log(p(w in i |f )) + k \u2211 j=1 log(1 \u2212 p(w \u2032 j |f )) \uf8f6 \uf8f8 ,",
"eq_num": "(3)"
}
],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "where w \u2032 j is a word randomly drawn from the unigram noise distribution weighted by an exponent of 0.75. Maximizing J unlabeled means that our method can discriminate between each target word and k noise words given the target word's context. This approach is much less computationally expensive than the one-versus-rest approach and has proven effective in learning word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "To reduce redundancy during training we use subsampling. A training sample, whose target word is w, is discarded with the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "P d (w) = 1 \u2212 \u221a t p(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": ", where t is a threshold which is set to 10 \u22125 and p(w) is a probability corresponding to the frequency of w in the training corpus. The more frequent a target word is, the more likely it is to be discarded. To further emphasize infrequent words, we apply the subsampling approach not only to target words, but also to noun pairs; concretely, by drawing two random numbers r 1 and r 2 , a training sample whose noun pair is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "(n 1 , n 2 ) is discarded if P d (n 1 ) is larger than r 1 or P d (n 2 ) is larger than r 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "Since the feature vector f is constructed as defined in Eq. 1, at each training step,W(w) is updated based on information about what pair of nouns surrounds w, what word n-grams appear in a small window around w, and what words appear outside the noun pair. Hence, the weight vector W(w) captures rich information regarding the target word w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Embeddings",
"sec_num": "3.1"
},
{
"text": "Once the word embeddings are trained, we can use them for relation classification. Given a noun pair n = (n 1 , n 2 ) with its context words w in , w bef , and w af t , we construct a feature vector to classify the relation between n 1 and n 2 by concatenating three kinds of feature vectors: g n the word embeddings of the noun pair, g in the averaged n-gram embeddings between the pair, and g out the concatenation of the averaged word embeddings in w bef and w af t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "The feature vector g n \u2208 R 2d\u00d71 is the concatenation of N(n 1 ) and N(n 2 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g n = [N(n 1 ); N(n 2 )] .",
"eq_num": "(4)"
}
],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "Words between the noun pair contribute to classifying the relation, and one of the most common ways to incorporate an arbitrary number of words is treating them as a bag of words. However, word order information is lost for bag-of-words features such as averaged word embeddings. To incorporate the word order information, we first define ngram embeddings h i \u2208 R 4d(1+c)\u00d71 between the noun pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [W(w in i\u22121 ); . . . ; W(w in i\u2212c ); W(w in i+1 ); . . . ; W(w in i+c );W(w in i )] .",
"eq_num": "(5)"
}
],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "Note thatW can also be used and that the value used for n is (2c + 1). As described in Section 3.1, W captures meaningful information about each word and after the first embedding learning step we can treat the embeddings inW as features for the words. Mnih and Kavukcuoglu (2013) have demonstrated that using embeddings like those i\u00f1 W is useful in representing the words. We then compute the feature vector g in by averaging h i :",
"cite_spans": [
{
"start": 253,
"end": 280,
"text": "Mnih and Kavukcuoglu (2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g in = 1 M in M in \u2211 i=1 h i .",
"eq_num": "(6)"
}
],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "We use the averaging approach since M in depends on each instance. The feature vector g in allows us to represent word sequences of arbitrary lengths as fixed-length feature vectors using the simple operations: concatenation and averaging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "The words before and after the noun pair are sometimes important in classifying the relation. For example, in the phrase \"pour n 1 into n 2 \", the word pour should be helpful in classifying the relation. As with Eq. (1), we use the concatenation of the averaged word embeddings of words before and after the noun pair to compute the feature vector g out \u2208 R 2d\u00d71 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "g out = 1 M out [ Mout \u2211 j=1 W(w bef j ); Mout \u2211 j=1 W(w af t j )] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "(7) As described above, the overall feature vector e \u2208 R 4d(2+c)\u00d71 is constructed by concatenating g n , g in , and g out . We would like to emphasize that we only use simple operations: averaging and concatenating the learned word embeddings. The feature vector e is then used as input for a softmax classifier, without any complex transformation such as matrix multiplication with non-linear functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Feature Vectors",
"sec_num": "3.2"
},
{
"text": "Given a relation classification task we train a softmax classifier using the feature vector e described in Section 3.2. For each k-th training sample with a corresponding label l k among L predefined labels, we compute a conditional probability given its feature vector e k :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(l k |e k ) = exp(o(l k )) \u2211 L i=1 exp(o(i)) ,",
"eq_num": "(8)"
}
],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "where o \u2208 R L\u00d71 is defined as o = Se k + s, and S \u2208 R L\u00d74d(2+c) and s \u2208 R L\u00d71 are the softmax parameters. o(i) is the i-th element of o. We then define the objective function as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J labeled = K \u2211 k=1 log(p(l k |e k )) \u2212 \u03bb 2 \u2225\u03b8\u2225 2 .",
"eq_num": "(9)"
}
],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "K is the number of training samples and \u03bb controls the L-2 regularization. \u03b8 = (N, W,W, S, s) is the set of parameters and J labeled is maximized using AdaGrad (Duchi et al., 2011) . We have found that dropout (Hinton et al., 2012) is helpful in preventing our model from overfitting. Concretely, elements in e are randomly omitted with a probability of 0.5 at each training step. Recently dropout has been applied to deep neural network models for natural language processing tasks and proven effective (Irsoy and Cardie, 2014; Paulus et al., 2014) .",
"cite_spans": [
{
"start": 160,
"end": 180,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 210,
"end": 231,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 504,
"end": 528,
"text": "(Irsoy and Cardie, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 529,
"end": 549,
"text": "Paulus et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "In what follows, we refer to the above method as RelEmb. While RelEmb uses only low-level features, a variety of useful features have been proposed for relation classification. Among them, we use dependency path features (Bunescu and Mooney, 2005 ) based on the untyped binary dependencies of the Stanford parser to find the shortest path between target nouns. The dependency path features are computed by averaging word embeddings from W on the shortest path, and are then concatenated to the feature vector e. Furthermore, we directly incorporate semantic information using word-level semantic features from Named Entity (NE) tags and WordNet hypernyms, as used in previous work (Rink and Harabagiu, 2010; Socher et al., 2012; Yu et al., 2014) . We refer to this extended method as RelEmb FULL . Concretely, RelEmb FULL uses the same binary features as in Socher et al. (2012) . The features come from NE tags and WordNet hypernym tags of target nouns provided by a sense tagger (Ciaramita and Altun, 2006).",
"cite_spans": [
{
"start": 221,
"end": 246,
"text": "(Bunescu and Mooney, 2005",
"ref_id": "BIBREF3"
},
{
"start": 681,
"end": 707,
"text": "(Rink and Harabagiu, 2010;",
"ref_id": "BIBREF29"
},
{
"start": 708,
"end": 728,
"text": "Socher et al., 2012;",
"ref_id": "BIBREF30"
},
{
"start": 729,
"end": 745,
"text": "Yu et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 858,
"end": 878,
"text": "Socher et al. (2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.3"
},
{
"text": "For pre-training we used a snapshot of the English Wikipedia 2 from November 2013. First, we extracted 80 million sentences from the original Wikipedia file, and then used Enju 3 (Miyao and Tsujii, 2008) to automatically assign part-ofspeech (POS) tags. From the POS tags we used NN, NNS, NNP, or NNPS to locate noun pairs in the corpus. We then collected training data by listing pairs of nouns and the words between, before, and after the noun pairs. A noun pair was omitted if the number of words between the pair was larger than 10 and we consequently collected 1.4 billion pairs of nouns and their contexts 4 . We used the 300,000 most frequent words and the 300,000 most frequent nouns and treated out-of-vocabulary words as a special UNK token.",
"cite_spans": [
{
"start": 179,
"end": 203,
"text": "(Miyao and Tsujii, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "4.1"
},
{
"text": "We initialized the embedding matrices N and W with zero-mean gaussian noise with a variance of 1 d .W and b were zero-initialized. The model parameters were optimized by maximizing the objective function in Eq. (3) using stochastic gradient ascent. The learning rate was set to \u03b1 and linearly decreased to 0 during training, as described in Mikolov et al. (2013a) . The hyperparameters are the embedding dimensionality d, the context size c, the number of negative samples k, the initial learning rate \u03b1, and M out , the number of words outside the noun pairs. For hyperparameter tuning, we first fixed \u03b1 to 0.025 and M out to 5, and then set d to {50, 100, 300}, c to {1, 2, 3}, and k to {5, 15, 25}.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization and Optimization",
"sec_num": "4.2"
},
{
"text": "At the supervised learning step, we initialized S and s with zeros. The hyperparameters, the learning rate for AdaGrad, \u03bb, M out , and the number of iterations, were determined via 10-fold cross validation on the training set for each setting. Note that M out can be tuned at the supervised learning step, adapting to a specific dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization and Optimization",
"sec_num": "4.2"
},
{
"text": "We evaluated our method on the SemEval 2010 Task 8 data set 5 (Hendrickx et al., 2010) , which involves predicting the semantic relations between noun pairs in their contexts. The dataset, containing 8,000 training and 2,717 test samples, defines nine classes (Cause-Effect, Entity-Origin, etc.) for ordered relations and one class (Other) for other relations. Thus, the task can be treated as a 19class classification task. Two examples from the training set are shown below. Training example (a) is classified as Cause-Effect(E 1 , E 2 ) which denotes that E 2 is an effect caused by E 1 , while training example (b) is classified as Cause-Effect(E 2 , E 1 ) which is the inverse of Cause-Effect(E 1 , E 2 ). We report the official macro-averaged F1 scores and accuracy.",
"cite_spans": [
{
"start": 62,
"end": 86,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Dataset",
"sec_num": "5.1"
},
{
"text": "To empirically investigate the performance of our proposed method we compared it to several baselines and previously proposed models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.2"
},
{
"text": "Rand-Init. The first baseline is RelEmb itself, but without applying the learning method on the unlabeled corpus. In other words, we train the softmax classifier from Section 3.3 on the labeled training data with randomly initialized model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random and word2vec Initialization",
"sec_num": "5.2.1"
},
{
"text": "W2V-Init. The second baseline is RelEmb using word embeddings learned by word2vec. More specifically, we initialize the embedding matrices N and W with the word2vec embeddings. Related to our method, word2vec has a set of weight vectors similar to W when trained with negative sampling, and we use these weight vectors as a replacement for W. We trained the word2vec embeddings using the CBOW model with subsampling on the full Wikipedia corpus. As in our experimental settings, we fixed the learning rate to 0.025 and investigated several hyperparameter settings: the embedding dimensionality d in {50, 100, 300}, the context size c in {1, 3, 9}, and the number of negative samples k in {5, 15, 25}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random and word2vec Initialization",
"sec_num": "5.2.1"
},
{
"text": "A simple approach to the relation classification task is to use SVMs with standard binary bag-of-words features. The bag-of-words features included the noun pairs and words between, before, and after the pairs, and we used LIBLINEAR 6 as our classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVM-Based Systems",
"sec_num": "5.2.2"
},
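The SVM baseline's feature extraction can be sketched as follows; the feature-name prefixes (`noun1=`, `between=`, etc.) are illustrative choices, not the exact feature strings fed to LIBLINEAR.

```python
def bow_features(tokens, e1, e2):
    """Binary bag-of-words features for one relation instance.

    tokens: the sentence as a list of words.
    e1, e2: indices of the noun pair, with e1 < e2.
    """
    feats = set()
    feats.add("noun1=" + tokens[e1])
    feats.add("noun2=" + tokens[e2])
    # Words between, before, and after the noun pair, as in Section 5.2.2.
    feats.update("between=" + w for w in tokens[e1 + 1:e2])
    feats.update("before=" + w for w in tokens[:e1])
    feats.update("after=" + w for w in tokens[e2 + 1:])
    return feats
```

Each feature would then be mapped to a binary dimension of the LIBLINEAR input vector.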
{
"text": "Socher et al. (2012) used Recursive Neural Network (RNN) models to classify the relations. Subsequently, Ebrahimi and Dou (2015) and Hashimoto et al. (2013) proposed RNN models to better handle the relations. These methods rely on syntactic parse trees. Yu et al. (2014) introduced their novel Factor-based Compositional Model (FCM) and presented results from several model variants, the best performing being FCM EMB and FCM FULL . The former uses only word embedding information, while the latter relies on dependency paths and NE features in addition to word embeddings. Zeng et al. (2014) used a Convolutional Neural Network (CNN) with WordNet hypernyms. Noteworthy in relation to the RNN-based methods, the CNN model does not rely on parse trees. More recently, dos Santos et al. (2015) have introduced CR-CNN by extending the CNN model and achieved the best result to date. The key point of CR-CNN is that it improves the classification score by omitting the noisy class \"Other\" in the dataset described in Section 5.1. We call CR-CNN using the \"Other\" class CR-CNN Other and CR-CNN omitting the class CR-CNN Best .",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "Ebrahimi and Dou (2015)",
"ref_id": "BIBREF9"
},
{
"start": 130,
"end": 153,
"text": "Hashimoto et al. (2013)",
"ref_id": "BIBREF14"
},
{
"start": 251,
"end": 267,
"text": "Yu et al. (2014)",
"ref_id": "BIBREF34"
},
{
"start": 568,
"end": 586,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "5.2.3"
},
{
"text": "The scores on the test set for SemEval 2010 Task 8 are shown in Table 1 . RelEmb achieves an F1 score of 82.8%, which is better than those of almost all compared models and comparable to the previous state of the art, with the exception of CR-CNN Best . Note that RelEmb relies on neither external semantic features nor syntactic parse features 7 . Furthermore, RelEmb FULL achieves an F1 score of 83.5%. We calculated a confidence interval (82.0, 84.9) (p < 0.05) using bootstrap resampling (Noreen, 1989) .",
"cite_spans": [
{
"start": 469,
"end": 483,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
{
"text": "RelEmb significantly outperforms not only the Rand-Init baseline, but also the W2V-Init baseline. [Table 1 residue -- model, resources used, F1/accuracy: (Rink and Harabagiu, 2010): paraphrases, TextRunner, Google n-grams, etc.; CR-CNN Best (dos Santos et al., 2015): embeddings, word position embeddings, 84.1 / n/a; FCM FULL (Yu et al., 2014): embeddings, dependency paths, NE, 83.0 / n/a; CR-CNN Other (dos Santos et al., 2015): embeddings, word position embeddings, 82.7 / n/a; CRNN (Ebrahimi and Dou, 2015): embeddings, parse trees, WordNet, NE, POS, 82.7 / n/a; CNN (Zeng et al., 2014): embeddings, WordNet, 82.7 / n/a; MVRNN (Socher et al., 2012): embeddings, parse trees, WordNet, NE, POS, 82.4 / n/a; FCM EMB (Yu et al., 2014): embeddings, 80.6 / n/a; RNN (Hashimoto et al., 2013): embeddings, parse trees, phrase categories, etc., 79.4 / n/a.] These results show that our task-specific word embeddings are more useful than those trained using window-based contexts. A point that we would like to emphasize is that the baselines are unexpectedly strong. As was noted by Wang and Manning (2012) , we should carefully implement strong baselines and see whether complex models can outperform these baselines.",
"cite_spans": [
{
"start": 98,
"end": 124,
"text": "(Rink and Harabagiu, 2010)",
"ref_id": "BIBREF29"
},
{
"start": 183,
"end": 208,
"text": "(dos Santos et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 266,
"end": 283,
"text": "(Yu et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 420,
"end": 444,
"text": "(Ebrahimi and Dou, 2015)",
"ref_id": "BIBREF9"
},
{
"start": 502,
"end": 521,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 559,
"end": 580,
"text": "(Socher et al., 2012)",
"ref_id": "BIBREF30"
},
{
"start": 642,
"end": 659,
"text": "(Yu et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 686,
"end": 710,
"text": "(Hashimoto et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 996,
"end": 1019,
"text": "Wang and Manning (2012)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the Baselines",
"sec_num": "5.3.1"
},
{
"text": "RelEmb performs much better than the bag-ofwords-based SVM. This is not surprising given that we use a large unannotated corpus and embeddings with a large number of parameters. RelEmb also outperforms the SVM system of Rink and Harabagiu (2010) , which demonstrates the effectiveness of our task-specific word embeddings, despite our only requirement being a large unannotated corpus and a POS tagger.",
"cite_spans": [
{
"start": 220,
"end": 245,
"text": "Rink and Harabagiu (2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with SVM-Based Systems",
"sec_num": "5.3.2"
},
{
"text": "RelEmb outperforms the RNN models. In our preliminary experiments, we found some erroneous parse trees when computing vector representations with the RNN-based models, and such parsing errors might hamper their performance. FCM FULL , which relies on dependency paths and NE features, achieves a better score than RelEmb. Without such features, RelEmb outperforms FCM EMB by a large margin. By incorporating external resources, RelEmb FULL outperforms FCM FULL .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Neural Network Models",
"sec_num": "5.3.3"
},
{
"text": "RelEmb compares favorably to CR-CNN Other , despite our method being less computationally expensive than CR-CNN Other . When classifying an instance, the number of floating-point multiplications is 4d(2 + c)L in our method, since it requires only one matrix-vector product for the softmax classifier, as described in Section 3.3. Here, c is the window size, d is the word embedding dimensionality, and L is the number of classes. In CR-CNN Other , the number is Dc(d + 2d \u2032 )N + DL, where D is the dimensionality of the convolution layer, d \u2032 is the position embedding dimensionality, and N is the average length of the input sentences. We omit the cost of the hyperbolic tangent function in CR-CNN Other for simplicity. Using the best hyperparameter settings, the number is roughly 3.8 \u00d7 10^4 in our method and 1.6 \u00d7 10^7 in CR-CNN Other , assuming N is 10. dos Santos et al. (2015) also boosted the score of CR-CNN Other by omitting the noisy class \"Other\" with a ranking-based classifier, achieving the best score (CR-CNN Best ). Our results might also be improved by the same technique, but it is dataset-dependent, so we did not incorporate it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Neural Network Models",
"sec_num": "5.3.3"
},
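The multiplication counts above can be reproduced directly. The RelEmb figure uses the best setting reported in the analysis section (d = 100, c = 3) with L = 19 classes; the CR-CNN values for D, d, and d\u2032 below are assumptions chosen only to reproduce the reported order of magnitude, not numbers confirmed by this paper.

```python
def relemb_mults(d, c, L):
    # 4d(2 + c)L: one matrix-vector product for the softmax classifier.
    return 4 * d * (2 + c) * L

def crcnn_mults(D, c, d, d_prime, N, L):
    # Dc(d + 2d')N + DL: convolution over an N-word sentence plus scoring.
    return D * c * (d + 2 * d_prime) * N + D * L

relemb = relemb_mults(d=100, c=3, L=19)                     # 38,000, i.e. ~3.8e4
crcnn = crcnn_mults(D=1000, c=3, d=400, d_prime=70, N=10, L=19)  # ~1.6e7
```

With these assumed settings, CR-CNN needs several hundred times more multiplications per instance.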
{
"text": "We perform an analysis of the training procedure, focusing on RelEmb. [Table 3 caption: Cross-validation results for the W2V-Init.] In Tables 2 and 3, we report",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 87,
"text": "In Tables 2 and 3, we",
"ref_id": "TABREF5"
},
{
"start": 88,
"end": 95,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Training Settings",
"sec_num": "5.4"
},
{
"text": "the classification results using 10-fold cross validation on the training set. The same split is used for each setting, so all results are comparable to each other. The best settings from the cross validation are used to produce the results reported in Table 1 . Table 2 shows F1 scores obtained by RelEmb. The results for d = 50, 100 show that RelEmb benefits from relatively large context sizes; the n-gram embeddings in RelEmb capture richer information with c = 3 than with c = 1. Relatively large numbers of negative samples also slightly boost the scores. Contrary to these trends, the score does not improve with d = 300. We use the best setting (c = 3, d = 100, k = 25) for the remaining analysis. We note that RelEmb FULL achieves an F1 score of 82.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 262,
"end": 269,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effects of Tuning Hyperparameters",
"sec_num": "5.4.1"
},
{
"text": "We also performed similar experiments for the W2V-Init baseline, and the results are shown in Table 3 . In this case, the number of negative samples does not affect the scores, and the best score is achieved with c = 1. As discussed in Bansal et al. (2014) , small context sizes capture syntactic similarity between words rather than topical similarity. This result indicates that syntactic similarity is more important than topical similarity for this task. Compared to the word2vec embeddings, our embeddings capture not only local context information using word order, but also long-range co-occurrence information, by being tailored for the specific task.",
"cite_spans": [
{
"start": 234,
"end": 254,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Tuning Hyperparameters",
"sec_num": "5.4.1"
},
{
"text": "As described in Section 3.2, we concatenate three kinds of feature vectors, g n , g in , and g out , for supervised learning. Table 4 shows classification scores for ablation tests using 10-fold cross validation. We also provide a score using a simplified version of g in , where the feature vector g \u2032 in is computed by averaging the word embeddings [W(w in i ); W(w in i )] of the words between the noun pairs. This feature vector g \u2032 in then serves as a bag-of-words feature. Table 4 clearly shows that the averaged n-gram embeddings contribute the most to the semantic relation classification performance. The difference between the scores of g in and g \u2032 in shows the effectiveness of our averaged n-gram embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 4",
"ref_id": null
},
{
"start": 478,
"end": 485,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Tests",
"sec_num": "5.4.2"
},
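The contrast between the plain bag-of-words average g\u2032_in and the averaged n-gram feature g_in discussed above can be illustrated with a minimal sketch. Here an n-gram embedding is approximated by averaging the word vectors in each window, which is only a stand-in for the paper's learned n-gram embeddings.

```python
import numpy as np

def g_in_simple(W, between_ids):
    """g'_in: plain average of the word embeddings between the noun pair."""
    return np.mean(W[between_ids], axis=0)

def g_in_ngram(W, between_ids, n=3):
    """g_in (sketch): average over all n-gram windows between the pair.

    Each window's embedding is approximated here by the mean of its words.
    """
    grams = [np.mean(W[between_ids[i:i + n]], axis=0)
             for i in range(len(between_ids) - n + 1)]
    return np.mean(grams, axis=0)
```

The n-gram variant exposes word-order-sensitive context that the plain average discards once the window embeddings are learned rather than averaged, which is what the ablation in Table 4 measures.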
{
"text": "At the supervised learning step we use dropout to regularize our model. Without dropout, the F1 score drops from 82.2% to 81.3% on the training set using 10-fold cross validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Dropout",
"sec_num": "5.4.3"
},
{
"text": "As described in Section 3.1, we have the noun-specific embeddings N as well as the standard word embeddings W. We evaluated the learned embeddings using a word-level semantic evaluation task called WordSim-353 (Finkelstein et al., 2001 ). This dataset consists of 353 pairs of nouns, and each pair has an averaged human rating which corresponds to a semantic similarity score. Evaluation is performed by measuring Spearman's rank correlation between the human ratings and the cosine similarity scores of the embeddings. Table 5 shows the evaluation results. We used the best settings reported above; note that our method is designed for relation classification, and it is not clear how to tune the hyperparameters for the word similarity task. As shown in Table 5 , the noun-specific embeddings perform better than the standard embeddings in our method, which indicates that the noun-specific embeddings capture more useful information for measuring the semantic similarity between nouns. The performance of the noun-specific embeddings is roughly the same as that of the word2vec embeddings.",
"cite_spans": [
{
"start": 209,
"end": 234,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 518,
"end": 525,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance on a Word Similarity Task",
"sec_num": "5.4.4"
},
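The WordSim-353 protocol described above — Spearman's rank correlation between averaged human ratings and embedding cosine similarities — can be sketched as follows; ties in the rankings are ignored for simplicity.

```python
import numpy as np

def rank(xs):
    """Simple ranks 0..n-1 (ties are not averaged in this sketch)."""
    order = np.argsort(xs)
    ranks = np.empty(len(xs))
    ranks[order] = np.arange(len(xs))
    return ranks

def spearman(human_ratings, cosine_sims):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    ra = rank(np.asarray(human_ratings, dtype=float))
    rb = rank(np.asarray(cosine_sims, dtype=float))
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

In the actual evaluation, `cosine_sims` would hold the cosine similarity of each noun pair's two embedding vectors.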
{
"text": "Using the n-gram embeddings h i in Eq. (5), we inspect which n-grams are relevant to each relation class after the supervised learning step of RelEmb. When the context size c is 3, we can use at most 7-grams. The learned weight matrix S in Section 3.3 is used to detect the most relevant n-grams for each class. More specifically, for each n-gram embedding (n = 1, 3) in the training set, we compute the dot product between the n-gram embedding and the corresponding components in S. We then select the pairs of n-grams and class labels with the highest scores. In Table 6 we show the top five n-grams for six classes. These results clearly show that the n-gram embeddings capture salient syntactic patterns which are useful for the relation classification task.",
"cite_spans": [],
"ref_spans": [
{
"start": 563,
"end": 570,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Qualitative Analysis on the Embeddings",
"sec_num": "5.5"
},
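The inspection procedure above amounts to scoring every n-gram embedding against each class's row of S and keeping the highest-scoring pairs; a minimal sketch with toy embeddings (the n-gram strings and class names are illustrative):

```python
import numpy as np

def top_ngrams(ngram_embs, S, classes, top=5):
    """Rank (n-gram, class) pairs by the dot product used in Section 5.5.

    ngram_embs: dict mapping an n-gram string to its embedding vector.
    S: (num_classes, dim) weight matrix of the softmax classifier.
    classes: class-label names, aligned with the rows of S.
    """
    scored = [(float(S[c] @ vec), gram, classes[c])
              for gram, vec in ngram_embs.items()
              for c in range(len(classes))]
    # Highest dot products first: the most class-relevant n-grams.
    return sorted(scored, key=lambda t: -t[0])[:top]
```

Applied to the learned model, this is what produces lists such as the top five n-grams per class in Table 6.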
{
"text": "We have presented a method for learning word embeddings specifically designed for relation classification. The word embeddings are trained using large unlabeled corpora to capture lexical features for relation classification. On a well-established semantic relation classification task our method significantly outperforms the baseline based on word2vec. Our method also compares favorably to previous state-of-the-art models that rely on syntactic parsers and external semantic resources, despite our method requiring only access to an unannotated corpus and a POS tagger. For future work, we will investigate how well our method performs on other domains and datasets, and how relation labels can help when learning embeddings in a semi-supervised learning setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "https://code.google.com/p/word2vec/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://dumps.wikimedia.org/enwiki/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Despite Enju being a syntactic parser, we only use the POS tagger component. The accuracy of the POS tagger is about 97.2% on the WSJ corpus. 4 The training data, the training code, and the learned model parameters used in this paper are publicly available at http://www.logos.t.u-tokyo.ac.jp/\u02dchassy/publications/conll2015/ 5 http://docs.google.com/View?docid=dfvxd49s_36c28v9pmw.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.csie.ntu.edu.tw/\u02dccjlin/liblinear/. 7 While we use a POS tagger to locate noun pairs, RelEmb does not explicitly use POS features at the supervised learning step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tailoring Continuous Word Representations for Dependency Parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring Continuous Word Representations for Dependency Parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 809-815.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Event Role Extraction using Domain-Relevant Word Representations",
"authors": [
{
"first": "Emanuela",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "Romaric",
"middle": [],
"last": "Besan\u00e7on",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Grau",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1852--1857",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuela Boros, Romaric Besan\u00e7on, Olivier Ferret, and Brigitte Grau. 2014. Event Role Extraction using Domain-Relevant Word Representations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1852-1857.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Shortest Path Dependency Kernel for Relation Extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "724--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A Shortest Path Dependency Kernel for Relation Extraction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 724-731.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Feature Embedding for Dependency Parsing",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "816--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Chen, Yue Zhang, and Min Zhang. 2014. Feature Embedding for Dependency Parsing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 816-826.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Broad-Coverage Sense Disambiguation and Information Extraction with a Supersense Sequence Tagger",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "594--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-Coverage Sense Disambiguation and Information Extraction with a Supersense Sequence Tagger. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 594-602.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural Language Processing (Almost) from Scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Classifying Relations by Ranking with Convolutional Neural Networks",
"authors": [
{
"first": "Cicero",
"middle": [],
"last": "Nogueira Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Joint Conference of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying Relations by Ranking with Convolutional Neural Networks. In Proceedings of the Joint Conference of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing. to appear.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Chain Based RNN for Relation Classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1244--1249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi and Dejing Dou. 2015. Chain Based RNN for Relation Classification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1244-1249.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Placing Search in Context: The Concept Revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Gabrilovich",
"middle": [],
"last": "Evgenly",
"suffix": ""
},
{
"first": "Matias",
"middle": [],
"last": "Yossi",
"suffix": ""
},
{
"first": "Rivlin",
"middle": [],
"last": "Ehud",
"suffix": ""
},
{
"first": "Solan",
"middle": [],
"last": "Zach",
"suffix": ""
},
{
"first": "Wolfman",
"middle": [],
"last": "Gadi",
"suffix": ""
},
{
"first": "Ruppin",
"middle": [],
"last": "Eytan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Tenth International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Gabrilovich Evgenly, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Ruppin Eytan. 2001. Placing Search in Context: The Concept Revisited. In Proceedings of the Tenth International World Wide Web Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SemEval-2007 Task 04: Classification of Semantic Relations between Nominals",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Vivi",
"middle": [],
"last": "Nastase",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. SemEval-2007 Task 04: Classification of Semantic Relations between Nominals. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13-18.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Experimental Support for a Categorical Compositional Distributional Model of Meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental Support for a Categorical Compositional Distributional Model of Meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394-1404.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Revisiting Embedding Features for Simple Semi-supervised Learning",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Revisiting Embedding Features for Simple Semi-supervised Learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 110-120.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simple Customization of Recursive Neural Networks for Semantic Relation Classification",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Chikayama",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1372--1376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. 2013. Simple Customization of Recursive Neural Networks for Semantic Relation Classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1372-1376.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Jointly Learning Word Representations and Composition Functions Using Predicate-Argument Structures",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1544--1555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2014. Jointly Learning Word Representations and Composition Functions Using Predicate-Argument Structures. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1544-1555.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "33--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations be- tween Pairs of Nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33-38.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep Recursive Neural Networks for Compositionality in Language",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27",
"volume": "",
"issue": "",
"pages": "2096--2104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozan Irsoy and Claire Cardie. 2014. Deep Recursive Neural Networks for Compositionality in Language. In Advances in Neural Information Processing Sys- tems 27, pages 2096-2104.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Prior Disambiguation of Word Tensors for Constructing Sentence Vectors",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1590--1601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2013. Prior Disambiguation of Word Tensors for Con- structing Sentence Vectors. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1590-1601.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distributed Representations of Sentences and Documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14), ICML '14",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), ICML '14, pages 1188-1196.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural Word Embedding as Implicit Matrix Factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural Word Embedding as Implicit Matrix Factorization. In Ad- vances in Neural Information Processing Systems 27, pages 2177-2185.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Repre- sentations in Vector Space. In Proceedings of Work- shop at the International Conference on Learning Representations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed Represen- tations of Words and Phrases and their Composition- ality. In Advances in Neural Information Processing Systems 26, pages 3111-3119.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Feature Forest Models for Probabilistic HPSG Parsing",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "35--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2008. Feature For- est Models for Probabilistic HPSG Parsing. Compu- tational Linguistics, 34(1):35-80, March.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning word embeddings efficiently with noise-contrastive estimation",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "2265--2273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Pro- cessing Systems 26, pages 2265-2273.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Employing Word Representations and Regularization for Domain Adaptation of Relation Extraction",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2014. Em- ploying Word Representations and Regularization for Domain Adaptation of Relation Extraction. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 68-74.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Computer-Intensive Methods for Testing Hypotheses: An Introduction",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley- Interscience.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Global Belief Recursive Neural Networks",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2888--2896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global Belief Recursive Neural Networks. In Advances in Neural Information Pro- cessing Systems 27, pages 2888-2896.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "UTD: Classifying Semantic Relations by Combining Lexical and Semantic Resources",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Rink",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "256--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Rink and Sanda Harabagiu. 2010. UTD: Clas- sifying Semantic Relations by Combining Lexical and Semantic Resources. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 256-259.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semantic Compositionality through Recursive Matrix-Vector Spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Semantic Compo- sitionality through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 1201-1211.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1555--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning Sentiment- Specific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1555- 1565.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Word Representations: A Simple and General Method for Semi-Supervised Learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-Supervised Learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Baselines and Bigrams: Simple, Good Sentiment and Topic Classification",
"authors": [
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida Wang and Christopher Manning. 2012. Baselines and Bigrams: Simple, Good Sentiment and Topic Classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 90-94.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Factor-based Compositional Embedding Models",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Workshop on Learning Semantics at the 2014 Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Matthew R. Gormley, and Mark Dredze. 2014. Factor-based Compositional Embedding Models. In Proceedings of Workshop on Learning Semantics at the 2014 Conference on Neural Information Pro- cessing Systems.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Relation Classification via Convolutional Deep Neural Network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation Classification via Convolutional Deep Neural Network. In Proceed- ings of COLING 2014, the 25th International Con- ference on Computational Linguistics: Technical Papers, pages 2335-2344.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A Composite Kernel to Extract Relations between Entities with Both Flat and Structured Features",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "825--832",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Zhang, Jie Zhang, Jian Su, and GuoDong Zhou. 2006. A Composite Kernel to Extract Relations be- tween Entities with Both Flat and Structured Fea- tures. In Proceedings of the 21st International Con- ference on Computational Linguistics and 44th An- nual Meeting of the Association for Computational Linguistics, pages 825-832.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "show how tuning the hyperparameters of our method and word2vec affects c d k = 5 k = 15 k =",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF2": {
"text": "Scores on the test set for SemEval 2010 Task 8.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "Evaluation on the WordSim-353 dataset.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF5": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>and 3 since our method</td></tr></table>"
},
"TABREF6": {
"text": "Top five unigrams and trigrams with the highest scores for six classes.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}