{
"paper_id": "D17-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:18:10.404897Z"
},
"title": "Hierarchical Embeddings for Hypernymy Detection and Directionality",
"authors": [
{
"first": "Kim",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "nguyenkh@ims.uni-stuttgart.de"
},
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel neural model HyperVec to learn hierarchical embeddings for hypernymy detection and directionality. While previous embeddings have shown limitations on prototypical hypernyms, HyperVec represents an unsupervised measure where embeddings are learned in a specific order and capture the hypernym-hyponym distributional hierarchy. Moreover, our model is able to generalize over unseen hypernymy pairs, when using only small sets of training data, and by mapping to other languages. Results on benchmark datasets show that HyperVec outperforms both state-of-theart unsupervised measures and embedding models on hypernymy detection and directionality, and on predicting graded lexical entailment.",
"pdf_parse": {
"paper_id": "D17-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel neural model HyperVec to learn hierarchical embeddings for hypernymy detection and directionality. While previous embeddings have shown limitations on prototypical hypernyms, HyperVec represents an unsupervised measure where embeddings are learned in a specific order and capture the hypernym-hyponym distributional hierarchy. Moreover, our model is able to generalize over unseen hypernymy pairs, when using only small sets of training data, and by mapping to other languages. Results on benchmark datasets show that HyperVec outperforms both state-of-theart unsupervised measures and embedding models on hypernymy detection and directionality, and on predicting graded lexical entailment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hypernymy represents a major semantic relation and a key organization principle of semantic memory (Miller and Fellbaum, 1991; Murphy, 2002) . It is an asymmetric relation between two terms, a hypernym (superordinate) and a hyponym (subordiate), as in animal-bird and flower-rose, where the hyponym necessarily implies the hypernym, but not vice versa. From a computational point of view, automatic hypernymy detection is useful for NLP tasks such as taxonomy creation (Snow et al., 2006; Navigli et al., 2011) , recognizing textual entailment (Dagan et al., 2013) , and text generation (Biran and McKeown, 2013) , among many others.",
"cite_spans": [
{
"start": 99,
"end": 126,
"text": "(Miller and Fellbaum, 1991;",
"ref_id": "BIBREF21"
},
{
"start": 127,
"end": 140,
"text": "Murphy, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 469,
"end": 488,
"text": "(Snow et al., 2006;",
"ref_id": "BIBREF38"
},
{
"start": 489,
"end": 510,
"text": "Navigli et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 544,
"end": 564,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 587,
"end": 612,
"text": "(Biran and McKeown, 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two families of approaches to identify and discriminate hypernyms are predominent in NLP, both of them relying on word vector representa-tions. Distributional count approaches make use of either directionally unsupervised measures or of supervised classification methods. Unsupervised measures exploit the distributional inclusion hypothesis (Geffet and Dagan, 2005; Zhitomirsky-Geffet and Dagan, 2009) , or the distributional informativeness hypothesis (Santus et al., 2014; Rimell, 2014) . These measures assign scores to semantic relation pairs, and hypernymy scores are expected to be higher than those of other relation pairs. Typically, Average Precision (AP) (Kotlerman et al., 2010 ) is applied to rank and distinguish between the predicted relations. Supervised classification methods represent each pair of words as a single vector, by using the concatenation or the element-wise difference of their vectors (Baroni et al., 2012; Roller et al., 2014; Weeds et al., 2014) . The resulting vector is fed into a Support Vector Machine (SVM) or into Logistic Regression (LR), to predict hypernymy. Across approaches, Shwartz et al. (2017) demonstrated that there is no single unsupervised measure which consistently deals well with discriminating hypernymy from other semantic relations. Furthermore, Levy et al. (2015) showed that supervised methods memorize prototypical hypernyms instead of learning a relation between two words.",
"cite_spans": [
{
"start": 342,
"end": 366,
"text": "(Geffet and Dagan, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 367,
"end": 402,
"text": "Zhitomirsky-Geffet and Dagan, 2009)",
"ref_id": "BIBREF50"
},
{
"start": 454,
"end": 475,
"text": "(Santus et al., 2014;",
"ref_id": "BIBREF31"
},
{
"start": 476,
"end": 489,
"text": "Rimell, 2014)",
"ref_id": "BIBREF28"
},
{
"start": 666,
"end": 689,
"text": "(Kotlerman et al., 2010",
"ref_id": "BIBREF13"
},
{
"start": 918,
"end": 939,
"text": "(Baroni et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 940,
"end": 960,
"text": "Roller et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 961,
"end": 980,
"text": "Weeds et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 1122,
"end": 1143,
"text": "Shwartz et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 1306,
"end": 1324,
"text": "Levy et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approaches of hypernymy-specific embeddings utilize neural models to learn vector representations for hypernymy. Yu et al. (2015) proposed a supervised method to learn term embeddings for hypernymy identification, based on pre-extracted hypernymy pairs. Recently, Tuan et al. (2016) proposed a dynamic weighting neural model to learn term embeddings in which the model encodes not only the information of hypernyms vs. hyponyms, but also their contextual information. The performance of this family of models is typically evaluated by using an SVM to discriminate hypernymy from other relations.",
"cite_spans": [
{
"start": 113,
"end": 129,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF49"
},
{
"start": 254,
"end": 282,
"text": "Recently, Tuan et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel neural model HyperVec to learn hierarchical embeddings that (i) discriminate hypernymy from other relations (detection task), and (ii) distinguish between the hypernym and the hyponym in a given hypernymy relation pair (directionality task). Our model learns to strengthen the distributional similarity of hypernym pairs in comparison to other relation pairs, by moving hyponym and hypernym vectors close to each other. In addition, we generate a distributional hierarchy between hyponyms and hypernyms. Relying on these two new aspects of hypernymy distributions, the similarity of hypernym pairs receives higher scores than the similarity of other relation pairs; and the distributional hierarchy of hyponyms and hypernyms indicates the directionality of hypernymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model is inspired by the distributional inclusion hypothesis, that prominent context words of hyponyms are expected to appear in a subset of the hypernym contexts. We assume that each context word which appears with both a hyponym and its hypernym can be used as an indicator to determine which of the two words is semantically more general: Common context word vectors which represent distinctive characteristics of a hyponym are expected to be closer to the hyponym vector than to its hypernym vector. For example, the context word flap is more characteristic for a bird than for its hypernym animal; hence, the vector of flap should be closer to the vector of bird than to the vector of animal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our HyperVec model on both unsupervised and supervised hypernymy detection and directionality tasks. In addition, we apply the model to the task of graded lexical entailment (Vuli\u0107 et al., 2016) , and we assess the capability of HyperVec on generalizing hypernymy by mapping to German and Italian. Results on benchmark datasets of hypernymy show that the hierarchical embeddings outperform state-of-the-art measures and previous embedding models. Furthermore, the implementation of our models is made publicly available. 1",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Vuli\u0107 et al., 2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised hypernymy measures: A variety of directional measures for unsupervised hypernymy detection (Weeds and Weir, 2003; Weeds et al., 2004; Clarke, 2009; Kotlerman et al., 2010; 1 www.ims.uni-stuttgart.de/data/hypervec Lenci and Benotto, 2012) all rely on some variation of the distributional inclusion hypothesis: If u is a semantically narrower term than v, then a significant number of salient distributional features of u is expected to be included in the feature vector of v as well. In addition, Santus et al. (2014) proposed the distributional informativeness hypothesis, that hypernyms tend to be less informative than hyponyms, and that they occur in more general contexts than their hyponyms. All of these approaches represent words as vectors in distributional semantic models (Turney and Pantel, 2010), relying on the distributional hypothesis (Harris, 1954; Firth, 1957) . For evaluation, these directional models use the AP measure to assess the proportion of hypernyms at the top of a score-sorted list. In a different vein, Kiela et al. (2015) introduced three unsupervised methods drawn from visual properties of images to determine a concept's generality in hypernymy tasks.",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "(Weeds and Weir, 2003;",
"ref_id": "BIBREF46"
},
{
"start": 127,
"end": 146,
"text": "Weeds et al., 2004;",
"ref_id": "BIBREF47"
},
{
"start": 147,
"end": 160,
"text": "Clarke, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 161,
"end": 184,
"text": "Kotlerman et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 185,
"end": 185,
"text": "",
"ref_id": null
},
{
"start": 227,
"end": 251,
"text": "Lenci and Benotto, 2012)",
"ref_id": "BIBREF15"
},
{
"start": 510,
"end": 530,
"text": "Santus et al. (2014)",
"ref_id": "BIBREF31"
},
{
"start": 864,
"end": 878,
"text": "(Harris, 1954;",
"ref_id": "BIBREF11"
},
{
"start": 879,
"end": 891,
"text": "Firth, 1957)",
"ref_id": "BIBREF9"
},
{
"start": 1048,
"end": 1067,
"text": "Kiela et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
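The distributional inclusion hypothesis underlying these directional measures can be illustrated with a small sketch. The feature weights below are hypothetical toy values, and the score is a simplified WeedsPrec-style inclusion measure, not any specific published variant:

```python
def inclusion_score(narrow, broad):
    """Fraction of the narrower term's feature mass that also appears
    among the broader term's features; 1.0 means full inclusion."""
    total = sum(narrow.values())
    covered = sum(w for f, w in narrow.items() if f in broad)
    return covered / total if total else 0.0

# Toy distributional features (context word -> association weight).
bird = {"flap": 3.0, "nest": 2.0, "sing": 1.0}
animal = {"flap": 1.0, "nest": 1.0, "sing": 0.5, "rights": 2.0}
```

On these toy values the score is asymmetric: bird's features are fully included in animal's (score 1.0), but not vice versa, which is exactly the signal the directional measures exploit.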
{
"text": "The studies in this area are based on word embeddings which represent words as low-dimensional and realvalued vectors (Mikolov et al., 2013b; Pennington et al., 2014) . Each hypernymy pair is encoded by some combination of the two word vectors, such as concatenation (Baroni et al., 2012) or difference (Roller et al., 2014; Weeds et al., 2014) . Hypernymy is distinguished from other relations by using a classification approach, such as SVM or LR. Because word embeddings are trained for similar and symmetric vectors, it is however unclear whether the supervised methods do actually learn the asymmetry in hypernymy (Levy et al., 2015) .",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF19"
},
{
"start": 142,
"end": 166,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 267,
"end": 288,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 303,
"end": 324,
"text": "(Roller et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 325,
"end": 344,
"text": "Weeds et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 619,
"end": 638,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised hypernymy methods:",
"sec_num": null
},
{
"text": "Hypernymy-specific embeddings: These approaches are closest to our work. Yu et al. (2015) proposed a dynamic distance-margin model to learn term embeddings that capture properties of hypernymy. The neural model is trained on the taxonomic relation data which is pre-extracted. The resulting term embeddings are fed to an SVM classifier to predict hypernymy. However, this model only learns term pairs without considering their contexts, leading to a lack of generalization for term embeddings. Tuan et al. (2016) introduced a dynamic weighting neural network to learn term embeddings that encode information about hypernymy and also about their contexts, considering all words between a hypernym and its hyponym in a sentence. The proposed model is trained on a set of hypernym relations extracted from WordNet (Miller, 1995) . The embeddings are applied as features to detect hypernymy, using an SVM classifier. Tuan et al. (2016) handles the drawback of the approach by Yu et al. (2015) , considering the contextual information between two terms; however the method still is not able to determine the directionality of a hypernym pair. Vendrov et al. (2016) proposed a method to encode order into learned distributed representations, to explicitly model partial order structure of the visual-semantic hierarchy or the hierarchy of hypernymy in WordNet. The resulting vectors are used to predict the transitive hypernym relations in WordNet.",
"cite_spans": [
{
"start": 73,
"end": 89,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF49"
},
{
"start": 494,
"end": 512,
"text": "Tuan et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 811,
"end": 825,
"text": "(Miller, 1995)",
"ref_id": "BIBREF20"
},
{
"start": 972,
"end": 988,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF49"
},
{
"start": 1138,
"end": 1159,
"text": "Vendrov et al. (2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised hypernymy methods:",
"sec_num": null
},
{
"text": "In this section, we present our model of hierarchical embeddings HyperVec. Section 3.1 describes how we learn the embeddings for hypernymy, and Section 3.2 introduces the unsupervised measure HyperScore that is applied to the hypernymy tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Embeddings",
"sec_num": "3"
},
{
"text": "Our approach makes use of a set of hypernyms which could be obtained from either exploiting the transitivity of the hypernymy relation (Fallucchi and Zanzotto, 2011) or lexical databases, to learn hierarchical embeddings. We rely on Word-Net, a large lexical database of English (Fellbaum, 1998) , and extract all hypernym-hyponym pairs for nouns and for verbs, including both direct and indirect hypernymy, e.g., animal-bird, birdrobin, animal-robin. Before training our model, we exclude all hypernym pairs which appear in any datasets used for evaluation.",
"cite_spans": [
{
"start": 279,
"end": 295,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Hierarchical Embeddings",
"sec_num": "3.1"
},
{
"text": "In the following, Section 3.1.1 first describes the Skip-gram model which is integrated into our model for optimization. Section 3.1.2 then describes the objective functions to train the hierarchical embeddings for hypernymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Hierarchical Embeddings",
"sec_num": "3.1"
},
{
"text": "The Skip-gram model is a word embeddings method suggested by Mikolov et al. (2013b) . Levy and Goldberg (2014) introduced a variant of the Skip-gram model with negative sampling (SGNS), in which the objective function is defined as follows:",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF19"
},
{
"start": 86,
"end": 110,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-gram Model",
"sec_num": "3.1.1"
},
{
"text": "J SGN S = w\u2208V W c\u2208V C J (w,c) (1) J (w,c) = #(w, c) log \u03c3( w, c) + k \u2022 E c N \u223cP D [log \u03c3(\u2212 w, c N )] (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-gram Model",
"sec_num": "3.1.1"
},
{
"text": "where the skip-gram with negative sampling is trained on a corpus of words w \u2208 V W and their contexts c \u2208 V C , with V W and V C the word and context vocabularies, respectively. The collection of observed words and context pairs is denoted as D; the term #(w, c) refers to the number of times the pair (w, c) appeared in D; the term \u03c3(x) is the sigmoid function; the term k is the number of negative samples and the term c N is the sampled context, drawn according to the empirical unigram distribution P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-gram Model",
"sec_num": "3.1.1"
},
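As a concrete sketch of the per-pair SGNS term in Equation (2): the vectors and counts below are hypothetical toy values, and the expectation over the noise distribution P_D is approximated by an explicit list of sampled negative contexts:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgns_term(w_vec, c_vec, count_wc, k, neg_context_vecs):
    """J_(w,c) = #(w,c) * log sigma(w.c) + k * E[log sigma(-w.c_N)],
    with the expectation approximated by the given negative samples."""
    dot = sum(a * b for a, b in zip(w_vec, c_vec))
    positive = count_wc * math.log(sigmoid(dot))
    negative = sum(
        math.log(sigmoid(-sum(a * b for a, b in zip(w_vec, n))))
        for n in neg_context_vecs
    ) / len(neg_context_vecs)
    return positive + k * negative
```

Maximizing this term pushes observed (w, c) pairs toward high dot products and sampled noise contexts toward low ones.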
{
"text": "Vector representations for detecting hypernymy are usually encoded by standard first-order distributional co-occurrences. In this way, they are insufficient to differentiate hypernymy from other paradigmatic relations such as synonymy, meronymy, antonymy, etc. Incorporating directional measures of hypernymy to detect hypernymy by exploiting the common contexts of hypernym and hyponym improves this relation distinction, but still suffers from distinguishing between hypernymy and meronymy. Our novel approach presents two solutions to deal with these challenges. First of all, the embeddings are learned in a specific order, such that the similarity score for hypernymy is higher than the similarity score for other relations. For example, the hypernym pair animal-frog will be assigned a higher cosine score than the co-hyponymy pair eagle-frog. Secondly, the embeddings are learned to capture the distributional hierarchy between hyponym and hypernym, as an indicator to differentiate between hypernym and hyponym. For example, given a hyponym-hypernym pair (p, q), we can exploit the Euclidean norms of q and p to differentiate between the two words, such that the Euclidean norm of the hypernym q is larger than the Euclidean norm of the hyponym p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "Inspired by the distributional lexical contrast model in Nguyen et al. (2016) for distinguishing antonymy from synonymy, this paper proposes two objective functions to learn hierarchical embeddings for hypernymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "Before moving to the details of the two objective functions, we first define the terms as follows: W(c) refers to the set of words co-occurring with the context c in a certain window-size; H(w) denotes the set of hypernyms for the word w; the two terms H + (w, c) and H \u2212 (w, c) are drawn from H(w), and are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "H + (w, c) = {u \u2208 W(c) \u2229 H(w) : cos( w, c) \u2212 cos( u, c) \u2265 \u03b8} H \u2212 (w, c) = {v \u2208 W(c) \u2229 H(w) : cos( w, c) \u2212 cos( v, c) < \u03b8}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "where cos( x, y) stands for the cosine similarity of the two vectors x and y; \u03b8 is the margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "The set H + (w, c) contains all hypernyms of the word w that share the context c and satisfy the constraint that the cosine similarity of pair (w, c) is higher than the cosine similarity of pair (u, c) within a max-margin framework \u03b8. Similarly, the set H \u2212 (w, c) represents all hypernyms of the word w with respect to the common context c in which the cosine similarity difference between the pair (w, c) and the pair (v, c) is within a min-margin framework \u03b8. The two objective functions are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
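Under these definitions, both sets can be computed directly from cosine similarities. A minimal sketch, assuming toy 2-d vectors; the embeddings, vocabulary, and θ value here are illustrative, not the trained model:

```python
import math

def cos(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

def split_hypernyms(w, c, hypernyms_of_w, words_of_c, vec, theta):
    """H+(w,c): hypernyms u sharing context c with cos(w,c) - cos(u,c) >= theta;
    H-(w,c): the remaining hypernyms sharing context c."""
    shared = [h for h in hypernyms_of_w if h in words_of_c]
    base = cos(vec[w], vec[c])
    h_plus = [u for u in shared if base - cos(vec[u], vec[c]) >= theta]
    h_minus = [v for v in shared if base - cos(vec[v], vec[c]) < theta]
    return h_plus, h_minus

# Toy setup: "flap" is characteristic of "bird", less so of "animal".
vec = {"bird": [1.0, 0.2], "animal": [0.4, 0.9], "flap": [0.9, 0.1]}
h_plus, h_minus = split_hypernyms(
    "bird", "flap", hypernyms_of_w={"animal"}, words_of_c={"bird", "animal"},
    vec=vec, theta=0.05,
)
```

Here "flap" is much closer to "bird" than to "animal", so "animal" lands in H+(bird, flap).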
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L (w,c) = 1 #(w, u) u\u2208H + (w,c) \u2202( w, u) (3) L (v,w,c) = v\u2208H \u2212 (w,c) \u2202( v, w)",
"eq_num": "(4)"
}
],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "where the term \u2202( x, y) stands for the cosine derivative of ( x, y); and \u2202 then is optimized by the negative sampling procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "The objective function in Equation 3 minimizes the distributional difference between the hyponym w and the hypernym u by exploiting the common context c. More specifically, if the common context c is the distinctive characteristic of the hyponym w (i.e. the common context c is closer to the hyponym w than to the hypernym u), the objective function L (w,c) tries to decrease the distributional generality of hypernym u by moving w closer to u. For example, given a hypernymhyponym pair animal-bird, the context flap is a distinctive characteristic of bird, because almost every bird can flap, but not every animal can flap. Therefore, the context flap is closer to the hyponym bird than to the hypernym animal. The model then tries to move bird closer to animal in order to enforce the similarity between bird and animal, and to decrease the distributional generality of animal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "In contrast to Equation 3, the objective function in Equation 4 minimizes the distributional difference between the hyponym w and the hypernym v by exploiting the common context c, which is a distinctive characteristic of the hypernym v. In this case, the objective function L (v,w,c) tries to reduce the distributional generality of hyponym w by moving v closer to w. For example, the context word rights, a distinctive characteristic of the hypernym animal, should be closer to animal than to bird. Hence, the model tries to move the hypernym animal closer to the hyponym bird. Given that hypernymy is an asymmetric and also a hierarchical relation, where each hypernym may contain several hyponyms, our objective functions updates simultaneously both the hypernym and all of its hyponyms; therefore, our objective functions are able to capture the hierarchical relations between the hypernym and its hyponyms. Moreover, in our model, the margin framework \u03b8 plays a role in learning the hierarchy of hypernymy, and in preventing the model from minimizing the distance of synonymy or antonymy, because synonymy and antonymy share many contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
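The effect described above, pulling a hyponym and its hypernym together when a shared context is distinctive for one of them, can be illustrated with a deliberately simplified update. This is a plain linear interpolation step for intuition only; the actual model optimizes the cosine derivative within the SGNS framework:

```python
import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

def step_toward(hypo, hyper, lr=0.1):
    """Illustrative update: move the hyponym vector a small step toward
    its hypernym, which increases their cosine similarity."""
    return [p + lr * (q - p) for p, q in zip(hypo, hyper)]

bird = [1.0, 0.0]    # hypothetical hyponym vector
animal = [0.0, 1.0]  # hypothetical hypernym vector
bird_updated = step_toward(bird, animal)
```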
{
"text": "In the final step, the objective function which is used to learn the hierarchical embeddings for hypernymy combines Equations 1, 2, 3, and 4 by the objective function in Equations 5 and 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J (w,v,c) = J (w,c) + L (w,c) + L (v,w,c) (5) J = w\u2208V W c\u2208V C J (w,v,c)",
"eq_num": "(6)"
}
],
"section": "Hierarchical Hypernymy Model",
"sec_num": "3.1.2"
},
{
"text": "HyperVec is expected to show the two following properties: (i) the hyponym and the hypernym are close to each other, and (ii) there exists a distributional hierarchy between hypernyms and their hyponyms. Given a hypernymy pair (u, v) in which u is the hyponym and v is the hypernym, we propose a measure to detect hypernymy and to determine the directionality of hypernymy by using the hierarchical embeddings as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Hypernymy Measure",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "HyperScore(u, v) = cos( u, v) * v u",
"eq_num": "(7)"
}
],
"section": "Unsupervised Hypernymy Measure",
"sec_num": "3.2"
},
{
"text": "where cos( u, v) is the cosine similarity between u and v, and \u2022 is the magnitude of the vector (or the Euclidean norm). The cosine similarity is applied to distinguish hypernymy from other re-lations, due to the first property of the hierarchical embeddings, while the second property is used to decide about the directionality of hypernymy, assuming that the magnitude of the hypernym is larger than the magnitude of the hyponym. Note that the proposed hypernymy measure is unsupervised when the resource is only used to learn hierarchical embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Hypernymy Measure",
"sec_num": "3.2"
},
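Equation 7 and the directionality decision can be sketched directly. The two vectors below are hypothetical, chosen so that the hypernym has the larger norm, as the model is trained to ensure:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def hyper_score(u, v):
    """HyperScore(u, v) = cos(u, v) * ||v|| / ||u||: high when u and v
    are similar and v is the distributionally more general term."""
    return cosine(u, v) * (norm(v) / norm(u))

def predict_hypernym(u, v):
    """Directionality: the word with the larger norm is the hypernym."""
    return "second" if norm(v) > norm(u) else "first"

bird = [0.5, 0.5]    # hypothetical hyponym embedding
animal = [1.0, 1.1]  # hypothetical hypernym embedding (larger norm)
```

Note the asymmetry: swapping the arguments flips the norm ratio, so the score itself encodes direction.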
{
"text": "In this section, we first describe the experimental settings in our experiments (Section 4.1). We then evaluate the performance of HyperVec on three different tasks: i) unsupervised hypernymy detection and directionality (Section 4.2), where we assess HyperVec on ranking and classifying hypernymy; ii) supervised hypernymy detection (Section 4.3), where we apply supervised classification to detect hypernymy; iii) graded lexical entailment (Section 4.4), where we predict the strength of hypernymy pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use the ENCOW14A corpus (Sch\u00e4fer and Bildhauer, 2012; Sch\u00e4fer, 2015) with approx. 14.5 billion tokens for training the hierarchical embeddings and the default SGNS model. We train our model with 100 dimensions, a window size of 5, 15 negative samples, and 0.025 as the learning rate. The threshold \u03b8 is set to 0.05. The hypernymy resource for nouns comprises 105, 020 hyponyms, 24, 925 hypernyms, and 1, 878, 484 hyponym-hypernym pairs. The hypernymy resource for verbs consists of 11, 328 hyponyms, 4, 848 hypernyms, and 130, 350 hyponym-hypernym pairs.",
"cite_spans": [
{
"start": 27,
"end": 56,
"text": "(Sch\u00e4fer and Bildhauer, 2012;",
"ref_id": "BIBREF34"
},
{
"start": 57,
"end": 71,
"text": "Sch\u00e4fer, 2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "In this section, we assess our model on two experimental setups: i) a ranking retrieval setup that expects hypernymy pairs to have a higher similarity score than instances from other semantic relations; ii) a classification setup that requires both hypernymy detection and directionality. Shwartz et al. (2017) conducted an extensive evaluation of a large number of unsupervised distributional measures for hypernymy ranking retrieval proposed in previous work (Weeds and Weir, 2003; Santus et al., 2014; Clarke, 2009 ; Kotlerman et al., 2010; Lenci and Benotto, 2012; Santus et al., 2016) . The evaluation was performed on four semantic relation datasets: BLESS (Baroni and Lenci, 2011), WEEDS (Weeds et al., 2004) , EVALUTION (Santus et al., 2015) , and LENCI&BENOTTO (Benotto, 2015) . Table 1 describes the detail of these datasets in terms of the semantic relations and the number of instances. The Average Precision (AP) ranking measure is used to evaluate the performance of the measures.",
"cite_spans": [
{
"start": 289,
"end": 310,
"text": "Shwartz et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 461,
"end": 483,
"text": "(Weeds and Weir, 2003;",
"ref_id": "BIBREF46"
},
{
"start": 484,
"end": 504,
"text": "Santus et al., 2014;",
"ref_id": "BIBREF31"
},
{
"start": 505,
"end": 517,
"text": "Clarke, 2009",
"ref_id": "BIBREF4"
},
{
"start": 520,
"end": 543,
"text": "Kotlerman et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 544,
"end": 568,
"text": "Lenci and Benotto, 2012;",
"ref_id": "BIBREF15"
},
{
"start": 569,
"end": 589,
"text": "Santus et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 695,
"end": 715,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF47"
},
{
"start": 728,
"end": 749,
"text": "(Santus et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 770,
"end": 785,
"text": "(Benotto, 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 788,
"end": 795,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Unsupervised Hypernymy Detection and Directionality",
"sec_num": "4.2"
},
{
"text": "In comparison to the state-of-the-art unsupervised measures compared by Shwartz et al. (2017) (henceforth, baseline models), we apply our unsupervised measure HyperScore (Equation 7) to rank hypernymy against other relations. presents the results of using HyperScore vs. the best baseline models, across datasets. When detecting hypernymy among all other relations (which is the most challenging task), HyperScore significantly outperforms all baseline variants on all datasets. The strongest difference is reached on the BLESS dataset, where HyperScore achieves an improvement of 40% AP score over the best baseline model. When ranking hypernymy in comparison to a single other relation, HyperScore also improves over the baseline models, except for the event relation in the BLESS dataset. We assume that this is due to the different parts-ofspeech (adjective and noun) involved in the relation, where HyperVec fails to establish a hierarchy.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "Shwartz et al. (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Retrieval",
"sec_num": "4.2.1"
},
{
"text": "In this setup, we rely on three datasets of semantic relations, which were all used in various state-of-the-art approaches before, and brought together for hypernymy evaluation by Kiela et al. (2015) . (i) A subset of BLESS contains 1,337 hyponym-hypernym pairs. The task is to predict the directionality of hypernymy within a binary classification. Our approach requires no threshold; we only need to compare the magnitudes of the two words and to assign the hypernym label to the word with the larger magnitude. Figure 1a illustrates this magnitude comparison. (ii) WBLESS contains hyponym-hypernym pairs and reversed hypernym-hyponym pairs, plus additional holonym-meronym pairs, co-hyponyms and randomly matched nouns. For this classification we make use of our HyperScore measure, which ranks hypernymy pairs higher than other relation pairs. A threshold decides about the splitting point between the two classes: hyper vs. other. Instead of using a manually defined threshold as done by Kiela et al. (2015) , we ran 1,000 iterations, each randomly sampling only 2% of the available pairs for learning a threshold and using the remaining 98% for testing. We report average accuracy results across all iterations. Figure 1b compares the default cosine similarities between the relation pairs (as applied by SGNS) and HyperScore (as applied by HyperVec) on this task. Using HyperScore, the class \"hyper\" can clearly be distinguished from the class \"other\". (iii) BIBLESS represents the most challenging dataset; the relation pairs from WBLESS are split into three classes instead of two: hypernymy pairs, reversed hypernymy pairs, and other relation pairs. In this case, we perform a three-way classification. We apply the same technique as used for the WBLESS classification, but whenever we classify a pair as hyper, we additionally classify the hypernymy direction, to decide between hyponym-hypernym pairs and reversed hypernym-hyponym pairs. Table 3 compares our results against related work. HyperVec outperforms all other methods on all three tasks. In addition, we see again that an unmodified SGNS model cannot solve any of the three tasks.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "Kiela et al. (2015)",
"ref_id": "BIBREF12"
},
{
"start": 899,
"end": 918,
"text": "Kiela et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 514,
"end": 520,
"text": "Figure",
"ref_id": null
},
{
"start": 1138,
"end": 1144,
"text": "Figure",
"ref_id": null
},
{
"start": 1867,
"end": 1874,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.2.2"
},
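The directionality decision on BLESS described above reduces to a comparison of embedding magnitudes. A minimal sketch with NumPy, where the toy vectors are invented stand-ins for HyperVec embeddings (the model is assumed to assign hypernyms larger norms):

```python
import numpy as np

def hypernymy_direction(vec_a, vec_b):
    """Label the word with the larger embedding magnitude as the hypernym."""
    return "first" if np.linalg.norm(vec_a) > np.linalg.norm(vec_b) else "second"

# Toy vectors standing in for HyperVec embeddings (invented values):
duck = np.array([0.2, 0.1, 0.3])
animal = np.array([0.6, 0.5, 0.8])  # hypernyms are assumed to have larger norms
assert hypernymy_direction(animal, duck) == "first"
assert hypernymy_direction(duck, animal) == "second"
```

No threshold is involved; only the relative magnitudes of the two words decide the direction.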
{
"text": "For supervised hypernymy detection, we make use of two datasets: the full BLESS dataset, and ENTAILMENT (Baroni et al., 2012) , containing 2,770 relation pairs in total, including 1,385 hypernym pairs and 1,385 other relation pairs. We follow the same procedure as Yu et al. (2015) and Tuan et al. (2016) to assess HyperVec on the two datasets. Regarding BLESS, we extract pairs for four types of relations: hypernymy, meronymy, co-hyponymy (or coordination), and the random relation for nouns. For the evaluation, we randomly select one concept and its relata for testing, and train the supervised model on the 199 remaining concepts and their relata. We then report the average accuracy across all concepts. For the ENTAILMENT dataset, we randomly select one hypernym pair for testing and train on all remaining hypernym pairs. Again, we report the average accuracy across all hypernyms. We apply an SVM classifier to detect hypernymy based on HyperVec. Given a hyponym-hypernym pair (u, v), we concatenate four components to construct the vector for the pair as follows: the vector difference between hypernym and hyponym (v \u2212 u); the cosine similarity between the hypernym and hyponym vectors (cos(u, v)); the magnitude of the hyponym (\u2016u\u2016); and the magnitude of the hypernym (\u2016v\u2016). The resulting vector is fed into the SVM classifier to detect hypernymy. Similar to the two previous works, we train the SVM classifier with the RBF kernel, \u03bb = 0.03125, and the penalty C = 8.0. Table 4 shows the performance of HyperVec and the two baseline models reported by Tuan et al. (2016) . HyperVec slightly outperforms the method of Tuan et al. (2016) on the BLESS dataset, and matches the performance of their method on the ENTAILMENT dataset. In comparison to the method of Yu et al. (2015) , HyperVec achieves significant improvements.",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 286,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF49"
},
{
"start": 291,
"end": 309,
"text": "Tuan et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 1578,
"end": 1596,
"text": "Tuan et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 1795,
"end": 1811,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 1496,
"end": 1503,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised Hypernymy Detection",
"sec_num": "4.3"
},
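The four-component pair representation fed to the SVM can be sketched as follows. The embeddings here are random stand-ins for HyperVec vectors, and interpreting the reported lambda as scikit-learn's RBF gamma is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

def pair_features(u, v):
    """Concatenate: difference vector (v - u), cosine similarity,
    and the magnitudes of hyponym u and hypernym v."""
    cos = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return np.concatenate([v - u, [cos, np.linalg.norm(u), np.linalg.norm(v)]])

rng = np.random.default_rng(0)
dim = 10
# Random stand-ins for embedding pairs; label 1 = hypernymy, 0 = other relation
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(40)]
X = np.array([pair_features(u, v) for u, v in pairs])
y = rng.integers(0, 2, size=40)

# RBF kernel with the hyper-parameter values reported above (gamma for lambda is an assumption)
clf = SVC(kernel="rbf", gamma=0.03125, C=8.0).fit(X, y)
```

Each pair thus becomes a (dim + 3)-dimensional feature vector: the difference vector plus three scalars.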
{
"text": "In this experiment, we apply HyperVec to the dataset of graded lexical entailment, HyperLex, as introduced by Vuli\u0107 et al. (2016) . The HyperLex dataset provides soft lexical entailment on a continuous scale, rather than simplifying into a binary decision. HyperLex contains 2,616 word pairs across seven semantic relations and two word classes (nouns and verbs). Each word pair is rated by a score that indicates the strength of the semantic relation between the two words. For example, the score of the hypernym pair duck-animal is 5.9 out of 6.0, while the score of the reversed pair animal-duck is only 1.0. We compared HyperScore against the most prominent state-of-the-art hypernymy and lexical entailment models from previous work:",
"cite_spans": [
{
"start": 110,
"end": 129,
"text": "Vuli\u0107 et al. (2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Lexical Entailment",
"sec_num": "4.4"
},
{
"text": "\u2022 Directional entailment measures (DEM) (Weeds and Weir, 2003; Weeds et al., 2004; Clarke, 2009; Kotlerman et al., 2010; Lenci and Benotto, 2012) \u2022 Generality measures (SQLS) (Santus et al., 2014) \u2022 Visual generality measures (VIS) (Kiela et al., 2015) \u2022 Consideration of concept frequency ratio (FR) (Vuli\u0107 et al., 2016) \u2022 WordNet-based similarity measures (WN) (Wu and Palmer, 1994; Pedersen et al., 2004) \u2022 Order embeddings (OrderEmb) (Vendrov et al., 2016) \u2022 Skip-gram embeddings (SGNS) (Mikolov et al., 2013b; Levy and Goldberg, 2014) \u2022 Embeddings fine-tuned to a paraphrase database with linguistic constraints (PARAGRAM) (Mrk\u0161i\u0107 et al., 2016) \u2022 Gaussian embeddings (Word2Gauss) (Vilnis and McCallum, 2015) The performance of the models is assessed through Spearman's rank-order correlation coefficient \u03c1 (Siegel and Castellan, 1988) , comparing the ranks of the models' scores and the human judgments for the given word pairs. Table 5: Results (\u03c1) of HyperScore and state-of-the-art measures and word embedding models on graded lexical entailment. Table 5 shows that HyperScore significantly outperforms both state-of-the-art measures and word embedding models.",
"cite_spans": [
{
"start": 40,
"end": 62,
"text": "(Weeds and Weir, 2003;",
"ref_id": "BIBREF46"
},
{
"start": 63,
"end": 82,
"text": "Weeds et al., 2004;",
"ref_id": "BIBREF47"
},
{
"start": 83,
"end": 96,
"text": "Clarke, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 97,
"end": 120,
"text": "Kotlerman et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 121,
"end": 145,
"text": "Lenci and Benotto, 2012)",
"ref_id": "BIBREF15"
},
{
"start": 175,
"end": 196,
"text": "(Santus et al., 2014)",
"ref_id": "BIBREF31"
},
{
"start": 232,
"end": 252,
"text": "(Kiela et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 301,
"end": 321,
"text": "(Vuli\u0107 et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 363,
"end": 384,
"text": "(Wu and Palmer, 1994;",
"ref_id": "BIBREF48"
},
{
"start": 385,
"end": 407,
"text": "Pedersen et al., 2004)",
"ref_id": "BIBREF26"
},
{
"start": 438,
"end": 460,
"text": "(Vendrov et al., 2016)",
"ref_id": "BIBREF42"
},
{
"start": 491,
"end": 514,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF19"
},
{
"start": 515,
"end": 539,
"text": "Levy and Goldberg, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 629,
"end": 650,
"text": "(Mrk\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 686,
"end": 713,
"text": "(Vilnis and McCallum, 2015)",
"ref_id": "BIBREF43"
},
{
"start": 812,
"end": 840,
"text": "(Siegel and Castellan, 1988)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 935,
"end": 942,
"text": "Table 5",
"ref_id": null
},
{
"start": 1056,
"end": 1063,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graded Lexical Entailment",
"sec_num": "4.4"
},
{
"text": "HyperScore outperforms even the previously best word embedding model PARAGRAM by .22, and the previously best measure FR by .27. The reason that HyperVec outperforms all other models is that the hierarchy between hypernym and hyponym within HyperVec differentiates hyponym-hypernym pairs from hypernym-hyponym pairs. For example, the HyperScore values for the pairs duck-animal and animal-duck are 3.02 and 0.30, respectively. Thus, the magnitude proportion of the hyponym-hypernym pair duck-animal is larger than that of the pair animal-duck.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Lexical Entailment",
"sec_num": "4.4"
},
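The evaluation protocol above is a Spearman rank correlation between model scores and human HyperLex ratings; with SciPy it amounts to the following (the scores below are invented, not the paper's results):

```python
from scipy.stats import spearmanr

# Invented HyperScore values and human HyperLex ratings for five word pairs
model_scores = [3.02, 0.30, 2.50, 1.10, 2.90]
human_ratings = [5.9, 1.0, 4.8, 2.0, 5.5]

# Spearman's rho compares the two rankings, not the raw values
rho, p_value = spearmanr(model_scores, human_ratings)
print(rho)  # the two lists rank the pairs identically here, so rho = 1.0
```

Because only ranks matter, a model need not reproduce the rating scale, only the ordering of the pairs.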
{
"text": "Having demonstrated the general abilities of HyperVec, this final section explores its potential for generalization in two different ways, (i) by relying on a small seed set only, rather than using a large set of training data; and (ii) by projecting HyperVec to other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Hypernymy",
"sec_num": "5"
},
{
"text": "We utilize only a small hypernym set from the hypernymy resource to train HyperVec, relying on 200 concepts from the BLESS dataset. The motivation behind using these concepts is threefold: (i) they are distinct and unambiguous noun concepts; (ii) they are equally divided between living and non-living entities; (iii) they have been grouped into 17 broader classes. Based on the seed set, we collected the hyponyms of each concept from WordNet, and then re-trained HyperVec. On the hypernymy ranking retrieval task (Section 4.2.1), HyperScore outperforms the baselines across all datasets (cf. Table 1) with AP values of 0.39, 0.448, and 0.585 for EVALution, LenciBenotto, and Weeds, respectively. For the graded lexical entailment task (Section 4.4), HyperScore obtains a correlation of \u03c1 = 0.30, outperforming all models except for PARAGRAM with \u03c1 = 0.32. Overall, the results show that HyperVec is indeed able to generalize hypernymy from small seed sets of training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 614,
"end": 622,
"text": "Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hypernymy Seed Generalization:",
"sec_num": null
},
{
"text": "We assume that hypernymy detection can be improved across languages by projecting representations from any arbitrary language into our modified English HyperVec space. We conduct experiments for German and Italian, where the language-specific representations are obtained using the same hyper-parameter settings as for our English SGNS model (cf. Section 4.1). As corpus resource we relied on Wikipedia dumps 2 . Note that we do not use any additional resource, such as the German or Italian WordNet, to tune the embeddings for hypernymy detection. Based on the representations, a mapping function between a source language (German, Italian) and our English HyperVec space is learned, relying on the least-squares error method from previous work on cross-lingual data (Mikolov et al., 2013a) and different modalities (Lazaridou et al., 2015) .",
"cite_spans": [
{
"start": 773,
"end": 796,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF18"
},
{
"start": 822,
"end": 846,
"text": "(Lazaridou et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
{
"text": "To learn a mapping function between two languages, a one-to-one correspondence (word translations) between two sets of vectors is required. We obtained these translations by using the parallel Europarl v7 corpus 3 for German-English and Italian-English. Word alignment counts were extracted using fast_align (Dyer et al., 2013) . We then assigned each source word to the English word with the maximum number of alignments in the parallel corpus. We could match 25,547 pairs for DE\u2192EN and 47,475 pairs for IT\u2192EN.",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
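The translation-extraction step above is an argmax over alignment counts per source word; a small sketch (the counts are invented for illustration):

```python
from collections import defaultdict

# Hypothetical alignment counts extracted from a parallel corpus:
# (source_word, english_word) -> number of times they were aligned
align_counts = {
    ("Hund", "dog"): 120, ("Hund", "hound"): 7,
    ("Tier", "animal"): 95, ("Tier", "beast"): 3,
}

# For each source word, keep the English word with the maximum alignment count
best = defaultdict(lambda: ("", 0))
for (src, tgt), count in align_counts.items():
    if count > best[src][1]:
        best[src] = (tgt, count)

translations = {src: tgt for src, (tgt, _) in best.items()}
assert translations == {"Hund": "dog", "Tier": "animal"}
```

The resulting dictionary supplies the one-to-one correspondences needed to fit the cross-lingual mapping.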
{
"text": "Taking the aligned subset of both spaces, let X be the matrix obtained by concatenating all source vectors, and likewise Y the matrix obtained by concatenating all corresponding English vectors. The \u21132-regularized least-squares error objective can then be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W = \\operatorname{argmin}_{W \\in \\mathbb{R}^{d_1 \\times d_2}} \\| XW - Y \\| + \\lambda \\| W \\|",
"eq_num": "(8)"
}
],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
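Assuming the norms in Equation (8) are squared Frobenius norms (as in the cited cross-lingual mapping work), W has the standard ridge-regression closed form W = (X\u1d40X + \u03bbI)\u207b\u00b9X\u1d40Y; a sketch with random matrices standing in for the aligned source and English vectors:

```python
import numpy as np

def learn_mapping(X, Y, lam=1e-2):
    """Solve min_W ||XW - Y||^2 + lam * ||W||^2 in closed form (ridge regression)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))   # aligned source-language vectors (invented)
W_true = rng.normal(size=(20, 20))
Y = X @ W_true                   # corresponding English-space vectors (noiseless toy case)
W = learn_mapping(X, Y)

# Once learned on the aligned subset, W projects any source-language vector
mapped = rng.normal(size=20) @ W
```

Although the map is fit only on the aligned word pairs, it applies to the entire source vocabulary.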
{
"text": "Although we learn the mapping only on a subset of aligned words, it allows us to project every word in a source vocabulary to its English HyperVec position by using W. Finally, we compare the original representations and the mapped representations on the hypernymy ranking retrieval task (similar to Section 4.2.1). As gold resources we relied on German and Italian noun pairs. For German we used the 282 German pairs collected via Amazon Mechanical Turk by Scheible and Schulte im Walde (2014). The 1,350 Italian pairs were collected via Crowdflower by Sucameli (2015) in the same way. Both collections contain hypernymy, antonymy and synonymy pairs. As before, we evaluate the ranking by AP, and we compare the cosine of the unmodified default representations against the HyperScore of the projected representations. Table 6: AP results across languages, comparing SGNS and the projected representations.",
"cite_spans": [
{
"start": 553,
"end": 568,
"text": "Sucameli (2015)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 818,
"end": 825,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
{
"text": "The results are shown in Table 6 . We clearly see that for both languages the default SGNS embeddings do not provide higher similarity scores for hypernymy pairs (except for Italian Hyp/Ant), whereas mapping the embeddings into the English HyperVec space yields higher scores for both languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalizing Hypernymy across Languages:",
"sec_num": null
},
{
"text": "This paper proposed HyperVec, a novel neural model that learns hierarchical embeddings for hypernymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "HyperVec has been shown to strengthen hypernymy similarity and to capture the distributional hierarchy of hypernymy. Together with the newly proposed unsupervised measure HyperScore, our experiments demonstrated (i) significant improvements over state-of-the-art measures, and (ii) the capability to generalize hypernymy and learn the relation instead of memorizing prototypical hypernyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "2 The Wikipedia dumps for German and Italian were both downloaded in January 2017. 3 http://www.statmt.org/europarl/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research was supported by the Ministry of Education and Training of the Socialist Republic of Vietnam (Scholarship 977/QD-BGDDT; Kim-Anh Nguyen), the DFG Collaborative Research Centre SFB 732 (Kim-Anh Nguyen, Maximilian K\u00f6per, Ngoc Thang Vu), and the DFG Heisenberg Fellowship SCHU-2580/1 (Sabine Schulte im Walde). We would like to thank three anonymous reviewers for their comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ngoc-Quynh",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chung-Chieh",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceed- ings of the 13th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL), pages 23-32, Avignon, France.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "How we blessed distributional semantic evaluation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics (GEMS)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Pro- ceedings of the GEMS 2011 Workshop on GEometri- cal Models of Natural Language Semantics (GEMS), pages 1-10, Edinburgh, Scotland.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Distributional models for semantic relations: A study on hyponymy and antonymy",
"authors": [
{
"first": "Giulia",
"middle": [],
"last": "Benotto",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giulia Benotto. 2015. Distributional models for semantic relations: A study on hyponymy and antonymy. Ph.D. thesis, University of Pisa.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classifying taxonomic relations between pairs of wikipedia articles",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing (IJCNLP)",
"volume": "",
"issue": "",
"pages": "788--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Kathleen McKeown. 2013. Classifying taxonomic relations between pairs of wikipedia ar- ticles. In Proceddings of Sixth International Joint Conference on Natural Language Processing (IJC- NLP), pages 788-794, Nagoya, Japan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Context-theoretic semantics for natural language: An overview",
"authors": [
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics (GEMS)",
"volume": "",
"issue": "",
"pages": "112--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daoud Clarke. 2009. Context-theoretic semantics for natural language: An overview. In Proceedings of the Workshop on Geometrical Models of Natu- ral Language Semantics (GEMS), pages 112-119, Athens, Greece.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recognizing Textual Entailment: Models and Applications",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Sammons",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2013,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzotto. 2013. Recognizing Textual Entail- ment: Models and Applications. Synthesis Lectures on Human Language Technologies.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameteri- zation of IBM Model 2. In Proceedings of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL), pages 644-648, Atlanta, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Inductive probabilistic taxonomy learning using singular value decomposition",
"authors": [
{
"first": "Francesca",
"middle": [],
"last": "Fallucchi",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "1",
"pages": "71--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesca Fallucchi and Fabio Massimo Zanzotto. 2011. Inductive probabilistic taxonomy learning using singular value decomposition. Natural Lan- guage Engineering, 17(1):71-94.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet -An Electronic Lexical Database. Language, Speech, and Communication",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet -An Elec- tronic Lexical Database. Language, Speech, and Communication. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Papers in Linguistics 1934-51. Longmans",
"authors": [
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R. Firth. 1957. Papers in Linguistics 1934-51. Longmans, London, UK.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The distributional inclusion hypotheses and lexical entailment",
"authors": [
{
"first": "Maayan",
"middle": [],
"last": "Geffet",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maayan Geffet and Ido Dagan. 2005. The distribu- tional inclusion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 107-114, Michigan, US.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributional structure. Word",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146-162.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploiting image generality for lexical entailment detection",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL)",
"volume": "",
"issue": "",
"pages": "119--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Laura Rimell, Ivan Vuli\u0107, and Stephen Clark. 2015. Exploiting image generality for lexical entailment detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (ACL), pages 119-124, Beijing, China.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Directional distributional similarity for lexical inference",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Kotlerman",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Maayan",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
}
],
"year": 2010,
"venue": "Natural Language Engineering",
"volume": "16",
"issue": "4",
"pages": "359--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distribu- tional similarity for lexical inference. Natural Lan- guage Engineering, 16(4):359-389.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Georgiana Dinu, and Marco Ba- roni. 2015. Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning. In Proceedings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 270-280, Beijing, China.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identifying hypernyms in distributional semantic spaces",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Benotto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval)",
"volume": "1",
"issue": "",
"pages": "75--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Lenci and Giulia Benotto. 2012. Identify- ing hypernyms in distributional semantic spaces. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics -Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval), pages 75-79, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 27th International Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Pro- ceddings of the 27th International Conference on Advances in Neural Information Processing Systems (NIPS), pages 2177-2185, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Do supervised distributional methods really learn lexical inference relations?",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "970--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceed- ings of the 2015 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL), pages 970-976, Denver, Colorado.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting Similarities among Languages for Machine Translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting Similarities among Languages for Ma- chine Translation. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their com- positionality. In Proceedings of the 26th Interna- tional Conference on Advances in Neural Informa- tion Processing Systems (NIPS), pages 3111-3119, Lake Tahoe, Nevada, US.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantic networks of english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1991,
"venue": "Cognition",
"volume": "41",
"issue": "",
"pages": "197--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller and Christiane Fellbaum. 1991. Se- mantic networks of english. Cognition, 41:197-229.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Counter-fitting word vectors to linguistic constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "M",
"middle": [
"Lina"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, M. Lina Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 142- 148, San Diego, California.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Big Book of Concepts",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Murphy. 2002. The Big Book of Concepts. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A graph-based algorithm for inducing lexical taxonomies from scratch",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Faralli",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "1872--1877",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Paola Velardi, and Stefano Faralli. 2011. A graph-based algorithm for inducing lex- ical taxonomies from scratch. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pages 1872-1877, Barcelona, Catalonia, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction",
"authors": [
{
"first": "Kim",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym- synonym distinction. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (ACL), pages 454-459, Berlin, Germany.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Wordnet: : Similarity -measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 19th National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "1024--1025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. Wordnet: : Similarity -measuring the relatedness of concepts. In Proceedings of the 19th National Conference on Artificial Intelligence, Six- teenth Conference on Innovative Applications of Ar- tificial Intelligence (AAAI), pages 1024-1025, Cali- fornia, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distributional lexical entailment by topic coherence",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "511--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell. 2014. Distributional lexical entailment by topic coherence. In Proceedings of the 14th Con- ference of the European Chapter of the Association for Computational Linguistics (EACL), pages 511- 519, Gothenburg, Sweden.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Inclusive yet selective: Supervised distributional hypernymy detection",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "1025--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hy- pernymy detection. In Proceedings of the 25th Inter- national Conference on Computational Linguistics (COLING), pages 1025-1036, Dublin, Ireland.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised measure of word similarity: How to outperform cooccurrence and vector cosine in vsms",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Tin-Shing",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth Conference on Artificial Intelligence AAAI)",
"volume": "",
"issue": "",
"pages": "4260--4261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, and Chu-Ren Huang. 2016. Unsupervised mea- sure of word similarity: How to outperform co- occurrence and vector cosine in vsms. In Proceed- ings of the Thirtieth Conference on Artificial Intelli- gence AAAI), pages 4260-4261, Arizona, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Chasing hypernyms in vector spaces with entropy",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "38--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte Im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 38-42, Gothenburg, Sweden.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Frances",
"middle": [],
"last": "Yung",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Frances Yung, Alessandro Lenci, and Chu-Ren Huang. 2015. Evalution 1.0: an evolving semantic dataset for training and evaluation of distri- butional semantic models. In Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications, Beijing, China.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Processing and querying large web corpora with the COW14 architecture",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd Workshop on Challenges in the Management of Large Corpora",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer. 2015. Processing and querying large web corpora with the COW14 architecture. In Pro- ceedings of the 3rd Workshop on Challenges in the Management of Large Corpora, pages 28-34, Lan- caster, UK.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Building large corpora from the web using a new efficient tool chain",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Bildhauer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "486--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Building large corpora from the web using a new efficient tool chain. In Proceedings of the 8th International Conference on Language Resources and Evaluation, pages 486-493, Istanbul, Turkey.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A Database of Paradigmatic Semantic Relation Pairs for German Nouns, Verbs, and Adjectives",
"authors": [
{
"first": "Silke",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silke Scheible and Sabine Schulte im Walde. 2014. A Database of Paradigmatic Semantic Relation Pairs for German Nouns, Verbs, and Adjectives. In Pro- ceedings of Workshop on Lexical and Grammati- cal Resources for Language Processing, pages 111- 119, Dublin, Ireland.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Nonparametric Statistics for the Behavioral Sciences",
"authors": [
{
"first": "Sidney",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "N. John",
"middle": [],
"last": "Castellan",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidney Siegel and N. John Castellan. 1988. Non- parametric Statistics for the Behavioral Sciences. McGraw-Hill, Boston, MA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semantic taxonomy induction from heterogenous evidence",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "801--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st Annual Meet- ing of the Association for Computational Linguistics (ACL), pages 801-808, Sydney, Australia.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Analisi computazionale delle relazioni semantiche: Uno studio della lingua italiana",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Sucameli",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Sucameli. 2015. Analisi computazionale delle re- lazioni semantiche: Uno studio della lingua italiana. B.s. thesis, University of Pisa.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Learning term embeddings for taxonomic relation identification using dynamic weighting neural network",
"authors": [
{
"first": "Luu",
"middle": [
"Anh"
],
"last": "Tuan",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Siu",
"middle": [
"Cheung"
],
"last": "Hui",
"suffix": ""
},
{
"first": "See",
"middle": [
"Kiong"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "403--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luu Anh Tuan, Yi Tay, Siu Cheung Hui, and See Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neu- ral network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 403-413, Austin, Texas.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "From Frequency to Meaning: Vector Space Models of Semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From Fre- quency to Meaning: Vector Space Models of Se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Order-embeddings of images and language",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 4th International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In Proceedings of the 4th International Conference on Learning Representations (ICLR), San Juan, Puerto Rico.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Word representations via gaussian embedding",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word rep- resentations via gaussian embedding. In Proceed- ings of the 3rd International Conference on Learn- ing Representations (ICLR), California, USA.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Hyperlex: A large-scale evaluation of graded lexical entailment",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2016. Hyperlex: A large-scale evaluation of graded lexical entailment. arXiv.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning to distinguish hypernyms and co-hyponyms",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Weir",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "2249--2259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, Daoud Clarke, Jeremy Reffin, David J. Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 2249-2259, Dublin, Ireland.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A general framework for distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds and David Weir. 2003. A general frame- work for distributional similarity. In Proceedings of the Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 81-88, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COL-ING)",
"volume": "",
"issue": "",
"pages": "1015--1021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th International Conference on Computational Linguistics (COL- ING), pages 1015-1021, Geneva, Switzerland.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics (ACL), pages 133-138, Las Cruces, New Mexico.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Learning term embeddings for hypernymy identification",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haixun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuemin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "1390--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. 2015. Learning term embeddings for hypernymy identification. In Proceedings of the 24th Interna- tional Conference on Artificial Intelligence (IJCAI), pages 1390-1397, Buenos Aires, Argentina.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Bootstrapping distributional feature vector quality",
"authors": [
{
"first": "Maayan",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "3",
"pages": "435--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maayan Zhitomirsky-Geffet and Ido Dagan. 2009. Bootstrapping distributional feature vector quality. Computational Linguistics, 35(3):435-461.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "detection: hypernymy vs. other relations.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Comparing SGNS and HyperVec on binary classification tasks. The y-axis shows the magnitude values of the vectors.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td colspan=\"3\">Hypernymy vs. Baseline HyperScore</td></tr><tr><td/><td>other relations</td><td>0.353</td><td>0.538</td></tr><tr><td/><td>meronymy</td><td>0.675</td><td>0.811</td></tr><tr><td>EVALution</td><td>attribute</td><td>0.651</td><td>0.800</td></tr><tr><td/><td>antonymy</td><td>0.55</td><td>0.743</td></tr><tr><td/><td>synonymy</td><td>0.657</td><td>0.793</td></tr><tr><td/><td>other relations</td><td>0.051</td><td>0.454</td></tr><tr><td/><td>meronymy</td><td>0.76</td><td>0.913</td></tr><tr><td>BLESS</td><td>coordination</td><td>0.537</td><td>0.888</td></tr><tr><td/><td>attribute</td><td>0.74</td><td>0.918</td></tr><tr><td/><td>event</td><td>0.779</td><td>0.620</td></tr><tr><td/><td>other relations</td><td>0.382</td><td>0.574</td></tr><tr><td>Lenci&amp;Benotto</td><td>antonymy</td><td>0.624</td><td>0.696</td></tr><tr><td/><td>synonymy</td><td>0.725</td><td>0.751</td></tr><tr><td>Weeds</td><td>coordination</td><td>0.441</td><td>0.850</td></tr></table>",
"text": "Details of the semantic relations and the number of instances in each dataset."
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Accuracy for hypernymy directionality."
}
}
}
}