{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:41:28.475049Z"
},
"title": "Supervised Hypernymy Detection in Spanish through Order Embeddings",
"authors": [
{
"first": "Gun",
"middle": [
"Woo"
],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de la Rep\u00fablica Montevideo",
"location": {
"country": "Uruguay"
}
},
"email": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Etcheverry",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de la Rep\u00fablica Montevideo",
"location": {
"country": "Uruguay"
}
},
"email": "mathiase@fing.edu.uy"
},
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez S\u00e1nchez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de la Rep\u00fablica Montevideo",
"location": {
"country": "Uruguay"
}
},
"email": "daniel.fernandez.sanchez@fing.edu.uy"
},
{
"first": "Dina",
"middle": [],
"last": "Wonsever",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de la Rep\u00fablica Montevideo",
"location": {
"country": "Uruguay"
}
},
"email": "wonsever@fing.edu.uy"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper addresses the task of supervised hypernymy detection in Spanish through an order embedding, using pretrained word vectors as input. Although the task has been widely addressed in English, there is not much work in Spanish, and to the best of our knowledge there is no available dataset for supervised hypernymy detection in Spanish. We built a supervised hypernymy dataset for Spanish using WordNet and corpus statistics, with different versions according to the lexical intersection between its partitions: random and lexical split. We report the results of using the resulting dataset with an order embedding that consumes pretrained word vectors as input, and we show the ability of pretrained word vectors to transfer learning to unseen lexical units, as reflected by the results on the lexical split. Finally, we study the effect of providing additional information at training time, such as co-hyponymy links and instances extracted through lexico-syntactic patterns.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper addresses the task of supervised hypernymy detection in Spanish through an order embedding, using pretrained word vectors as input. Although the task has been widely addressed in English, there is not much work in Spanish, and to the best of our knowledge there is no available dataset for supervised hypernymy detection in Spanish. We built a supervised hypernymy dataset for Spanish using WordNet and corpus statistics, with different versions according to the lexical intersection between its partitions: random and lexical split. We report the results of using the resulting dataset with an order embedding that consumes pretrained word vectors as input, and we show the ability of pretrained word vectors to transfer learning to unseen lexical units, as reflected by the results on the lexical split. Finally, we study the effect of providing additional information at training time, such as co-hyponymy links and instances extracted through lexico-syntactic patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hierarchical organizations are key in language semantics. Hypernymy refers to the general-specific relationship between two lexical terms. Such is the case in biological taxonomies (e.g. mammal-vertebrate, pangolin-mammal), seasons (e.g. spring-season) and colors (e.g. green-color), among many others. The general term is called the hypernym and the specific one the hyponym. In natural language processing, automatic hypernymy detection (or taxonomy learning) is an active research area with applications in several tasks, such as question answering (Clark et al., 2007) , textual entailment (Chen et al., 2017) and image detection (Marszalek and Schmid, 2007) . A well known hand-crafted resource is WordNet (Miller, 1995) , a large lexical database that contains semantic relations, among them hypernymy. Manual resources require considerable human effort to create and maintain, and suffer from incompleteness and inadequacies. Furthermore, different applications require expanding the hypernymy relationship to particular instances such as celebrities, song names, movies, and so on. Hence, automatic mechanisms to overcome or assist manual ones are clearly important. Regarding Spanish, the resources available for supervised hypernymy detection are quite scarce. WordNet was originally created for English and later translated into other languages, among them Spanish (Atserias et al., 2004) ; this constitutes the main source of hypernyms for Spanish. Hypernymy detection has been evaluated mainly through binary classification, relying on datasets that contain pairs of terms and a label for each pair indicating whether the hypernymy relation holds between the terms (Shwartz et al., 2016) . A complementary evaluation benchmark for modeling hypernymy is hypernymy discovery (Espinosa-Anke et al., 2016) : given a domain vocabulary and an input term, the task is to discover its hypernyms. This formulation is beneficial to avoid the lexical memorization phenomenon (Levy et al., 2015) . Regarding hypernymy discovery, a dataset in Spanish (among other languages) was introduced for Task 9 of SemEval-2018 (Camacho-Collados et al., 2018) . [Figure 1 : Example of a very simplified taxonomy in Spanish.] In this work we do not pursue hypernymy discovery, and we are aware that it is not clear how realistic hypernymy detection is, since in many scenarios the candidate pairs may not be given and need to be discovered. However, we believe that a dataset for hypernymy detection in Spanish can be useful for model comparison, and to the best of our knowledge no such resource was available for Spanish at the time of this work. We introduce a dataset for supervised hypernymy detection in Spanish, built using Spanish WordNet and corpus statistics. We describe its creation process and make it available to the NLP community as a complementary benchmark for hypernymy detection in Spanish. In addition, we train and evaluate on the created dataset a model based on an order embedding (Vendrov et al., 2015) that uses pretrained word embeddings as input, and we report the obtained results for future comparison. We also show that this model, despite not using Hearst patterns, outperforms other distributional approaches as well as the much more complex hybrid LSTM-based model, combining distributional and path-based information, proposed by Shwartz et al. (2016) .",
"cite_spans": [
{
"start": 557,
"end": 577,
"text": "(Clark et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 599,
"end": 618,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 639,
"end": 667,
"text": "(Marszalek and Schmid, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 716,
"end": 730,
"text": "(Miller, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 1425,
"end": 1448,
"text": "(Atserias et al., 2004)",
"ref_id": "BIBREF0"
},
{
"start": 1731,
"end": 1753,
"text": "(Shwartz et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 1848,
"end": 1876,
"text": "(Espinosa-Anke et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 2040,
"end": 2059,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 2235,
"end": 2247,
"text": "SemEval-2018",
"ref_id": null
},
{
"start": 2248,
"end": 2278,
"text": "(Camacho-Collados et al., 2018",
"ref_id": "BIBREF2"
},
{
"start": 3055,
"end": 3077,
"text": "(Vendrov et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 3428,
"end": 3449,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 2096,
"end": 2104,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Hypernymy detection in NLP can be approached as a supervised or an unsupervised learning task. Supervised approaches rely on pairs annotated with the information of whether they belong to the relationship or not. On the contrary, unsupervised approaches do not use annotated instances; they rely solely on the distributional inclusion hypothesis (Zhitomirsky-Geffet and Dagan, 2005) or entropy-based measures (Santus et al., 2014) . Supervised approaches have been addressed mainly using two types of information: paths and context distributions (or word embeddings). Path-based (or pattern-based) approaches use the paths of words that connect instances of the relationship. Hearst (1992) presents the first path-based approach, where hand-crafted patterns were used for hypernymy extraction. For example, the path \"is a type of\" would match cases like \"tuna is a type of fish\", allowing to detect that \"tuna\" is a hyponym of \"fish\". Paths of joint occurrences in syntactic dependency trees are also useful for hypernymy detection (Snow et al., 2004) . Path patterns were generalized using part-of-speech tags and ontology types by Nakashole et al. (2012) . A different kind of pattern-based approach is proposed in the work of Navigli and Velardi (2010) : they consider word lattices to extract definitional sentences from texts and then extract hypernymy-related pairs from them, or learn lexical taxonomies (Navigli et al., 2011) . The main disadvantage of path-based approaches is that both candidates must occur simultaneously in the same context. On the other hand, distributional approaches rely on the contexts of each word independently. Many methods propose supervised classification after applying a binary vector operation on the pair of representations, such as vector concatenation (Baroni et al., 2012) and difference (Roller et al., 2014; Fu et al., 2014; Weeds et al., 2014) . Vylomova et al. (2016) studied the behavior of vector difference in a wider set of lexical relations and remarked the importance of negative training data to improve the results. Ustalov et al. (2017) performed hypernym extraction based on projection learning: instead of classifying the pair of representations, they learned a mapping to project hyponym embeddings to their respective hypernyms, also remarking the importance of negative sampling. A related approach is presented by Dash et al. (2019), where a neural network architecture is designed to enforce asymmetry and transitivity through non-linearities and residual connections. These last two approaches present some overlap with the work of Vendrov et al. (2015) , whose order embedding approach is the one considered in this work. Shwartz et al. (2016) combined path-based and distributional information in supervised hypernymy detection, concatenating the embedding of each term independently with a distributional representation of all paths between the terms in a dependency-parsed corpus. The representation was built as the average of the LSTM representations of each path. Additionally, they introduced a dataset for lexical entailment on which they tested their model. LEAR (Lexical Entailment Attract-Repel) (Vulic and Mrksic, 2017) achieves great performance on hypernymy detection by specializing word embeddings based on WordNet constraints. The direction of the asymmetric relation is encoded in the resulting vector norms, while cosine distance jointly enforces the semantic similarity of synonyms. The resulting vectors are specialized simultaneously for lexical relatedness and entailment.",
"cite_spans": [
{
"start": 345,
"end": 381,
"text": "(Zhitomirsky-Geffet and Dagan, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 408,
"end": 429,
"text": "(Santus et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 676,
"end": 689,
"text": "Hearst (1992)",
"ref_id": "BIBREF9"
},
{
"start": 1040,
"end": 1059,
"text": "(Snow et al., 2004)",
"ref_id": "BIBREF22"
},
{
"start": 1141,
"end": 1164,
"text": "Nakashole et al. (2012)",
"ref_id": "BIBREF15"
},
{
"start": 1237,
"end": 1263,
"text": "Navigli and Velardi (2010)",
"ref_id": "BIBREF16"
},
{
"start": 1420,
"end": 1442,
"text": "(Navigli et al., 2011)",
"ref_id": "BIBREF17"
},
{
"start": 1812,
"end": 1833,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 1849,
"end": 1870,
"text": "(Roller et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 1871,
"end": 1887,
"text": "Fu et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 1888,
"end": 1907,
"text": "Weeds et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 1910,
"end": 1932,
"text": "Vylomova et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 2087,
"end": 2108,
"text": "Ustalov et al. (2017)",
"ref_id": "BIBREF23"
},
{
"start": 2611,
"end": 2632,
"text": "Vendrov et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 2705,
"end": 2726,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 3198,
"end": 3222,
"text": "(Vulic and Mrksic, 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "In this section we describe the dataset construction process. The dataset consists of pairs of words and a boolean label associated with each pair that is true when the first element is a hyponym of the second and false otherwise. We will refer to the pairs labelled as true as positive instances (e.g. summer-season) and to those labelled as false as negative instances (e.g. cat-fish).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernymy Dataset for Spanish",
"sec_num": "3."
},
{
"text": "In the dataset construction process we use a variety of sources to obtain positive and negative instances. In the following we describe each source and technique used, and we give a measure of the quality of the dataset based on a random sampling. In addition, following the dataset built by Shwartz et al. 2016, we performed a random split (into train, validation and test) and a split with no terms occurring in more than one partition, to deal with lexical memorization (Levy et al., 2015) . The latter is referred to as the lexical split.",
"cite_spans": [
{
"start": 476,
"end": 495,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernymy Dataset for Spanish",
"sec_num": "3."
},
{
"text": "The extraction of positive pairs was performed using the Spanish WordNet, patterns applied to a Spanish corpus, and translation of the Shwartz dataset. In addition to these sources, it would be possible to consider transitive links as positive instances, since the hypernymy relation is transitive. However, this assumption may not hold when different senses are involved in the transitive link, so we decided not to include inferred transitive instances in this work, and the dataset discards word sense information. In the following we describe how we use each source:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "\u2022 Spanish WordNet:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "The main source of positive instances of our dataset is the Spanish version of the WordNet of the Open Multilingual Wordnet (OMW). We consider the hypernymy relation defined in WordNet between synsets, and then we perform a selection of pairs, taking one word of each synset, to obtain hypernymic pairs that will belong to the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "We considered the following two heuristics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "1. From each synset we choose the words that are most frequently used, according to their frequency in the corpus of Cardellino (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "2. Based on the work of Santus et al. (2014), we filtered out the candidate pairs in which the hyponym has a frequency greater than that of its proposed hypernym.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "k | Size (# pairs) | % Correct: 1 | 15695 / 10103 | 83.9 / 84.3; 2 | 29180 / 19258 | 82.2 / 83.3; 3 | 35103 / 22851 | 77.6 / 83.5. Table 1 : Size and percentage of correct hypernyms in a sample of the resulting pairs, considering the 1, 2 and 3 most frequent words of each synset. We show the results without applying (left) and applying (right) the second heuristic filter.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "Regarding the first heuristic, we observe the result of considering the pairs from an all-vs-all combination of the k most frequent lemmas of each synset. In Table 1 we report the respective sizes and the percentage of correct pairs in a 0.5% random sample, where it can be observed that the results degrade considerably when more than the two most frequent words of each synset are taken into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "We filter the output of the first heuristic using the second heuristic and observe a quality improvement in the resulting pairs. The values on the right of Table 1 detail the obtained results. According to this minimal evaluation criterion, we decided to consider the three most frequent words of each synset, filtering out the pairs where the hyponym is more frequent than the hypernym.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "To finish with the WordNet-extracted hypernyms, we eliminate the cycles that are generated by the multiple senses of certain words together with the transitivity of the hypernymy relation. The resulting pairs are the final set of WordNet positive instances of the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "\u2022 Pattern-based:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "Relying on the well known importance of pattern-based (or path-based) approaches to detect and discover hypernyms, originated by Hearst (1992) , we include in our dataset positive instances extracted using high-confidence patterns. We consider the following two patterns for Spanish, built by Ortega et al. (2011) , which they found to have high confidence in their experiments (confidence value near 1):",
"cite_spans": [
{
"start": 127,
"end": 140,
"text": "Hearst (1992)",
"ref_id": "BIBREF9"
},
{
"start": 302,
"end": 322,
"text": "Ortega et al. (2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "1. \"el <hyponym> es el \u00fanico <hyperonym>\" 2. \"de <hyponym> y otras <hyperonym>\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "We use these patterns to extract candidate pairs from the corpus of Cardellino (2016) . Unfortunately, the quality of the resulting pairs was poor. We subsequently achieved a slight improvement by filtering the obtained candidates using part-of-speech information. Even so, the results were not good enough to be included in the final dataset. However, despite the poor quality of the extracted instances, we consider that they may be useful for studying the effect of including them as training data. For that purpose they are available along with the dataset.",
"cite_spans": [
{
"start": 68,
"end": 85,
"text": "Cardellino (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "\u2022 Shwartz dataset translation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "In the dataset built by Shwartz et al. (2016) , the hypernymy relation instances were obtained from English WordNet, DBPedia, Wikidata and Yago. Their dataset contains a considerable number of instances like shakespeare-writer. Therefore, we select those pairs whose hyponym candidate is a proper name. We limit our selection to instances of \"village\", \"city\", \"company\", \"town\", \"place\", \"river\" and \"person\", and we translate the instances using Google's translation library. We include the resulting candidates as positive instances in our dataset.",
"cite_spans": [
{
"start": 24,
"end": 45,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Pairs",
"sec_num": "3.1."
},
{
"text": "The unrelated pairs, or negative instances, are those pairs that do not hold a hypernymic relation between them. We consider the following approaches for the procurement of unrelated pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "\u2022 Random sampling: Since most pairs of words are not in a hypernymy relation, randomly picking two words from a given vocabulary will most likely yield a non-hypernymic pair. So, we take the nouns from Cardellino's corpus with at least 4 characters and a frequency greater than 200, jointly with the vocabulary of the positive part of the dataset mentioned above. Then we generate tuples that were not already included in the dataset until the desired ratio of 1:3 positive:negative instances is reached.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "The dataset resulting from WordNet, the Shwartz translation and the random pairs is what we refer to as our base dataset, presented in two versions, random and lexical split, as detailed later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "\u2022 Cohyponyms: Cohyponymy is the relation between hyponyms that share the same hypernym. Cohyponyms are words that have properties in common but also have their own characteristics that clearly differentiate them from each other; they can be seen as words belonging to the same class (e.g. male-female, march-november). Given a pair of cohyponyms, it is highly probable that no hypernymy relation holds between them. Therefore, it is possible to obtain negative pairs from the cohyponymy relations entailed by the positive instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "\u2022 Inverted links: The hypernymy relation is asymmetric: if a tuple satisfies the relation, its inverse does not. Since we already have our positive instances, a simple way to build negative ones is to exchange the order of the positive pairs. However, synonyms may be a problem for this assumption: between some synonyms a hypernymic relation can be considered to hold in both directions (e.g. neat-tidy). For this reason we do not include inverted links in the distributed dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "\u2022 Antonymy: Words that have opposite meanings are called antonyms. We assume that if an antonymy relationship holds, the hypernymy relationship is not satisfied. Therefore, we include the antonyms extracted from WordNet as negative instances. [Table 2 , positive pairs: WordNet 27861, Pattern-based 2731, Shwartz 3798.]",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 285,
"text": "WordNet Pattern-based Shwartz 27861",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unrelated Pairs",
"sec_num": "3.2."
},
{
"text": "[Table 2 , negative pairs: Random \u223c 90000, Cohyponym \u223c 45000, Antonym 1107, Meronym 5940.] Table 2 : Total amount of positive and negative instances from which each version of the dataset is built.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Positive Pairs",
"sec_num": null
},
{
"text": "As usual in supervised training, we split the whole dataset (positive and negative pairs) into train, validation and test partitions. Following the work of Shwartz et al. (2016) , we consider two splits of the data: a random split and a lexical split. While the random split is performed randomly, the lexical split does not allow lexical intersection between the partitions. In the following sections we describe each one.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Splits",
"sec_num": "3.3."
},
{
"text": "The random split consists in splitting the dataset randomly, without any further consideration. We perform a random split with the following ratio: 70% for the training set, 25% for the test set and 5% for the validation set. This splitting process has the advantage that no tuple is discarded, leading to a larger dataset, but it may suffer from the phenomenon of lexical memorization (Levy et al., 2015) . Lexical memorization occurs when the model, instead of learning the semantic relationship between the words of a pair, learns a specific word independently as a strong indicator of the label. For example, given positive pairs such as (cat, animal), (dog, animal) and (horse, animal), the algorithm tends to learn that the word \"animal\" is a \"prototype\" and classifies any new pair (x, animal) as positive.",
"cite_spans": [
{
"start": 384,
"end": 403,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Random Split",
"sec_num": "3.3.1."
},
{
"text": "To avoid the phenomenon of lexical memorization, the training, validation and test sets are split with disjoint vocabularies. We split the dataset with the same methodology as in (Shwartz et al., 2016) . The approximate division ratio was 70-25-5. The respective sizes of the random and lexical splits of our base dataset are shown in Table 3 . Table 3 : Spanish dataset sizes for each split, lexical and random. The sizes are broken down into positive (P) and negative (N) instances. These sizes do not include cohyponyms or pattern-extracted positive instances.",
"cite_spans": [
{
"start": 178,
"end": 200,
"text": "(Shwartz et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 3",
"ref_id": null
},
{
"start": 344,
"end": 351,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Split",
"sec_num": "3.3.2."
},
{
"text": "To automatically detect hypernymy we consider a simple feed forward network trained as an order embedding (Vendrov et al., 2015). This network maps each word embedding to a non-negative vector in a space with a partial order relation, and is trained to map hypernym pairs to order-related vectors. In this work we show that fairly good results can be achieved without paths or any information beyond the word embedding of each word, using a feed forward network trained as described. We first give an introduction to the order embedding proposal and then describe our experimental configuration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using Order Embeddings",
"sec_num": "4."
},
{
"text": "An order embedding is a function between two partially ordered sets f : (X, \\preceq_X) \\rightarrow (Y, \\preceq_Y) that preserves and reflects their order relations. That is to say, x_1 \\preceq_X x_2 if and only if f(x_1) \\preceq_Y f(x_2). Vendrov et al. (2015) introduce a method to train an order embedding into \\mathbb{R}^m_{\\geq 0} considering the reversed product order, defined as follows:",
"cite_spans": [
{
"start": 205,
"end": 226,
"text": "Vendrov et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Order Embedding Model",
"sec_num": "4.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x \\preceq y \\iff \\bigwedge_{i=1}^{m} x_i \\geq y_i ,",
"eq_num": "(1)"
}
],
"section": "Order Embedding Model",
"sec_num": "4.1."
},
{
"text": "where x, y \\in \\mathbb{R}^m_{\\geq 0} and x_i and y_i correspond to the i-th components of x and y, respectively. By definition this relation is antisymmetric and transitive, with 0 being the top element of the hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Order Embedding Model",
"sec_num": "4.1."
},
{
"text": "The partial order relation (\\mathbb{R}^m_{\\geq 0}, \\preceq) defined above allows us to define measures that quantify the degree to which a pair of elements does not satisfy the relation. Let us consider",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E_p(x, y) = \\| \\max(0, y - x) \\|^2 ,",
"eq_num": "(2)"
}
],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "where x, y \\in \\mathbb{R}^m_{\\geq 0} and max is the element-wise maximum function. Note that E_p indicates the degree of relation satisfaction, and E_p(x, y) = 0 iff x \\preceq y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "Then, E_p can be forced to be higher than a threshold \u03b1 for unrelated terms through the max-margin loss, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E_n(x, y) = \\max\\{0, \\alpha - E_p(x, y)\\},",
"eq_num": "(3)"
}
],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "guaranteeing that E_n(x', y') is 0 when E_p(x', y') \\geq \\alpha, and therefore x' \\not\\preceq y'. Then, summing (2) and (3), the resulting contrastive loss function, which minimizes E_p and E_n jointly, stands as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \\sum_{(x,y) \\in P} E_p(x, y) + \\sum_{(x',y') \\in N} E_n(x', y'),",
"eq_num": "(4)"
}
],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "where P and N are sets of positive and negative examples, respectively. Note that L is differentiable, allowing a mapping to an order embedding to be fitted through gradient-descent-based techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Loss Function",
"sec_num": "4.1.1."
},
{
"text": "We search for a good hyperparameter configuration through random search, selecting the configuration according to the validation set, and report the evaluation results on the test partition. We consider feed forward networks using pretrained fastText (Joulin et al., 2016) word vectors for Spanish and English. [Table fragment: Best Distributional (Shwartz et al., 2016) 0.901 0.637 0.746 0.754 0.551 0.637; HypeNET Integrated (Shwartz et al., 2016) .]",
"cite_spans": [
{
"start": 229,
"end": 251,
"text": "(Shwartz et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 307,
"end": 329,
"text": "(Shwartz et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 373,
"end": 394,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Configuration",
"sec_num": "4.2."
},
{
"text": "We evaluate our models using precision, recall and F measures. The best configuration consisted on a three layered feed forward networks, with 150 neurons and SELU activation function on the first two layers and 100 ReLU units for the output layer. For the training we consider Adam (Kingma and Ba, 2014) , with a learning rate of 0.005, and we conclude the training by early stopping, with a patience of 5. We checkout the best performing model against the validation set along the whole training.",
"cite_spans": [
{
"start": 283,
"end": 304,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Configuration",
"sec_num": "4.2."
},
{
"text": "We include for comparison the results of the best distributional model reported by Shwartz et al. (Shwartz et al., 2016) and HypeNET integrated mdoel. In the Table 5 can be seen how the order embedding achieves considerable good results in comparison to the best distributional model reported by Shwartz and also in comparison to HypeNET, that is a pattern-based and distributional combined model. We found interesting the good performance of the order embedding model taking as input general purpose word embeddings and without considering any explicit paths information on a corpus.",
"cite_spans": [
{
"start": 83,
"end": 120,
"text": "Shwartz et al. (Shwartz et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results for English",
"sec_num": "4.3."
},
{
"text": "In this section we show the results obtained with the above described model in the introduced dataset for Spanish. We report order embedding results as a baseline in the dataset for future comparisons. In order to show the behavior of pattern-extracted and cohyponymy instances we consider the following different variants of the training data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Spanish",
"sec_num": "4.3.1."
},
{
"text": "\u2022 As base, the positive instances from WordNet and the translated instances of Shwartz dataset, and the negative instances randomly, sampling words from the vocabularies of Cardellino and WordNet. (OrdEmb) \u2022 The base dataset adding cohyponyms as negative instances for training. (OrdEmb +cohyp)",
"cite_spans": [
{
"start": 173,
"end": 205,
"text": "Cardellino and WordNet. (OrdEmb)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Spanish",
"sec_num": "4.3.1."
},
{
"text": "\u2022 The base dataset adding positive instances extracted by patterns. (OrdEmb +pattern)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Spanish",
"sec_num": "4.3.1."
},
{
"text": "\u2022 The base dataset adding for training cohyponyms as negative instances and pattern extracted pairs as positive. (OrdEmb +pattern+cohyp) We show the obtained results in the table 4. We evaluate the model against the base test partition and including cohyponymy instances on the test data. In the results can be observed that both cohyponyms and pattern-extracted instances during the training give some improvement in most cases, where cohyponyms are most beneficial, with the exception of the lexical split evaluating with cohyponyms addition in test partition.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 136,
"text": "(OrdEmb +pattern+cohyp)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for Spanish",
"sec_num": "4.3.1."
},
{
"text": "In this paper we show the results obtained on supervised hypernymy detection in Spanish. Given the lack of resources in Spanish for hypernymy detection we build a dataset based on previous work for English. We included two versions of the dataset according to its train, validation and test partitions, and the lexical intersection between them: random and lexical split. The former is done randomly while the lexical split does not contain lexical intersection between the partitions, tackling the lexical memorization problem of the hypernymy detection. We train an order embedding using general purpose word vectors and we obtain that considerable good results. We show the behavior of including cohyponyms pairs for the training considerably improves the overall result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Spanish Billion Word Corpus and Embeddings by Cristian Cardellino: https://crscardellino.github.io/ SBWCE/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Spanish wordnet 1.6: Porting the spanish wordnet across princeton versions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Atserias",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Villarejo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atserias, J., Villarejo, L., and Rigau, G. (2004). Spanish wordnet 1.6: Porting the spanish wordnet across prince- ton versions. In LREC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M., Bernardi, R., Do, N., and Shan, C. (2012). En- tailment above the word level in distributional semantics. In EACL, pages 23-32. The Association for Computer Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semeval-2018 task 9: Hypernym discovery",
"authors": [
{
"first": "J",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delli Bovi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oramas",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "712--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Camacho-Collados, J., Delli Bovi, C., Espinosa-Anke, L., Oramas, S., Pasini, T., Santus, E., Shwartz, V., Nav- igli, R., and Saggion, H. (2018). Semeval-2018 task 9: Hypernym discovery. In Proceedings of the 12th Inter- national Workshop on Semantic Evaluation (SemEval- 2018); 2018 Jun 5-6; New Orleans, LA. Stroudsburg (PA): ACL; 2018. p. 712-24. ACL (Association for Com- putational Linguistics).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spanish billion words corpus and embeddings. Spanish Billion Words Corpus and Embeddings",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cardellino",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cardellino, C. (2016). Spanish billion words corpus and embeddings. Spanish Billion Words Corpus and Embed- dings.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language inference with external knowl",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Q., Zhu, X., Ling, Z., Inkpen, D., and Wei, S. (2017). Natural language inference with external knowl- edge. CoRR, abs/1711.04289.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using and extending wordnet to support question-answering. 01. Dash",
"authors": [
{
"first": "P",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "J",
"middle": [
"; S"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "M",
"middle": [
"F M"
],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gliozzo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mihindukulasooriya",
"suffix": ""
},
{
"first": "N",
"middle": [
"R"
],
"last": "Fauceglia",
"suffix": ""
}
],
"year": 2007,
"venue": "Hypernym detection using strict partial order networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, P., Fellbaum, C., and Hobbs, J. (2007). Using and extending wordnet to support question-answering. 01. Dash, S., Chowdhury, M. F. M., Gliozzo, A., Mihinduku- lasooriya, N., and Fauceglia, N. R. (2019). Hypernym detection using strict partial order networks.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Supervised distributional hypernym discovery via domain adaptation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delli Bovi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Espinosa-Anke, L., Camacho-Collados, J., Delli Bovi, C., and Saggion, H. (2016). Supervised distributional hy- pernym discovery via domain adaptation. In Conference on Empirical Methods in Natural Language Processing;",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ACL; 2016. p. 424-35. ACL (Association for Computational Linguistics)",
"authors": [
{
"first": "Tx",
"middle": [
"Red"
],
"last": "Austin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hook",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austin, TX. Red Hook (NY): ACL; 2016. p. 424-35. ACL (Association for Computational Linguis- tics).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning semantic hierarchies via word embeddings",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "1199--1209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, R., Guo, J., Qin, B., Che, W., Wang, H., and Liu, T. (2014). Learning semantic hierarchies via word embed- dings. In ACL (1), pages 1199-1209. The Association for Computer Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "14th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. A. (1992). Automatic acquisition of hyponyms from large text corpora. In 14th International Con- ference on Computational Linguistics, COLING 1992, Nantes, France, August 23-28, 1992, pages 539-545.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fasttext.zip: Compressing text classification models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03651"
]
},
"num": null,
"urls": [],
"raw_text": "Joulin, A., Grave, E., Bojanowski, P., Douze, M., J\u00e9gou, H., and Mikolov, T. (2016). Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Do supervised distributional methods really learn lexical inference relations?",
"authors": [
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "970--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levy, O., Remus, S., Biemann, C., and Dagan, I. (2015). Do supervised distributional methods really learn lexi- cal inference relations? In Rada Mihalcea, et al., ed- itors, NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 -June 5, 2015, pages 970-976. The Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantic hierarchies for visual object recognition. Computer Vision and Pattern Recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marszalek",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2007,
"venue": "CVPR '07. IEEE Conference",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marszalek, M. and Schmid, C. (2007). Semantic hierar- chies for visual object recognition. Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference, pages 1-7, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Commun. ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. (1995). Wordnet: A lexical database for en- glish. Commun. ACM, 38(11):39-41.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "PATTY: A taxonomy of relational patterns with semantic types",
"authors": [
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "F",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1135--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakashole, N., Weikum, G., and Suchanek, F. M. (2012). PATTY: A taxonomy of relational patterns with semantic types. In EMNLP-CoNLL, pages 1135-1145. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning word-class lattices for definition and hypernym extraction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1318--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R. and Velardi, P. (2010). Learning word-class lattices for definition and hypernym extraction. In Pro- ceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1318-1327, Upp- sala, Sweden, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A graphbased algorithm for inducing lexical taxonomies from scratch",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Faralli",
"suffix": ""
}
],
"year": 2011,
"venue": "Twenty-Second International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R., Velardi, P., and Faralli, S. (2011). A graph- based algorithm for inducing lexical taxonomies from scratch. In Twenty-Second International Joint Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hacia la identificaci\u00c3de relaciones de hiponimia/hiperonimia en Internet",
"authors": [
{
"first": "R",
"middle": [
"M A"
],
"last": "Ortega",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Aguilar",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Villase\u00e3",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Montes",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sierra",
"suffix": ""
}
],
"year": 2011,
"venue": "Revista signos",
"volume": "44",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ortega, R. M. A.-a., Aguilar, C. A., Villase\u00c3, L., Montes, M., and Sierra, G. (2011). Hacia la identificaci\u00c3de rela- ciones de hiponimia/hiperonimia en Internet. Revista signos, 44:68 -84, 03.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Inclusive yet selective: Supervised distributional hypernymy detection",
"authors": [
{
"first": "S",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boleda",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014)",
"volume": "",
"issue": "",
"pages": "1025--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roller, S., Erk, K., and Boleda, G. (2014). Inclusive yet selective: Supervised distributional hypernymy de- tection. In Proceedings of the 25th International Con- ference on Computational Linguistics (COLING 2014), pages 1025-1036, Dublin, Ireland, August.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Chasing hypernyms in vector spaces with entropy",
"authors": [
{
"first": "E",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "38--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santus, E., Lenci, A., Lu, Q., and Schulte im Walde, S. (2014). Chasing hypernyms in vector spaces with en- tropy. In Proceedings of the 14th Conference of the Eu- ropean Chapter of the Association for Computational Linguistics, EACL 2014, April 26-30, 2014, Gothenburg, Sweden, pages 38-42.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improving hypernymy detection with an integrated path-based and distributional method",
"authors": [
{
"first": "V",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shwartz, V., Goldberg, Y., and Dagan, I. (2016). Improv- ing hypernymy detection with an integrated path-based and distributional method. CoRR, abs/1603.06076.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning syntactic patterns for automatic hypernym discovery",
"authors": [
{
"first": "R",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1297--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Snow, R., Jurafsky, D., and Ng, A. Y. (2004). Learn- ing syntactic patterns for automatic hypernym discovery. In Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems, NIPS 2004, December 13-18, 2004, Vancouver, British Columbia, Canada], pages 1297-1304.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Negative sampling improves hypernymy extraction based on projection learning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ustalov",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Arefyev",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Panchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "543--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ustalov, D., Arefyev, N., Biemann, C., and Panchenko, A. (2017). Negative sampling improves hypernymy extrac- tion based on projection learning. In EACL (2), pages 543-550. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Order-embeddings of images and language",
"authors": [
{
"first": "I",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vendrov, I., Kiros, R., Fidler, S., and Urtasun, R. (2015). Order-embeddings of images and language. CoRR, abs/1511.06361.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Specialising word vectors for lexical entailment",
"authors": [
{
"first": "I",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mrksic",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vulic, I. and Mrksic, N. (2017). Specialising word vectors for lexical entailment. CoRR, abs/1710.06371.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL (1). The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vylomova, E., Rimell, L., Cohn, T., and Baldwin, T. (2016). Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical re- lation learning. In ACL (1). The Association for Com- puter Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning to distinguish hypernyms and cohyponyms",
"authors": [
{
"first": "J",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Weir",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "2249--2259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weeds, J., Clarke, D., Reffin, J., Weir, D. J., and Keller, B. (2014). Learning to distinguish hypernyms and co- hyponyms. In COLING, pages 2249-2259. ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The distributional inclusion hypotheses and lexical entailment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhitomirsky-Geffet, M. and Dagan, I. (2005). The distri- butional inclusion hypotheses and lexical entailment. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Order embedding diagram."
},
"TABREF1": {
"type_str": "table",
"text": "Results on test set on Spanish. The upper table (a) shows the result of evaluating without introducing inferred cohyponymy instances in the test partition and the lower table (b) shows the results including cohyponymy instances in the test partition. The labels +cohyp and +pattern stand for cohyponymy and pattern-extracted instances in the training data.",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>P rand R rand F rand</td><td>P lex</td><td>R lex</td><td>F lex</td></tr><tr><td/><td>OrdEmb</td><td colspan=\"3\">0.855 0.904 0.879 0.823 0.674 0.741</td></tr><tr><td>(a)</td><td>OrdEmb +cohyp</td><td colspan=\"3\">0.857 0.932 0.893 0.809 0.827 0.818</td></tr><tr><td/><td>OrdEmb +pattern</td><td colspan=\"3\">0.860 0.885 0.872 0.798 0.766 0.782</td></tr><tr><td/><td colspan=\"4\">OrdEmb +pattern +cohyp 0.859 0.930 0.893 0.802 0.821 0.811</td></tr><tr><td/><td/><td>P rand R rand F rand</td><td>P lex</td><td>R lex</td><td>F lex</td></tr><tr><td/><td>OrdEmb</td><td colspan=\"3\">0.719 0.946 0.817 0.744 0.841 0.789</td></tr><tr><td>(b)</td><td>OrdEmb +cohyp</td><td colspan=\"3\">0.847 0.869 0.858 0.781 0.716 0.747</td></tr><tr><td/><td>OrdEmb +pattern</td><td colspan=\"3\">0.742 0.931 0.826 0.666 0.857 0.749</td></tr><tr><td/><td colspan=\"4\">OrdEmb +pattern +cohyp 0.848 0.870 0.859 0.759 0.678 0.716</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "Order embedding results with different activation functions on test of Shwartz English dataset, and we include HypeNET and Best Distributional results reported by Shwartz.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}