| { |
| "paper_id": "R15-1046", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:57:39.852020Z" |
| }, |
| "title": "Taxonomy Beats Corpus in Similarity Identification, but Does It Matter?", |
| "authors": [ |
| { |
| "first": "Minh", |
| "middle": [ |
| "Ngoc" |
| ], |
| "last": "Le", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "VU University Amsterdam", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Antske", |
| "middle": [], |
| "last": "Fokkens", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "VU University Amsterdam", |
| "location": {} |
| }, |
| "email": "antske.fokkens@vu.nl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present extensive evaluations comparing the performance of taxonomy-based and corpus-based approaches on SimLex-999. The results confirm our hypothesis that taxonomy-based approaches are more suitable to identify similarity. We introduce two new measures of evaluation that show that all measures perform well on a coarse-grained evaluation and that it is not always clear which approach is most suitable when a similarity score is used as a threshold. This leads us to conclude that the inferior performance of corpus-based approaches may not (always) matter.", |
| "pdf_parse": { |
| "paper_id": "R15-1046", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present extensive evaluations comparing the performance of taxonomy-based and corpus-based approaches on SimLex-999. The results confirm our hypothesis that taxonomy-based approaches are more suitable to identify similarity. We introduce two new measures of evaluation that show that all measures perform well on a coarse-grained evaluation and that it is not always clear which approach is most suitable when a similarity score is used as a threshold. This leads us to conclude that the inferior performance of corpus-based approaches may not (always) matter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Similarity measures are used in a wide variety of Natural Language Processing (NLP) tasks (see Pilehvar et al. (2013) , among others for examples). They may be used, e.g. to increase coverage of an approach by using information from similar words for unseen data, or to establish average similarity between a question and a potential answer.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 117, |
| "text": "Pilehvar et al. (2013)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Due to their importance, similarity measures have received steady attention in computational linguistics. There are two widely followed but distinct schools: taxonomy-based approaches and distributional, or corpus-based, approaches. Apart from a few exceptions, these approaches have mostly been studied separately.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our main goal is to examine how the approaches perform when identifying true similarity between word-pairs, in contrast to the more general relatedness, which also includes association. We evaluate the approaches on the new gold-standard SimLex-999 (Hill et al., 2014b) . We compare taxonomy-based approaches that use WordNet (Fellbaum, 1998) to the corpus-based approaches that performed best on SimLex-999 in Hill et al. (2014a) . We hypothesize that taxonomy-based approaches outperform corpus-based approaches on a true similarity set, because corpus-based approaches tend to mix up similarity and association.",
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 269, |
| "text": "(Hill et al., 2014b)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 326, |
| "end": 342, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 411, |
| "end": 430, |
| "text": "Hill et al. (2014a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We carry out several evaluations which investigate (i) the difference in performance on pure similarity sets and sets that combine similarity and association, (ii) the influence of associative pairs while identifying true similarity, and (iii) various evaluation metrics that compare similarity measures to the gold standard of SimLex-999.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We use more than one evaluation metric for two reasons. First, different ranking coefficients can lead to completely different outcomes when evaluating similarity scores (Fokkens et al., 2013) . Second, we want to gain more insight into the differences between individual measures. To this end, we introduce two new, more flexible evaluation methods, which reveal high results for all similarity measures. We argue that these new evaluations provide better insight into how suitable similarity measures are for use in NLP tasks than the commonly used Spearman's correlation (henceforth Spearman \u03c1).",
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 197, |
| "text": "(Fokkens et al., 2013)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our results show that most of the evaluations confirm our hypothesis. The few cases where corpus-based methods outperform taxonomy-based approaches reveal much smaller differences than the many cases where taxonomy-based approaches achieve higher results. However, all similarity measures perform very well when they are evaluated on the relative ranking of word-pairs that are further apart in the gold standard. We therefore conclude that, even though taxonomy-based approaches are better at identifying similarity than corpus-based approaches, this may not (always) matter.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The rest of this paper is structured as follows. In Section 2, we motivate our approach and address related work. Section 3 describes the similarity measures we investigate. In Section 4, we outline our experimental methodology, including the datasets and evaluation methods used. The results are presented in Section 5, and our conclusions and future work in Section 6.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Several gold-standards have been created that rank word-pairs based on their similarity. Agirre et al. (2009) point out that association and similarity are mixed up in these sets, where associated pairs such as coffee and cup rank higher than truly similar pairs such as car and train. This confusion directly influences the performance of corpus-based approaches, which also tend to have difficulties distinguishing association from similarity (Hill et al., 2014a) . Hill et al. (2014b) introduce a new gold standard dataset that is annotated with pure semantic similarity and is larger than previously created similarity sets, such as those of Rubenstein and Goodenough (1965) and Agirre et al. (2009) . Hill et al. (2014a) evaluate corpus-based approaches and show that they indeed have trouble identifying similarity, performing well below the upper bound of agreement between human annotators.",
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 109, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 444, |
| "end": 464, |
| "text": "(Hill et al., 2014a)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 467, |
| "end": 486, |
| "text": "Hill et al. (2014b)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 633, |
| "end": 665, |
| "text": "Rubenstein and Goodenough (1965)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 670, |
| "end": 690, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 700, |
| "end": 719, |
| "text": "Hill et al. (2014a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "It is not surprising that corpus-based approaches confuse similarity and association: semantically related words tend to occur close to each other and hence in similar contexts. Approaches that make use of a relatively narrow context window perform slightly better, because they can capture more subtle differences in context to some extent.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Taxonomies represent word meanings in hypernym and hyponym hierarchies, directly capturing their similarity. The closer two terms are in the hierarchy, the more similar they are. Similarity measures that make use of this structure are less likely to confuse whether two terms are similar or related in some other way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "These well-known properties of corpus-based and taxonomy-based approaches led to the following hypothesis:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "Taxonomy-based approaches are better suited to identify similarity than corpus-based approaches. Agirre et al. (2009) seem to contradict this hypothesis, showing that corpus-based approaches can be just as good at identifying similarity (when the right model is based on enough data). However, Hill et al. (2014b) point out that Agirre et al.'s evaluation set does not form a representative set for measuring similarity, even after they made an alternative set that separates association and similarity. We therefore expected that the hypothesis would nevertheless hold on SimLex-999.",
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 116, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 287, |
| "end": 306, |
| "text": "Hill et al. (2014b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "The outcome of our experiments confirmed our hypothesis, thus contradicting Agirre et al. (2009) 's results and being, to our knowledge, the first to show this on such a large and reliable benchmark. Banjade et al. (2015) also apply WordNet-based and corpus-based similarity measures to SimLex-999, but do not examine or discuss the difference between taxonomy-based and corpus-based approaches in detail. Instead, they focus on the strength of combining several approaches to yield better results. 1 We investigate the difference between the approaches in various evaluations, showing that taxonomy-based approaches outperform corpus-based approaches, a conclusion that cannot be drawn (clearly) from Banjade et al. (2015) 's results. It should be noted that our conclusions only apply to the task of identifying pure similarity. Markert and Nissim (2005) show, for instance, that a corpus-based approach with a sufficiently large corpus works better than WordNet for anaphora resolution.",
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 96, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 200, |
| "end": 221, |
| "text": "Banjade et al. (2015)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 512, |
| "end": 513, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 714, |
| "end": 735, |
| "text": "Banjade et al. (2015)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 843, |
| "end": 868, |
| "text": "Markert and Nissim (2005)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "The next step in our investigation was to determine the strengths and weaknesses of each approach. The original idea was to investigate pairs that are ranked more or less correctly by one approach but far off by the other, in order to identify patterns of errors in each approach. We did not find such patterns, partially because examples with large differences in ranking compared to the gold standard are relatively rare.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "We therefore developed two alternative evaluation methods that are less sensitive to minor differences in ranking. The first evaluation directly tests the comparison of pairs and, more importantly, allows us to study the contribution of partitions of the dataset. The second evaluation revolves around thresholds for similarity. In this evaluation, we set thresholds to establish a binary distinction between highly similar pairs and other pairs. The pairs above the similarity threshold are compared to those above the threshold in the gold standard (see Section 4.2).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
"text": "Many studies compare similarity measures (see Baroni et al. (2014) and Pedersen (2010) , among others) but, to our knowledge, Agirre et al. (2009) and Banjade et al. (2015) are the only ones that look at both taxonomy-based and distributional approaches. As mentioned above, they do not dive into the details of the differences between the two. Furthermore, apart from Fokkens et al. (2013) , who do not propose new rankings, we are not aware of studies applying multiple evaluation metrics to similarity-based rankings.",
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 66, |
| "text": "Baroni et al. (2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 71, |
| "end": 86, |
| "text": "Pedersen (2010)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 126, |
| "end": 146, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 151, |
| "end": 172, |
| "text": "Banjade et al. (2015)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This section describes the similarity measures compared in this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity Measures", |
| "sec_num": "3" |
| }, |
| { |
"text": "WordNet (Fellbaum, 1998) organizes nouns and verbs in hierarchies of hypernym-hyponym relations. We selected WordNet for our taxonomy-based experiments because it is widely used and probably the most popular taxonomy when it comes to determining word similarity. Many measures of similarity based on WordNet have been proposed over the years. Early work (Rada et al., 1989) advocates the use of the is-a hierarchy, and later approaches continue to use it heavily. In order to make a clean comparison between WordNet and distributional models, we do not include in our study measures that make use of a corpus, such as Resnik (1995) and Jiang and Conrath (1997) .",
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 373, |
| "text": "(Rada et al., 1989)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 612, |
| "end": 625, |
| "text": "Resnik (1995)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 630, |
| "end": 654, |
| "text": "Jiang and Conrath (1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Path length similarity takes the inverse of the path length (i.e. the distance in number of nodes) from s_1 to s_2 plus one.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "PL = 1 / (d(s_1, s_2) + 1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Wu and Palmer's similarity (Wu and Palmer, 1994) takes into account the fact that senses deeper in the hierarchy tend to be more specific than those higher up. It therefore incorporates the depth of the hierarchy in its similarity calculation:",
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 48, |
| "text": "(Wu and Palmer, 1994)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "WUP = 2 depth(lcs) / (d(s_1, lcs) + d(s_2, lcs) + 2 depth(lcs))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Leacock and Chodorow's similarity (Leacock and Chodorow, 1998) normalizes path-based scores by the maximum depth D of the hierarchy. This corrects for the difference in depth between the verb and noun hierarchies:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "LCH = \u2212log((d(s_1, s_2) + 1) / (2D))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomy-based Similarity Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We selected two representative models from the large and growing literature on corpus-based models of lexical semantics: Word2vec (Mikolov et al., 2013, W2V) and dependency-based word embeddings (Levy and Goldberg, 2014a, DEPS) .", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 157, |
| "text": "(Mikolov et al., 2013, W2V)", |
| "ref_id": null |
| }, |
| { |
| "start": 195, |
| "end": 227, |
| "text": "(Levy and Goldberg, 2014a, DEPS)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Word2vec is the first model to use the Skip-Gram with Negative Sampling (SGNS) algorithm for constructing semantic models and performed best on SimLex-999 in Hill et al. (2014a) . Levy and Goldberg (2014b) argue that SGNS implicitly factorizes a shifted positive pointwise mutual information word-context matrix, not unlike traditional distributional semantic models. The use of a small window size and a weighting scheme that favors nearby contexts is supported by a systematic study by Kiela and Clark (2014) that shows the superiority of small windows. Moreover, Sahlgren (2006) presents empirical evidence that smaller windows lead to a cleaner distinction between syntagmatic and paradigmatic relations (which can be considered the linguistic counterparts of association and similarity, respectively).",
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 175, |
| "text": "Hill et al. (2014a)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 178, |
| "end": 203, |
| "text": "Levy and Goldberg (2014b)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 478, |
| "end": 500, |
| "text": "Kiela and Clark (2014)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 556, |
| "end": 571, |
| "text": "Sahlgren (2006)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Levy and Goldberg (2014a) extend SGNS to work with arbitrary contexts and experiment with dependency structures. It is generally believed that dependency structures are better at capturing similarity (Pad\u00f3 and Lapata, 2007) , although Kiela and Clark (2014) found mixed results.",
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 223, |
| "text": "(Pad\u00f3 and Lapata, 2007)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 233, |
| "end": 255, |
| "text": "Kiela and Clark (2014)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The Skip-gram model captures the distribution p(c|t) of a context word c within a certain window around a target word t. For a vocabulary of millions of words, computing normalized probabilities (i.e. summing to one) for each example is prohibitively expensive; negative sampling avoids this cost.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "For each context-target pair (c, t) taken from the training data, we replace the context with random words drawn from the vocabulary to obtain new pairs {(c', t)}. We call D the positive distribution over pairs (c, t) and N the negative distribution over pairs (c', t). The task of the model is to identify which pairs come from D and which from N ; formally, it minimizes the negative log likelihood:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "\u2113 = \u2212(log p(D|c, t) + log p(N|c', t))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The probability is calculated using target embeddings e_t \u2208 R^d and context embeddings \u00ea_c \u2208 R^d such that:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "p(D|c, t) = \u03c3(e_t \u2022 \u00ea_c),",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "where \u03c3(x) = 1 / (1 + e^{\u2212x}) is a monotonic function that maps any value in (\u2212\u221e, +\u221e) to a valid probability.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The training objective encourages increasing p(D|c, t), which can be achieved by aligning e_t and \u00ea_c in similar directions. On the other hand, the objective also encourages a small p(D|c', t) for negative pairs, creating a uniform \"repelling force\" between all pairs of words. After many update iterations, similar words come close together while dissimilar words are pulled apart.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "We used the trained embeddings from Mikolov et al. (2013) and Levy and Goldberg (2014a) . 2 Word2vec embeddings are 300-dimensional vectors obtained by training on 100 billion words of the Google News dataset. Dependency-based embeddings were harvested from English Wikipedia automatically annotated with dependency structures. Although the dependency-based model was trained on a significantly smaller corpus, it achieves comparable results, as we will show in Section 5.",
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 57, |
| "text": "Mikolov et al. (2013)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 62, |
| "end": 87, |
| "text": "Levy and Goldberg (2014a)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this section, we describe the experimental setup used in our evaluations. We first describe the datasets and then the evaluation metrics we use.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We evaluate the approaches on three datasets. WordSim-353 and MEN allow us to compare performance on sets that mix association and similarity. SimLex-999's ranking is based on similarity only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "WordSim-353 (Finkelstein et al., 2001) includes 353 word pairs scored for relatedness on a scale from 0 to 10 by 13 or 16 subjects. The inter-annotator agreement, defined as the average pairwise Spearman's correlation, is 0.611, though researchers have reported correlations as high as 0.81 (Yih and Qazvinian, 2012). Agirre et al. (2009) later divided WordSim-353 into a \"similarity\" and a \"relatedness\" set. However, Hill et al. (2014b) rightly point out that both remain relatedness datasets, because this is what the annotators rated.",
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 38, |
| "text": "(Finkelstein et al., 2001)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 307, |
| "end": 327, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 406, |
| "end": 425, |
| "text": "Hill et al. (2014b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "MEN (Bruni et al., 2012) is composed of 3,000 word pairs, sampled to include a balanced range of relatedness. Annotators were asked to choose 2 The models are available at:", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 24, |
| "text": "(Bruni et al., 2012)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 142, |
| "end": 143, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Footnote 2: The models are available at https://code.google.com/p/word2vec/ and https://levyomer.wordpress.com/2014/04/25/dependency-based-word-embeddings",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "SimLex-999 (Hill et al., 2014b) carefully distinguishes between similarity and association and provides a balanced range of similarity, concreteness and parts-of-speech. The authors sampled 900 associated pairs from the University of South Florida Free Association Database (Nelson et al., 2004) and randomly coupled words to create a further 99 unassociated pairs. Subjects were asked to judge the similarity of word pairs on a 0-6 scale. Their answers were averaged to produce the final score.",
| "cite_spans": [ |
| { |
| "start": 11, |
| "end": 31, |
| "text": "(Hill et al., 2014b)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 274, |
| "end": 295, |
| "text": "(Nelson et al., 2004)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "All three datasets are lemma-based. Two words, however, are more naturally compared via their senses (e.g. queen is not similar to princess when referring to a chess piece). We follow Resnik (1995) in using the maximally similar senses in our taxonomy-based approaches.",
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 208, |
| "text": "Resnik (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold-standard Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "The first evaluation measure we use compares the gold ranking with a measure's ranking using Spearman's \u03c1, the most widely used evaluation metric for similarity scores.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "Hill et al. (2014b) report performance on a subset of highly associated word pairs, but its contribution to the overall performance is unclear. We wish to gain deeper insight into how different subsets of the data contribute to the overall score. This is not possible with Spearman's \u03c1 due to its holistic nature. We overcome this by using ordering accuracy, following Agirre et al. (2009) . The measure is defined as:",
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 388, |
| "text": "Agirre et al. (2009)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "a = a_{G,G} = (1 / |G|^2) \u03a3_{(u,v) \u2208 G} \u03a3_{(x,y) \u2208 G} m_{s,G}(u, v, x, y)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "where G stands for the gold standard and m_{s,G}(\u2022) is a matching function that returns 1 for two word-pairs whose relative ranking is the same in the gold standard and in the ranking of the similarity measure, and 0 otherwise. We also experiment with a variation of m in which ties receive half credit. As shown in Figure 1 , ordering accuracy correlates highly with Spearman's \u03c1.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 312, |
| "end": 320, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "If G can be partitioned into n subsets g_i (i.e. g_i \u2229 g_j = \u2205 for i \u2260 j, and \u222a_i g_i = G), then a can be decomposed as a weighted sum of the accuracies on the different subsets, with weights proportional to their sizes. The final evaluation measure is based on the observation that many approaches use a threshold to determine which words are similar enough to be used for contributing features or approximations, or to be candidates for lexical substitution (e.g. McCarthy and Navigli, 2009; Biran et al., 2011) . Threshold accuracy sets a similarity threshold and determines how many of the n highest-ranking word pairs in a given measure are also in the top-n pairs of the gold standard. In other words, this evaluation determines whether the right word-pairs would end up above the threshold for being similar.",
| "cite_spans": [ |
| { |
| "start": 436, |
| "end": 465, |
| "text": "(Mc-Carthy and Navigli, 2009;", |
| "ref_id": null |
| }, |
| { |
| "start": 466, |
| "end": 491, |
| "text": "Biran et al., 2011, e.g.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "a = (1 / |G|^2) \u03a3_i \u03a3_j |g_i| |g_j| a_{g_i, g_j}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "We calculated the similarity scores of all noun and verb pairs in SimLex-999 (a set of 888 pairs), MEN (2,034 pairs), and all pairs in WordSim-353 using the measures outlined in Section 3, and ranked the word pairs according to the outcome. The results confirm that taxonomy-based approaches capture similarity rather than association, whereas corpus-based approaches do not clearly distinguish the two. Table 2 presents the evaluation of our metrics using ordering accuracy. The first column indicates the standard score. The scores in the second and third columns are calculated while giving partial credit to ties. Note that this only affects the performance of taxonomy-based approaches, where it is common for word pairs to have identical scores. Without correction for ties, scores for taxonomy-based and corpus-based measures are highly similar, with the corpus-based DEPS achieving the highest results. When we do correct for ties, taxonomy-based approaches again uniformly beat corpus-based approaches, confirming the outcome of our Spearman \u03c1 evaluation.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 383, |
| "end": 390, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
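The ordering accuracy with partial credit for ties, as described above, can be sketched as follows (an illustrative reconstruction, not the authors' actual code; `gold` and `pred` are assumed to be parallel lists of similarity scores for the same word pairs):

```python
import itertools

def ordering_accuracy(gold, pred, tie_credit=0.5):
    """Pairwise ordering agreement between a measure and the gold standard.

    For every two word pairs with distinct gold scores, check whether the
    measure ranks them in the same order; tied predictions earn
    `tie_credit` (0.5 here; set it to 0.0 for the uncorrected variant).
    """
    correct = total = 0.0
    for (g1, p1), (g2, p2) in itertools.combinations(zip(gold, pred), 2):
        if g1 == g2:
            continue  # gold imposes no order on this comparison
        total += 1
        if p1 == p2:
            correct += tie_credit  # tied prediction: partial credit
        elif (p1 > p2) == (g1 > g2):
            correct += 1
    return correct / total
```

Without the tie correction, a measure that assigns many identical scores (as taxonomy-based measures do) simply loses all tied comparisons, which is why the correction matters mainly for those measures.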
| { |
| "text": "We also evaluate on a subset of highly associated words. The results are presented in column 3 of Table 2 . A sizeable decrease is observed for corpus-based measures on highly associated terms, while taxonomy-based measures remain largely unaffected. This result confirms once more our hypothesis that taxonomy-based measures are better suited to capture similarity and that corpus-based methods tend to have difficulty separating similarity from association. Palmer et al. (2007) showed that making subtle sense distinctions is hard for human subjects, leading to evaluations in which both coarse-grained and fine-grained word senses are considered (Palmer et al., 2007; Navigli et al., 2007) . Similarly, establishing which word pair is more similar than another is challenging when pairs are close in similarity. This is illustrated by the sample pairs in Table 3 . [Table 3 : Is the pair on the left or on the right more similar? (All pairs are extracted from ...) \u2206 = 0: pollution-president, forget-learn, take-leave, succeed-try, army-squad, girl-child, emotion-passion, collect-save, sheep-lamb, attention-awareness; \u2206 = 1: spoon-cup, argue-differ, remind-sell, apple-candy, book-topic, argument-agreement, corporation-business, kidney-organ, alcohol-wine, beach-island] The fact that ranking such pairs is highly challenging for humans raises the question of how meaningful differences in the performance of similarity measures on these pairs actually are.", |
| "cite_spans": [ |
| { |
| "start": 456, |
| "end": 476, |
| "text": "Palmer et al. (2007)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 641, |
| "end": 662, |
| "text": "(Palmer et al., 2007;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 663, |
| "end": 684, |
| "text": "Navigli et al., 2007)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 97, |
| "end": 104, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1086, |
| "end": 1093, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1232, |
| "end": 1239, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ordering Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To overcome this issue and gain deeper insight into how often low performance results from many small errors piling up, and how often from a set of pairs being ranked completely wrongly, we apply our ordering accuracy to a decomposed dataset. We divide SimLex-999 nv into five equal similarity ranges {g_i} based on SimLex-999's original scale. The first range g_1 contains highly dissimilar pairs of words with a similarity between 0 and 2. The final set g_5 contains very similar or synonymous pairs with a similarity from 8 to 10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition of Ordering Accuracy", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We use different granularity levels \u2206 (\u2206 = 0, ..., 4). Component accuracy is calculated by comparing each pair in g_i to every pair in g_j such that |i \u2212 j| = \u2206.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition of Ordering Accuracy", |
| "sec_num": "5.3" |
| }, |
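The decomposition can be sketched as follows, assuming five equal gold-score ranges (0-2, ..., 8-10) and the half-credit tie handling described earlier (illustrative code, not the authors'; `bin_index` and `component_accuracy` are hypothetical helper names):

```python
import itertools
import math

def bin_index(score, n_bins=5, lo=0.0, hi=10.0):
    """Map a gold similarity score onto one of n_bins equal ranges g_1..g_n."""
    width = (hi - lo) / n_bins
    return min(int((score - lo) / width), n_bins - 1)

def component_accuracy(gold, pred, delta, n_bins=5):
    """Ordering accuracy restricted to comparisons between word pairs
    whose gold ranges g_i and g_j satisfy |i - j| = delta."""
    correct = total = 0.0
    for (g1, p1), (g2, p2) in itertools.combinations(zip(gold, pred), 2):
        if abs(bin_index(g1, n_bins) - bin_index(g2, n_bins)) != delta:
            continue  # only compare pairs delta ranges apart
        if g1 == g2:
            continue
        total += 1
        if p1 == p2:
            correct += 0.5  # tied prediction: partial credit
        elif (p1 > p2) == (g1 > g2):
            correct += 1
    return correct / total if total else math.nan
```

At delta = 0 this compares pairs within the same similarity range (the hardest, most fine-grained case); at delta = 4 it compares the most dissimilar range against the most similar one.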
| { |
| "text": "The results reported in Figure 2 show that all models perform consistently well on coarse-grained similarity, while only marginally beating chance level at the most fine-grained level. Furthermore, taxonomy-based approaches only outperform corpus-based approaches when comparing pairs that are further apart in the gold ranking.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 32, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decomposition of Ordering Accuracy", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Because the two most fine-grained components (\u2206 = 0 and \u2206 = 1) together carry a weight of 58%, the ordering accuracy as reported in Table 2 is dominated by fine-grained similarity comparisons. Spearman's \u03c1 correlates highly with ordering accuracy, indicating that fine-grained differences also had a major impact on previous work. [Figure 2 : Ordering accuracy varies with the degree of granularity on SimLex-999 nv . \u2206 = 0 means two pairs fall in the same range of similarity (e.g. 0-2); \u2206 = 1 means they fall in neighboring ranges of similarity (e.g. 0-2 and 2-4), etc.] It is questionable whether these measures really need to capture small differences in similarity that are difficult even for humans to detect. This outcome shows that similarity measures perform better than recent evaluations in the literature suggest.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 139, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 284, |
| "end": 292, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decomposition of Ordering Accuracy", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The final evaluation we carry out is the so-called threshold evaluation. It evaluates how well a threshold based on a specific score separates highly similar terms from less similar ones. We use the 10% and 20% most similar terms as a starting point. In a total set of 888 examples, this means we compare the top 89 and top 178 pairs of each measurement's output with the top pairs of the gold data. We report the accuracy (i.e. the percentage of pairs correctly classified as highly similar) of each score. As mentioned above, taxonomy-based approaches often assign the same score to multiple pairs. If this was the case for the pairs around the threshold, we extended the range of comparison to include all pairs with an identical score. Table 4 provides an overview of the results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 758, |
| "end": 765, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Threshold Evaluation", |
| "sec_num": "5.4" |
| }, |
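The threshold evaluation can be sketched like this (an illustration under the assumption that list indices identify word pairs; the extension for tied scores around the cut-off, described above, is omitted for brevity):

```python
def threshold_accuracy(gold, pred, n):
    """Fraction of the measure's top-n word pairs that also appear in the
    gold standard's top-n (e.g. n = 89 or 178 for 10% / 20% of 888 pairs)."""
    top_gold = set(sorted(range(len(gold)), key=lambda i: -gold[i])[:n])
    top_pred = sorted(range(len(pred)), key=lambda i: -pred[i])[:n]
    return sum(i in top_gold for i in top_pred) / n
```

Unlike the rank-correlation evaluations, this metric only asks whether the right pairs land above the cut-off, which is what matters for applications that use similarity as a binary filter.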
| { |
| "text": "The top-n sets increase significantly for taxonomy-based approaches. Because performance can change with the size of the comparison set, we also calculated the scores for W2V and DEPS with the top-n ranks found in the taxonomy-based scores. Table 5 shows the results of this analysis. The scores of the relevant taxonomy-based approach are repeated in the third row.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 242, |
| "end": 249, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Threshold Evaluation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "[Table 5 : Scores of corpus-based methods on the n-values used for taxonomy-based scores: 42.6, 43.5/53.5, 50.3, 61.0, 80.8] The threshold-based evaluation shows more", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 72, |
| "end": 79, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Threshold Evaluation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "variation than our other metric. In three out of twelve cases, 3 the corpus-based approach leads to more accurate results than the taxonomy-based score. In combination with the outcome of the ordering-accuracy results, this underlines the importance of using a variety of evaluation metrics. Overall, the outcome seems to confirm that taxonomy-based approaches are better at identifying similarity. First, taxonomy-based approaches outperformed corpus-based approaches at identifying the most similar pairs. Second, corpus-based approaches beat taxonomy-based ones in only a few measures and with comparatively small margins (the largest difference being 1.2%, compared to differences of up to 15.1%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Threshold Evaluation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "This paper investigated the difference in performance between taxonomy-based and corpus-based approaches at identifying similarity. The outcome of our experiments confirmed our hypothesis that taxonomy-based approaches are better at identifying similarity. This is mainly because corpus-based approaches have difficulty distinguishing association from similarity, as also noted by Hill et al. (2014a) .", |
| "cite_spans": [ |
| { |
| "start": 401, |
| "end": 420, |
| "text": "Hill et al. (2014a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We presented several results that confirm our hypothesis by (i) comparing the performance of taxonomy-based and corpus-based methods on a dataset designed to capture similarity, (ii) relating this to the results of the same measures on evaluation sets that measure both association and relatedness, and (iii) examining the influence of testing against a set that consists of highly associated terms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The results show that taxonomy-based approaches excel at identifying similarity, whereas corpus-based approaches yield high results when similarity and association are not distinguished. Furthermore, taxonomy-based approaches are not influenced by association between words, whereas the performance of corpus-based measures drops when their task is to identify similarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We applied more than one evaluation to compare the models' performance on SimLex-999. This was done for two reasons. First, different evaluation measures can sometimes lead to different conclusions, even when they are meant to address the same question on the same dataset. This also happened in our evaluation, where ordering accuracy without tie correction and some of the threshold evaluations led to different results. Second, the evaluation metrics revealed different aspects of performance. Most notably, the results of our decomposed ordering accuracy showed that all similarity measures are quite good in a coarse-grained setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Together with the mixed outcome of the threshold evaluation, this shows that corpus-based approaches have good potential to be used when similarity needs to be detected. In particular, when taxonomy-based approaches run into coverage issues, corpus-based methods may be the preferred choice. We therefore believe that it will ultimately depend on the application which approach works best. Future work will need to show whether and how these approaches differ when used in actual applications. 4 ", |
| "cite_spans": [ |
| { |
| "start": 478, |
| "end": 479, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We independently confirmed this result in our own experiments, but decided to leave it out of this paper because our results did not add much to Banjade et al. (2015).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We compare eight corpus-based outcomes with one taxonomy score and two with two scores for n=172, leading to twelve comparisons in total.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research for this paper was supported by the Netherlands Organisation for Scientific Research (NWO) via the Spinoza-prize Vossen projects (SPI 30-673, 2014-2019) and the BiographyNet project (Nr. 660.011.308), funded by the Netherlands eScience Center (http://esciencecenter.nl/). We would like to thank the anonymous reviewers for their feedback. 4 All our code is published on https://bitbucket.org/ulm4/kcsim.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A study on similarity and relatedness using distributional and wordnet-based approaches", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Kravalova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In Proceed- ings of Human Language Technologies: The 2009", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "19--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, NAACL '09, pages 19-27, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Lemon and tea are not similar: Measuring word-to-word similarity by combining different methods", |
| "authors": [ |
| { |
| "first": "Rajendra", |
| "middle": [], |
| "last": "Banjade", |
| "suffix": "" |
| }, |
| { |
| "first": "Nabin", |
| "middle": [], |
| "last": "Maharjan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nobal", |
| "middle": [ |
| "B" |
| ], |
| "last": "Niraula", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasile", |
| "middle": [], |
| "last": "Rus", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipesh", |
| "middle": [], |
| "last": "Gautam", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics and Intelligent Text Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "335--346", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rajendra Banjade, Nabin Maharjan, Nobal B. Niraula, Vasile Rus, and Dipesh Gautam. 2015. Lemon and tea are not similar: Measuring word-to-word simi- larity by combining different methods. In Compu- tational Linguistics and Intelligent Text Processing, pages 335-346. Springer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 238-247.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Putting it simply: a context-aware approach to lexical simplification", |
| "authors": [ |
| { |
| "first": "Or", |
| "middle": [], |
| "last": "Biran", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Brody", |
| "suffix": "" |
| }, |
| { |
| "first": "No\u00e9mie", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", |
| "volume": "2", |
| "issue": "", |
| "pages": "496--501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Or Biran, Samuel Brody, and No\u00e9mie Elhadad. 2011. Putting it simply: a context-aware approach to lex- ical simplification. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 496-501. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Distributional semantics in technicolor", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Gemma", |
| "middle": [], |
| "last": "Boleda", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
| "volume": "1", |
| "issue": "", |
| "pages": "136--145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 136-145. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "WordNet: An Electronic Lexical Database", |
| "authors": [], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414. ACM.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Offspring from reproduction problems: What replication failure teaches us", |
| "authors": [ |
| { |
| "first": "Antske", |
| "middle": [], |
| "last": "Fokkens", |
| "suffix": "" |
| }, |
| { |
| "first": "Marieke", |
| "middle": [], |
| "last": "Erp", |
| "suffix": "" |
| }, |
| { |
| "first": "Marten", |
| "middle": [], |
| "last": "Postma", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "Piek", |
| "middle": [], |
| "last": "Vossen", |
| "suffix": "" |
| }, |
| { |
| "first": "Nuno", |
| "middle": [], |
| "last": "Freire", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1691--1701", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antske Fokkens, Marieke Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. 2013. Off- spring from reproduction problems: What replica- tion failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1691-1701. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Not all neural embeddings are born equal", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Coline", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Workshop on Learning Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, KyungHyun Cho, S\u00e9bastien Jean, Coline Devin, and Yoshua Bengio. 2014a. Not all neural embeddings are born equal. NIPS 2014 Workshop on Learning Semantics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014b. SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. ArXiv e-prints, August.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Semantic similarity based on corpus statistics and lexical taxonomy", |
| "authors": [ |
| { |
| "first": "Jay", |
| "middle": [ |
| "J" |
| ], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "W" |
| ], |
| "last": "Conrath", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 10th Research on Computational Linguistics International Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "19--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jay J Jiang and David W Conrath. 1997. Seman- tic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the 10th Research on Computational Linguistics International Confer- ence, pages 19-33. The Association for Computa- tional Linguistics and Chinese Language Processing (ACLCLP).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A Systematic Study of Semantic Vector Space Model Parameters", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EACL 2014, Workshop on Continuous Vector Space Models and their Compositionality (CVSC)", |
| "volume": "", |
| "issue": "", |
| "pages": "21--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela and Stephen Clark. 2014. A Systematic Study of Semantic Vector Space Model Parameters. In Proceedings of EACL 2014, Workshop on Contin- uous Vector Space Models and their Compositional- ity (CVSC), pages 21-30. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Combining local context and WordNet similarity for word sense identification. WordNet: An electronic lexical database", |
| "authors": [ |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Leacock", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Chodorow", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "49", |
| "issue": "", |
| "pages": "265--283", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and WordNet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265-283.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Dependencybased word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "302--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Neural word embedding as implicit matrix factorization", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Z. Ghahramani, M. Welling, C. Cortes, N.D.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Advances in Neural Information Processing Systems", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Weinberger", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "27", |
| "issue": "", |
| "pages": "2177--2185", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2177-2185. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Comparing Knowledge Sources for Nominal Anaphora Resolution", |
| "authors": [ |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Markert", |
| "suffix": "" |
| }, |
| { |
| "first": "Malvina", |
| "middle": [], |
| "last": "Nissim", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "31", |
| "issue": "3", |
| "pages": "367--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katja Markert and Malvina Nissim. 2005. Compar- ing Knowledge Sources for Nominal Anaphora Res- olution. Computational Linguistics, 31(3):367-402, September.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The English lexical substitution task", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "43", |
| "issue": "", |
| "pages": "139--159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diana McCarthy and Roberto Navigli. 2009. The en- glish lexical substitution task. Language Resources and Evaluation, 43(2):139-159.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Efficient Estimation of Word Representations in Vector Space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient Estimation of Word Repre- sentations in Vector Space. ICLR Workshop.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Semeval-2007 task 07: Coarsegrained english all-words task", |
| "authors": [ |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "C" |
| ], |
| "last": "Litkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Orin", |
| "middle": [], |
| "last": "Hargraves", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", |
| "volume": "", |
| "issue": "", |
| "pages": "30--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roberto Navigli, Kenneth C. Litkowski, and Orin Har- graves. 2007. Semeval-2007 task 07: Coarse- grained english all-words task. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 30-35, Prague, Czech Republic, June. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The university of south florida free association, rhyme, and word fragment norms", |
| "authors": [ |
| { |
| "first": "Douglas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Nelson", |
| "suffix": "" |
| }, |
| { |
| "first": "Cathy", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mcevoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "A" |
| ], |
| "last": "Schreiber", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Behavior Research Methods, Instruments, & Computers", |
| "volume": "36", |
| "issue": "3", |
| "pages": "402--407", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douglas L. Nelson, Cathy L. McEvoy, and Thomas A. Schreiber. 2004. The university of south florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers, 36(3):402-407.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dependency-based construction of semantic space models", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "2", |
| "pages": "161--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Making fine-grained and coarse-grained sense distinctions, both manually and automatically", |
| "authors": [ |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoa", |
| "middle": [ |
| "Trang" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Natural Language Engineering", |
| "volume": "13", |
| "issue": "02", |
| "pages": "137--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martha Palmer, Hoa Trang Dang, and Christiane Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(02):137-163.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Information content measures of semantic similarity perform better without sense-tagged text", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "329--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ted Pedersen. 2010. Information content measures of semantic similarity perform better without sense-tagged text. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 329-332. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Align, disambiguate and walk: A unified approach for measuring semantic similarity", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [ |
| "Taher" |
| ], |
| "last": "Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1341--1351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, disambiguate and walk: A unified approach for measuring semantic similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341-1351. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Development and application of a metric on semantic nets", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Rada", |
| "suffix": "" |
| }, |
| { |
| "first": "Hafedh", |
| "middle": [], |
| "last": "Mili", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Bicknell", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Blettner", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "IEEE Transactions on Systems, Man and Cybernetics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "17--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Rada, Hafedh Mili, Ellen Bicknell, and Maria Blettner. 1989. Development and application of a metric on semantic nets. Systems, Man and Cybernetics, IEEE Transactions on, 19(1):17-30, Jan.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Using information content to evaluate semantic similarity in a taxonomy", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 14th international joint conference on Artificial intelligence", |
| "volume": "1", |
| "issue": "", |
| "pages": "448--453", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th international joint conference on Artificial intelligence-Volume 1, pages 448-453.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Contextual correlates of synonymy", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [], |
| "last": "Rubenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goodenough", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Communications of the ACM", |
| "volume": "8", |
| "issue": "10", |
| "pages": "627--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The word-space model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces", |
| "authors": [ |
| { |
| "first": "Magnus", |
| "middle": [], |
| "last": "Sahlgren", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Magnus Sahlgren. 2006. The word-space model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Verbs semantics and lexical selection", |
| "authors": [ |
| { |
| "first": "Zhibiao", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "133--138", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133-138. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Measuring word relatedness using heterogeneous vector space models", |
| "authors": [ |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Vahed", |
| "middle": [], |
| "last": "Qazvinian", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "616--620", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wen-tau Yih and Vahed Qazvinian. 2012. Measuring word relatedness using heterogeneous vector space models. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 616-620. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Ordering accuracy and Spearman's \u03c1 on a synthesized dataset of 100 word pairs.", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>shows the performance of models on all</td></tr><tr><td>three benchmarks. Taxonomy based approaches</td></tr><tr><td>perform higher on SimLex-999, whereas corpus-</td></tr><tr><td>based approaches reveal high performance on</td></tr><tr><td>MEN and WordSim-353 and score significantly</td></tr><tr><td>lower on SimLex-999. This result confirms</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "" |
| } |
| } |
| } |
| } |