{
"paper_id": "P15-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:09:22.365244Z"
},
"title": "SENSEMBED: Learning Sense Embeddings for Word and Relational Similarity",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "iacobacci@di.uniroma1.it"
},
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "pilehvar@di.uniroma1.it"
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "navigli@di.uniroma1.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"pdf_parse": {
"paper_id": "P15-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The much celebrated word embeddings represent a new branch of corpus-based distributional semantic model which leverages neural networks to model the context in which a word is expected to appear. Thanks to their high coverage and their ability to capture both syntactic and semantic information, word embeddings have been successfully applied to a variety of NLP tasks, such as Word Sense Disambiguation (Chen et al., 2014) , Machine Translation (Mikolov et al., 2013b) , Relational Similarity (Mikolov et al., 2013c) , Semantic Relatedness and Knowledge Representation (Bordes et al., 2013) .",
"cite_spans": [
{
"start": 405,
"end": 424,
"text": "(Chen et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 447,
"end": 470,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF24"
},
{
"start": 495,
"end": 518,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF25"
},
{
"start": 571,
"end": 592,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, word embeddings inherit two important limitations from their antecedent corpusbased distributional models: (1) they are unable to model distinct meanings of a word as they conflate the contextual evidence of different meanings of a word into a single vector; and (2) they base their representations solely on the distributional statistics obtained from corpora, ignoring the wealth of information provided by existing semantic resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several research works have tried to address these problems. For instance, basing their work on the original sense discrimination approach of Reisinger and Mooney (2010) , Huang et al. (2012) applied K-means clustering to decompose word embeddings into multiple prototypes, each denoting a distinct meaning of the target word. However, the sense representations obtained are not linked to any sense inventory, a mapping that consequently has to be carried out either manually, or with the help of sense-annotated data. Another line of research investigates the possibility of taking advantage of existing semantic resources in word embeddings. A good example is the Relation Constrained Model (Yu and Dredze, 2014) . When computing word embeddings, this model replaces the original co-occurrence clues from text corpora with the relationship information derived from the Paraphrase Database 1 (Ganitkevitch et al., 2013, PPDB) , an automatically extracted dataset of paraphrase pairs.",
"cite_spans": [
{
"start": 142,
"end": 169,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF33"
},
{
"start": 172,
"end": 191,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 693,
"end": 714,
"text": "(Yu and Dredze, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 893,
"end": 926,
"text": "(Ganitkevitch et al., 2013, PPDB)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, none of these techniques have simultaneously solved both above-mentioned issues, i.e., inability to model polysemy and reliance on text corpora as the only source of knowledge. We propose a novel approach, called SENSEMBED, which addresses both drawbacks by exploiting semantic knowledge for modeling arbitrary word senses in a large sense inventory. We evaluate our representation on multiple datasets in two standard tasks: word-level semantic similarity and relational similarity. Experimental results show that moving from words to senses, while making use of lexical-semantic knowledge bases, makes embeddings significantly more powerful, resulting in consistent performance improvement across tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are twofold: (1) we propose a knowledge-based approach for obtaining continuous representations for individual word senses; and (2) by leveraging these representations and lexical-semantic knowledge, we put forward a semantic similarity measure with state-of-the-art performance on multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word embeddings are vector space models (VSM) that represent words as real-valued vectors in a low-dimensional (relative to the size of the vocabulary) semantic space, usually referred to as the continuous space language model. The conventional way to obtain such representations is to compute a term-document occurrence matrix on large corpora and then reduce the dimensionality of the matrix using techniques such as singular value decomposition (Deerwester et al., 1990; Bullinaria and Levy, 2012, SVD) . Recent predictive techniques (Bengio et al., 2003; Collobert and Weston, 2008; Mnih and Hinton, 2007; Turian et al., 2010; Mikolov et al., 2013a) replace the conventional two-phase approach with a single supervised process, usually based on neural networks.",
"cite_spans": [
{
"start": 448,
"end": 473,
"text": "(Deerwester et al., 1990;",
"ref_id": "BIBREF12"
},
{
"start": 474,
"end": 505,
"text": "Bullinaria and Levy, 2012, SVD)",
"ref_id": null
},
{
"start": 537,
"end": 558,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF3"
},
{
"start": 559,
"end": 586,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 587,
"end": 609,
"text": "Mnih and Hinton, 2007;",
"ref_id": "BIBREF27"
},
{
"start": 610,
"end": 630,
"text": "Turian et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 631,
"end": 653,
"text": "Mikolov et al., 2013a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Embeddings",
"sec_num": "2"
},
{
"text": "In contrast to word embeddings, which obtain a single model for potentially ambiguous words, sense embeddings are continuous representations of individual word senses. In order to be able to apply word embeddings techniques to obtain representations for individual word senses, large sense-annotated corpora have to be available. However, manual sense annotation is a difficult and time-consuming process, i.e., the so-called knowledge acquisition bottleneck. In fact, the largest existing manually sense annotated dataset is the SemCor corpus (Miller et al., 1993) , whose creation dates back to more than two decades ago. In order to alleviate this issue, we leveraged a state-of-the-art Word Sense Disambiguation (WSD) algorithm to automatically generate large amounts of sense-annotated corpora.",
"cite_spans": [
{
"start": 544,
"end": 565,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Embeddings",
"sec_num": "2"
},
{
"text": "In the rest of Section 2, first, in Section 2.1, we describe the sense inventory used for SENSEM-BED. Section 2.2 introduces the corpus and the disambiguation procedure used to sense annotate this corpus. Finally in Section 2.3 we discuss how we leverage the automatically sense-tagged dataset for the training of sense embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Embeddings",
"sec_num": "2"
},
{
"text": "We selected BabelNet 2 (Navigli and Ponzetto, 2012) as our underlying sense inventory. The resource is a merger of WordNet with multiple other lexical resources, the most prominent of which is Wikipedia. As a result, the manually-curated information in WordNet is augmented with the complementary knowledge from collaborativelyconstructed resources, providing a high coverage of domain-specific terms and named entities and a rich set of relations. The usage of BabelNet as our underlying sense inventory provides us with the advantage of having our sense embeddings readily applicable to multiple sense inventories.",
"cite_spans": [
{
"start": 23,
"end": 51,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying sense inventory",
"sec_num": "2.1"
},
{
"text": "As our corpus we used the September-2014 dump of the English Wikipedia. 3 This corpus comprises texts from various domains and topics and provides a suitable word coverage. The unprocessed text of the corpus includes approximately three billion tokens and more than three million unique words. We only consider tokens with at least five occurrences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a sense-annotated corpus",
"sec_num": "2.2"
},
{
"text": "As our WSD system, we opted for Babelfy 4 (Moro et al., 2014) , a state-of-the-art WSD and Entity Linking algorithm based on BabelNet's semantic network. Babelfy first models each concept in the network through its corresponding \"semantic signature\" by leveraging a graph random walk algorithm. Given an input text, the algorithm uses the generated semantic signatures to construct a subgraph of the semantic network representing the input text. Babelfy then searches this subgraph for the intended sense of each content word using an iterative process and a dense subgraph heuristic. Thanks to its use of Babel-Net, Babelfy inherently features multilinguality; hence, our representation approach is equally applicable to languages other than English. In order to guarantee high accuracy and to avoid bias towards more frequent senses, we do not consider those judgements made by Babelfy while backing off to the most frequent sense, a case that happens when a certain confidence threshold is not met by the algorithm. The disambiguated items with high confidence correspond to more than 50% of all the ",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Moro et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a sense-annotated corpus",
"sec_num": "2.2"
},
{
"text": "The disambiguated text is processed with the Word2vec (Mikolov et al., 2013a) toolkit 5 . We applied Word2vec to produce continuous representations of word senses based on the distributional information obtained from the annotated corpus. For each target word sense, a representation is computed by maximizing the log likelihood of the word sense with respect to its context. We opted for the Continuous Bag of Words (CBOW) architecture, the objective of which is to predict a single word (word sense in our case) given its context. The context is defined by a window, typically with the size of five words on each side with the paragraph ending barrier. We used hierarchical softmax as our training algorithm. The dimensionality of the vectors were set to 400 and the subsampling of frequent words to 10 \u22123 . As a result of the learning process, we obtain vector-based semantic representations for each of the word senses in the automatically-annotated corpus. We show in Table 1 some of the closest senses to six sample word senses: the geographical and financial senses of river, the performance and phone number senses of number, and the gang and car senses of hood. 6 As can be seen, sense embeddings can capture effectively the clear distinctions between different senses of a word. Additionally, the closest senses are not necessarily constrained to the same part of speech. For instance, the river sense of bank has the adverbs upstream and downstream and the \"move along, of liquid\" sense of the verb run among its closest senses. ",
"cite_spans": [],
"ref_spans": [
{
"start": 973,
"end": 980,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning sense embeddings",
"sec_num": "2.3"
},
{
"text": "This Section describes how we leverage the generated sense embeddings for the computation of word similarity and relational similarity. We start the Section by explaining how we associate a word with its set of corresponding senses and how we compare pairs of senses in Sections 3.1 and 3.2, respectively. We then illustrate our approach for measuring word similarity, together with its knowledge-based enhancement, in Section 3.3, and relational similarity in Section 3.4. Hereafter, we refer to our similarity measurement approach as SENSEMBED.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measurement",
"sec_num": "3"
},
{
"text": "In order to be able to utilize our sense embeddings for a word-level task such as word similarity measurement, we need to associate each word with its set of relevant senses, each modeled by its corresponding vector. Let S w be the set of senses associated with the word w. Our objective is to cover as many senses as can be associated with the word w. To this end we first initialize the set S w by the word senses of the word w and all its synonymous word senses, as defined in the BabelNet sense inventory. We show in Table 2 some of the senses of the noun hood and the synonym expansion for these senses. We further expand the set S w by repeating the same process for the lemma of word w (if not already in lemma form).",
"cite_spans": [],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Associating senses with words",
"sec_num": "3.1"
},
{
"text": "For comparing vectors, we use the Tanimoto distance. The measure is a generalization of Jaccard similarity for real-valued vectors in [-1, 1]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector comparison",
"sec_num": "3.2"
},
{
"text": "T ( w 1 , w 2 ) = w 1 \u2022 w 2 w 1 2 + w 2 2 \u2212 w 1 \u2022 w 2 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector comparison",
"sec_num": "3.2"
},
{
"text": "where w 1 \u2022 w 2 is the dot product of the vectors w 1 and w 2 and w 1 is the Euclidean norm of w 1 . Rink and Harabagiu (2013) reported consistent improvements when using vector space metrics, in particular the Tanimoto distance, on the SemEval-2012 task on relational similarity (Jurgens et al., 2012) in comparison to several other measures that are designed for probability distributions, such as Jensen-Shannon divergence and Hellinger distance.",
"cite_spans": [
{
"start": 101,
"end": 126,
"text": "Rink and Harabagiu (2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector comparison",
"sec_num": "3.2"
},
{
"text": "We show in Algorithm 1 our procedure for measuring the semantic similarity of a pair of input words w 1 and w 2 . The algorithm also takes as its inputs the similarity strategy and the weighted similarity parameter \u03b1 (Section 3.3.1) along with a graph vicinity factor flag (Section 3.3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "3.3"
},
{
"text": "We take two strategies for calculating the similarity of the given words w 1 and w 2 . Let S w 1 and S w 2 be the sets of senses associated with the two respective input words w 1 and w 2 , and let s i be the sense embedding vector of the sense s i . In the first strategy, which we refer to as closest, we follow the conventional approach (Budanitsky and Hirst, 2006) and measure the similarity of the two words as the similarity of their closest senses, i.e.:",
"cite_spans": [
{
"start": 340,
"end": 368,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim closest (w 1 , w 2 ) = max s 1 \u2208Sw 1 s 2 \u2208Sw 2 T ( s 1 , s 2 )",
"eq_num": "(2)"
}
],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "However, taking the similarity of the closest senses of two words as their overall similarity ignores the fact that the other senses can also contribute to the process of similarity judgement. In fact, psychological studies suggest that humans, while judging semantic similarity of a pair of words, consider different meanings of the two words and not only the closest ones (Tversky, 1977; Markman and Gentner, 1993) . For instance, the WordSim-353 dataset (Finkelstein et al., 2002) contains the word pair brother-monk. Despite having the religious devotee sense in common, the Algorithm 1 Word Similarity Input: Two words w 1 and w 2 Str, the similarity strategy Vic, the graph vicinity factor flag \u03b1 parameter for the weighted strategy Output: The similarity between w 1 and w 2",
"cite_spans": [
{
"start": 374,
"end": 389,
"text": "(Tversky, 1977;",
"ref_id": "BIBREF38"
},
{
"start": 390,
"end": 416,
"text": "Markman and Gentner, 1993)",
"ref_id": "BIBREF21"
},
{
"start": 457,
"end": 483,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "1: S w 1 \u2190 getSenses(w 1 ), S w 2 \u2190 getSenses(w 2 ) 2: if Str is closest then 3: sim \u2190 -1 4: else 5: sim \u2190 0 6: end if 7: for each s 1 \u2208 S w 1 and s 2 \u2208 S w 2 do 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "if Vic is true then 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "tmp \u2190 T * ( s 1 , s 2 ) 10: else 11: tmp \u2190 T ( s 1 , s 2 ) 12: end if 13:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "if Str is closest then 14:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "sim \u2190 max (sim, tmp)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "15:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "else 16: sim \u2190 sim + tmp \u03b1 \u00d7 d(s 1 ) \u00d7 d(s 2 ) 17:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "end if 18: end for two words are assigned the similarity judgement of 6.27, which is slightly above the middle point in the similarity scale [0,10] of the dataset. This clearly indicates that other non-synonymous, yet still related, senses of the two words have also played a role in the similarity judgement. Additionally, the relatively low score reflects the fact that the religious devotee sense is not a dominant meaning of the word brother.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "We therefore put forward another similarity measurement strategy, called weighted, in which different senses of the two words contribute to their similarity computation, but the contributions are scaled according to their relative importance. To this end, we first leverage sense occurrence frequencies in order to estimate the dominance of each specific word sense. For each word w, we first compute the dominance of its sense s \u2208 S w by dividing the frequency of s by the overall frequency of all senses associated with w, i.e., S w :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(s) = f req(s) s \u2208Sw f req(s )",
"eq_num": "(3)"
}
],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "We further recognize that the importance of a specific sense of a word can also be triggered by the word it is being compared with. We model this by biasing the similarity computation towards closer senses, by increasing the contribution of closer senses through a power function with parameter \u03b1. The similarity of a pair of words w 1 and w 2 according to the weighted strategy is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "Sim weighted (w 1 , w 2 ) = s 1 \u2208Sw 1 s 2 \u2208Sw 2 d(s 1 ) d(s 2 ) T ( s 1 , s 2 ) \u03b1 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "where the \u03b1 parameter is a real-valued constant greater than one. We show in Section 4.1.3 how we tune the value of this parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measurement strategy",
"sec_num": "3.3.1"
},
{
"text": "Our similarity measurement approach takes advantage of lexical knowledge at two different levels. First, as we described in Sections 2.2 and 2.3, we use a knowledge-based disambiguation approach, i.e., Babelfy, which exploits BabelNet's semantic network. Second, we put forward a methodology that leverages the relations in Babel-Net's graph for enhancing the accuracy of similarity judgements, to be discussed next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "As a distributional vector representation technique, our sense embeddings can potentially suffer from inaccurate modeling of less frequent word senses. In contrast, our underlying sense inventory provides a full coverage of all its concepts, with relations that are taken from WordNet and Wikipedia. In order to make use of the complementary information provided by our lexical knowledge base and to obtain more accurate similarity judgements, we introduce a graph vicinity factor, that combines the structural knowledge from BabelNet's semantic network and the distributional representation of sense embeddings. To this end, for a given sense pair, we scale the similarity judgement obtained by comparing their corresponding sense embeddings, based on their placement in the network. Let E be the set of all sense-to-sense relations provided by BabelNet's semantic network, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "E = {(s i , s j ) : s i \u2212 s j }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "Then, the similarity of a pair of words with the graph vicinity factor in formulas 2 and 4 is computed by replacing T with T * , defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T * ( s 1 , s 2 ) = T ( s 1 , s 2 ) \u00d7 \u03b2, if (s 1 , s 2 ) \u2208 E T ( s 1 , s 2 ) \u00d7 \u03b2 \u22121 , otherwise",
"eq_num": "(5)"
}
],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "We show in Section 4.1.3 how we tune the parameter \u03b2. This procedure is particularly helpful for the case of less frequent word senses that do not have enough contextual information to allow an effective representation. For instance, the SimLex-999 dataset (Hill et al., 2014 ), which we use as our tuning dataset (see Section 4.1.3), contains the highly-related pair orthodontist-dentist. We observed that the intended sense of the noun orthodontist occurs only 70 times in our annotated corpus. As a result, the obtained representation was not accurate, resulting in a low similarity score for the pair. The two respective senses are, however, directly connected in the BabelNet graph. Hence, the graph vicinity factor scales up the computed similarity value for the word pair.",
"cite_spans": [
{
"start": 257,
"end": 275,
"text": "(Hill et al., 2014",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancing similarity accuracy",
"sec_num": "3.3.2"
},
{
"text": "Relational similarity evaluates the correspondence between relations (Medin et al., 1990) . The task can be viewed as an analogy problem in which, given two pairs of words (w a , w b ) and (w c , w d ), the goal is to compute the extent to which the relations of w a to w b and w c to w d are similar. Sense embeddings are suitable candidates for measuring this type of similarity, as they represent relations between senses as linear transformations. Given this property, the relation between a pair of words can be obtained by subtracting their corresponding normalized embeddings. Following Zhila et al. (2013) , the relational similarity between two pairs of word (w a , w b ) and (w c , w d ) is accordingly calculated as:",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Medin et al., 1990)",
"ref_id": "BIBREF22"
},
{
"start": 594,
"end": 613,
"text": "Zhila et al. (2013)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational similarity",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ANALOGY( w a , w b , w c , w d ) = T ( w b \u2212 w a , w d \u2212 w c )",
"eq_num": "(6)"
}
],
"section": "Relational similarity",
"sec_num": "3.4"
},
{
"text": "We show the procedure for measuring the relational similarity in Algorithm 2. The algorithm first finds the closest senses across the two word pairs: s * a and s * b for the first pair and s * c and s * d for the second. The analogy vector representations are accordingly computed as the difference between the sense embeddings of the corresponding closest senses. Finally, the relational similarity is computed as the similarity of the analogy vectors of the two pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational similarity",
"sec_num": "3.4"
},
{
"text": "Algorithm 2 Relational Similarity Input: Two pairs of words w a , w b and w c , w d Output: The degree of analogy between the two pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational similarity",
"sec_num": "3.4"
},
{
"text": "1: S wa \u2190 getSenses(w a ), S w b \u2190 getSenses(w b ) 2: (s * a , s * b ) \u2190 argmax sa\u2208Sw a s b \u2208Sw b T ( s a , s b ) 3: S wc \u2190 getSenses(w c ), S w d \u2190 getSenses(w d ) 4: (s * c , s * d ) \u2190 argmax sc\u2208Sw c s d \u2208Sw d T ( s c , s d ) 5: return: T ( s b * \u2212 s a * , s d * \u2212 s c * )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational similarity",
"sec_num": "3.4"
},
{
"text": "Word similarity measurement is one of the most popular evaluation methods in lexical semantics, and semantic similarity in particular, with numerous evaluation benchmarks and datasets. Given a set of word pairs, a system's task is to provide similarity judgments for each pair, and these judgements should ideally be as close as possible to those given by humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity experiment",
"sec_num": "4.1"
},
{
"text": "We evaluate SENSEMBED on standard word similarity and relatedness datasets: the RG-65 (Rubenstein and Goodenough, 1965) and the WordSim-353 (Finkelstein et al., 2002, WS-353) datasets. Agirre et al. (2009) suggested that the original WS-353 dataset conflates similarity and relatedness and divided the dataset into two subsets, each containing pairs for just one type of association measure: similarity (the WS-Sim dataset) and relatedness (the WS-Rel dataset).",
"cite_spans": [
{
"start": 140,
"end": 174,
"text": "(Finkelstein et al., 2002, WS-353)",
"ref_id": null
},
{
"start": 185,
"end": 205,
"text": "Agirre et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1.1"
},
{
"text": "We also evaluate our approach on the YP-130 dataset, which was created by Yang and Powers (2005) specifically for measuring verb similarity, and also on the Stanford's Contextual Word Similarities (SCWS), a dataset for measuring wordin-context similarity (Huang et al., 2012) . In the SCWS dataset each word is provided with the sentence containing it, which helps in pointing out the intended sense of the corresponding target word.",
"cite_spans": [
{
"start": 74,
"end": 96,
"text": "Yang and Powers (2005)",
"ref_id": "BIBREF39"
},
{
"start": 255,
"end": 275,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1.1"
},
{
"text": "Finally, we also report results on the MEN dataset which was recently introduced by Bruni et al. (2014) . MEN contains two sets of English word pairs, together with human-assigned similarity judgments, obtained by crowdsourcing using Amazon Mechanical Turk.",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "Bruni et al. (2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1.1"
},
{
"text": "We compare the performance of our similarity measure against twelve other approaches. As regards traditional distributional models, we report the best results computed by for PMI-SVD, a system based on Pointwise Mutual Information (PMI) and SVD-based dimensionality reduction. For word embeddings, we report the results of Pennington et al. (2014, GloVe) and Collobert and Weston (2008) . GloVe is an alternative way for learning embeddings, in which vector dimensions are made explicit, as opposed to the opaque meaning of the vector dimensions in Word2vec. The approach of Collobert and Weston (2008) is an embeddings model with a deeper architecture, designed to preserve more complex knowledge as distant relations. We also show results for the word embeddings trained by . The authors first constructed a massive corpus by combining several large corpora. Then, they trained dozens of different Word2vec models by varying the system's training parameters and reported the best performance obtained on each dataset.",
"cite_spans": [
{
"start": 323,
"end": 354,
"text": "Pennington et al. (2014, GloVe)",
"ref_id": null
},
{
"start": 359,
"end": 386,
"text": "Collobert and Weston (2008)",
"ref_id": "BIBREF11"
},
{
"start": 575,
"end": 602,
"text": "Collobert and Weston (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison systems",
"sec_num": "4.1.2"
},
{
"text": "As representatives for graph-based similarity techniques, we report results for the state-of-theart approach of Pilehvar et al. 2013which is based on random walks on WordNet's semantic network. Moreover, we present results for the graph-based approach of Zesch et al. (2008) , which compares a pair of words based on the path lengths on Wiktionary's semantic network.",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "Zesch et al. (2008)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison systems",
"sec_num": "4.1.2"
},
{
"text": "We also compare our word similarity measure against the multi-prototype models of Reisinger and Mooney (2010) and Huang et al. (2012) , and against the approaches of Yu and Dredze (2014) and Chen et al. (2014) , which enhance word embeddings with semantic knowledge derived from PPDB and WordNet, respectively. Finally, we report results for word embeddings, as our baseline, obtained using the Word2vec toolkit on the same corpus that was annotated and used for learning our sense embeddings (cf. Section 2.3).",
"cite_spans": [
{
"start": 82,
"end": 109,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF33"
},
{
"start": 114,
"end": 133,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 166,
"end": 186,
"text": "Yu and Dredze (2014)",
"ref_id": "BIBREF40"
},
{
"start": 191,
"end": 209,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison systems",
"sec_num": "4.1.2"
},
{
"text": "Recall from Sections 3.3.1 and 3.3.2 that our algorithm has two parameters: the \u03b1 parameter for the weighted strategy and the \u03b2 parameter for the graph vicinity factor. We tuned these two parameters on the SimLex-999 dataset (Hill et al., 2014) . We picked SimLex-999 since there are not many comparison systems in the literature that report re- Zesch et al. (2008) 0.820 --0.710 -- Collobert and Weston (2008) 0.480 0.610 0.380 -0.570 -Word2vec 0 Table 3 : Spearman correlation performance on five word similarity and relatedness datasets.",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 346,
"end": 365,
"text": "Zesch et al. (2008)",
"ref_id": "BIBREF41"
},
{
"start": 383,
"end": 410,
"text": "Collobert and Weston (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 448,
"end": 455,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter tuning",
"sec_num": "4.1.3"
},
{
"text": "sults on the dataset. We found the optimal values for \u03b1 and \u03b2 to be 8 and 1.6, respectively. Table 3 shows the experimental results on five different word similarity and relatedness datasets. We report the Spearman correlation performance for the two strategies of our approach as well as eight other comparison systems. SENSEMBED proves to be highly reliable on both similarity and relatedness measurement tasks, obtaining the best performance on most datasets. In addition, our approach shows itself to be equally suitable for verb similarity, as indicated by the results on YP-130. The rightmost column in the Table shows the average performance weighted by dataset size. Between the two similarity measurement strategies, weighted proves to be the more suitable, achieving the best overall performance on three datasets and the best mean performance of 0.794 across the two strategies. This indicates that our assumption of considering all senses of a word in similarity computation was beneficial.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 3",
"ref_id": null
},
{
"start": 613,
"end": 624,
"text": "Table shows",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter tuning",
"sec_num": "4.1.3"
},
{
"text": "We report in Table 4 the Spearman correlation performance of four approaches that are similar to SENSEMBED: the multi-prototype models of Reisinger and Mooney (2010) and Huang et al. (2012) , and the semantically enhanced models of Yu and Dredze (2014) and Chen et al. (2014) . We provide results only on WS-353 and SCWS, since the above-mentioned approaches do not report their performance on other datasets. As we can see from the Table, SENSEMBED outperforms the other approaches on the WS-353 dataset. However, our approach lags behind on SCWS, highlighting the negative impact of taking the closest senses as the intended meanings. In fact, on this dataset, SENSEMBED weighted provides better performance owing to its taking into account other senses as well. The better performance of the multi-prototype systems can be attributed to their coarse-grained sense inventories which are automatically constructed by means of Word Sense Induction.",
"cite_spans": [
{
"start": 138,
"end": 165,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF33"
},
{
"start": 170,
"end": 189,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 232,
"end": 252,
"text": "Yu and Dredze (2014)",
"ref_id": "BIBREF40"
},
{
"start": 257,
"end": 275,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "Dataset and evaluation. We take as our benchmark the SemEval-2012 task on Measuring Degrees of Relational Similarity (Jurgens et al., 2012) . The task provides a dataset comprising 79 graded word relations, 10 of which are used for training and the rest for test. The task evaluated the participating systems in terms of the Spearman correlation and the MaxDiff score (Louviere, 1991) . Comparison systems. We compare our results against six other systems and the PMI baseline provided by the task organizers. As for systems that use word embeddings for measuring relational similarity, we report results for RNN-1600 (Mikolov et al., 2013c) and PairDirection (Levy and Goldberg, 2014) . We also report results for UTD-NB and UTD-SVM (Rink and Harabagiu, 2012) , which rely on lexical pattern classification based on Na\u00efve Bayes and Support Vector Machine classifiers, respectively. UTD-LDA (Rink and Harabagiu, 2013) is another system presented by the same authors that casts the task as a selectional preferences one. Finally, we show the performance of Com (Zhila et al., 2013) , a system that combines Word2vec, lexical patterns, and knowledge base information. Similarly to the word similarity experiments, we also report a baseline based on word embeddings (Word2vec) trained on the same corpus and with the same settings as SENSEMBED.",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Jurgens et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 368,
"end": 384,
"text": "(Louviere, 1991)",
"ref_id": "BIBREF20"
},
{
"start": 618,
"end": 641,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF25"
},
{
"start": 660,
"end": 685,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF19"
},
{
"start": 734,
"end": 760,
"text": "(Rink and Harabagiu, 2012)",
"ref_id": "BIBREF34"
},
{
"start": 891,
"end": 917,
"text": "(Rink and Harabagiu, 2013)",
"ref_id": "BIBREF35"
},
{
"start": 1060,
"end": 1080,
"text": "(Zhila et al., 2013)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational similarity experiment",
"sec_num": "4.2"
},
{
"text": "Results. Table 5 shows the performance of different systems in the task of relational similarity in terms of the Spearman correlation and MaxDiff score. A comparison of the results for Word2vec and SENSEMBED shows the advantage gained by moving from the word to the sense level. Among the comparison systems, Com attains the closest performance. However, we note that the system is a combination of several methods, whereas SENSEMBED is based on a single approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Relational similarity experiment",
"sec_num": "4.2"
},
{
"text": "In order to analyze the impact of the different components of our similarity measure, we carried out a series of experiments on our word similarity datasets. We show in Table 6 the experimental results in terms of Spearman correlation. Performance is reported for the two similarity measurement strategies, i.e., closest and weighted, and for different system settings with and without the expansion procedure (cf. Section 3.1) and graph vicinity factor (cf. Section 3.3.2). As our comparison baseline, we also report results for word embeddings, obtained using the Word2vec toolkit on the same corpus and with the same configuration (cf. Section 2.3) used for learning the sense embeddings (Word2vec in the Table) . The rightmost column in the Table reports the mean performance weighted by dataset size. Word2vec exp is the word embeddings system in which the similarity of the two words is determined in terms of the closest word embeddings among all the corresponding synonyms obtained with the expansion procedure (cf. Section 3.1).",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 708,
"end": 714,
"text": "Table)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "A comparison of word and sense embeddings in the vanilla setting (with neither the expansion procedure nor graph vicinity factor) indicates the consistent advantage gained by moving from word to sense level, irrespective of the dataset and the similarity measurement strategy. The consistent improvement shows that the semantic information provided more than compensates for the inherently imperfect disambiguation. Moreover, the results indicate the consistent benefit gained by introducing the graph vicinity factor, highlighting the fact that our combination of the complementary knowledge from sense embeddings and information derived from a semantic network is beneficial. Finally, note that the expansion procedure leads to performance improvement in most cases for sense embeddings. In direct contrast, the step proves harmful in the case of word embeddings, mainly due to their inability to distinguish individual word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "Word embeddings were first introduced by Bengio et al. (2003) with the goal of statistical language modeling, i.e., learning the joint probability function of a sequence of words. The initial model was a Multilayer Perceptron (MLP) with two hidden layers: a shared non-linear and a regular hidden hyperbolic tangent one. Collobert and Weston (2008) deepened the original neural model by adding a convolutional layer and an extra layer for modeling long-distance dependencies. A significant contribution was later made by Mikolov et al. (2013a) , who simplified the original model by removing the hyperbolic tangent layer and hence significantly speeding up the training process. Other related work includes GloVe (Pennington et al., 2014) , which is an effort to make the vector dimensions in word embeddings explicit, and the approach of Bordes et al. (2013) , which trains word embeddings on the basis of relationship information derived from WordNet.",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "Bengio et al. (2003)",
"ref_id": "BIBREF3"
},
{
"start": 321,
"end": 348,
"text": "Collobert and Weston (2008)",
"ref_id": "BIBREF11"
},
{
"start": 521,
"end": 543,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF23"
},
{
"start": 713,
"end": 738,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF31"
},
{
"start": 839,
"end": 859,
"text": "Bordes et al. (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Several techniques have been proposed for transforming word embeddings to the sense level. Chen et al. (2014) leveraged word embeddings in Word Sense Disambiguation and investigated the possibility of retrofitting embeddings with the resulting disambiguated words. Guo et al. (2014) exploited parallel data to automatically generate sense-annotated data, based on the fact that different senses of a word are usually translated to different words in another language (Chan and Ng, 2005) . The automatically-generated senseannotated data was later used for training sensespecific word embeddings. Huang et al. (2012) adopted a similar strategy by decomposing each word's single-prototype representation into multiple prototypes, denoting different senses of that word. To this end, they first gathered the context for all occurrences of a word and then used spherical K-means to cluster the contexts. Each cluster was taken as the context for a specific meaning of the word and hence used to train embeddings for that specific meaning (i.e., word sense). However, these techniques either suffer from low coverage as they can only model word senses that occur in the parallel data, or require manual intervention for linking the obtained representations to an existing sense inventory. In contrast, our approach enables high coverage and is readily applicable for the representation of word senses in widely-used lexical resources, such as WordNet, Wikipedia and Wiktionary, without needing to resort to additional manual effort.",
"cite_spans": [
{
"start": 91,
"end": 109,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 265,
"end": 282,
"text": "Guo et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 467,
"end": 486,
"text": "(Chan and Ng, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 596,
"end": 615,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We proposed an approach for obtaining continuous representations of individual word senses, referred to as sense embeddings. Based on the proposed sense embeddings and the knowledge obtained from a large-scale lexical resource, i.e., Ba-belNet, we put forward an effective technique, called SENSEMBED, for measuring semantic similarity. We evaluated our approach on multiple datasets in the tasks of word and relational similarity. Two conclusions can be drawn on the basis of the experimental results: (1) moving from word to sense embeddings can significantly improve the effectiveness and accuracy of the representations; and (2) a meaningful combination of sense embeddings and knowledge from a semantic network can further enhance the similarity judgements. As future work, we intend to utilize our sense embeddings to perform WSD, as was proposed in Chen et al. (2014) , in order to speed up the process and train sense embeddings on larger amounts of sense-annotated data.",
"cite_spans": [
{
"start": 856,
"end": 874,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "http://paraphrase.org/#/download",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.babelnet.org/ 3 http://dumps.wikimedia.org/enwiki/ 4 http://www.babelfy.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/word2vec/6 We followNavigli (2009) and show the n th sense of the word with part of speech x as word x n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ExperimentsWe evaluate our sense-enhanced semantic representation on multiple word similarity and relatedness datasets (Section 4.1), as well as the relational similarity framework (Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Study on Similarity and Relatedness Using Distributional and WordNet-based Approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A Study on Similarity and Relatedness Using Distribu- tional and WordNet-based Approaches. In Proceed- ings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 19-27, Boulder, Colorado.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 238-247, Baltimore, Maryland.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Neural Probabilistic Language Model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Lan- guage Model. The Journal of Machine Learning Re- search, 3:1137-1155.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Translating Embeddings for Modeling Multirelational Data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi- relational Data. In Advances in Neural Information Processing Systems, volume 26, pages 2787-2795.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluating WordNet-based measures of Lexical Semantic Relatedness",
"authors": [],
"year": null,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "13--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaluating WordNet-based measures of Lexical Se- mantic Relatedness. Computational Linguistics, 32(1):13-47.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Extracting Semantic Representations from Word Cooccurrence Statistics",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2012,
"venue": "Stop-lists, Stemming and SVD. Behavior Research Methods",
"volume": "44",
"issue": "",
"pages": "890--907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A. Bullinaria and Joseph P. Levy. 2012. Ex- tracting Semantic Representations from Word Co- occurrence Statistics: Stop-lists, Stemming and SVD. Behavior Research Methods, 44:890-907.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scaling Up Word Sense Disambiguation via Parallel Texts",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 20th National Conference on Artificial Intelligence",
"volume": "3",
"issue": "",
"pages": "1037--1042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005. Scaling Up Word Sense Disambiguation via Parallel Texts. In Proceedings of the 20th National Conference on Artificial Intelligence -Volume 3, pages 1037-1042, Pittsburgh, Pennsylvania.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense represen- tation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025-1035, Doha, Qatar.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, pages 160-167, Helsinki, Fin- land.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott C. Deerwester, Susan T. Dumais, Thomas K. Lan- dauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of American Society for Information Science, 41(6):391-407.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Placing Search in Context: The Concept Revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Gabrilovich",
"middle": [],
"last": "Evgeniy",
"suffix": ""
},
{
"first": "Matias",
"middle": [],
"last": "Yossi",
"suffix": ""
},
{
"first": "Rivlin",
"middle": [],
"last": "Ehud",
"suffix": ""
},
{
"first": "Solan",
"middle": [],
"last": "Zach",
"suffix": ""
},
{
"first": "Wolfman",
"middle": [],
"last": "Gadi",
"suffix": ""
},
{
"first": "Ruppin",
"middle": [],
"last": "Eytan",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Gabrilovich Evgeniy, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Rup- pin Eytan. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 758-764, Atlanta, Georgia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning Sense-specific Word Embeddings By Exploiting Bilingual Resources",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "497--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning Sense-specific Word Embed- dings By Exploiting Bilingual Resources. In Pro- ceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Techni- cal Papers, pages 497-507, Dublin, Ireland.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.3456"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving Word Representations Via Global Context And Multiple Word Prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving Word Representations Via Global Context And Multiple Word Prototypes. In Proceedings of 50th Annual Meeting of the Association for Computational Lin- guistics, volume 1, pages 873-882, Jeju Island, South Korea.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2012 task 2: Measuring degrees of relational similarity",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Keith",
"middle": [
"J"
],
"last": "Holyoak",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "356--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Jurgens, Peter D. Turney, Saif M. Moham- mad, and Keith J. Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proceedings of the First Joint Conference on Lex- ical and Computational Semantics -Volume 1: Pro- ceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation, pages 356-364, Montreal, Canada.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representa- tions. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning, pages 171-180, Ann Arbor, Michigan.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Best-Worst Scaling: A Model for the Largest Difference Judgments. Working paper",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Louviere",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Louviere. 1991. Best-Worst Scaling: A Model for the Largest Difference Judgments. Working pa- per, University of Alberta.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Structural alignment during similarity comparisons",
"authors": [
{
"first": "Arthur",
"middle": [
"B"
],
"last": "Markman",
"suffix": ""
},
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
}
],
"year": 1993,
"venue": "Cognitive Psychology",
"volume": "25",
"issue": "4",
"pages": "431--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur B. Markman and Dedre Gentner. 1993. Struc- tural alignment during similarity comparisons. Cog- nitive Psychology, 25(4):431 -467.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Similarity involving attributes and relations: Judgments of similarity and difference are not inverses",
"authors": [
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Medin",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Goldstone",
"suffix": ""
},
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
}
],
"year": 1990,
"venue": "Psychological Science",
"volume": "1",
"issue": "1",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas L. Medin, Robert L. Goldstone, and Dedre Gentner. 1990. Similarity involving attributes and relations: Judgments of similarity and difference are not inverses. Psychological Science, 1(1):64-69.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Repre- sentations in Vector Space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Exploiting Similarities among Languages for Machine Translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting Similarities among Lan- guages for Machine Translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Linguistic Regularities in Continuous Space Word Representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of Human Language Technologies: The 2013 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 746-751, Atlanta, Georgia.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Semantic Concordance",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "303--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A Semantic Concordance. In Proceedings of the Workshop on Human Language Technology, pages 303-308, Princeton, New Jersey.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Three New Graphical Models for Statistical Language Modelling",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three New Graphical Models for Statistical Language Mod- elling. In Proceedings of the 24th International Conference on Machine Learning, pages 641-648, Corvallis, Oregon.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL)",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "231--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity Linking meets Word Sense Dis- ambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2:231-244.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BabelNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network. Artificial Intelligence, 193:217- 250.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Word Sense Disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys",
"volume": "41",
"issue": "2",
"pages": "1--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word Sense Disambiguation: A survey. ACM Computing Surveys, 41(2):1-69.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Empirical Methods in Natural Language Processing",
"volume": "12",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Em- pirical Methods in Natural Language Processing (EMNLP), volume 12, pages 1532-1543, Doha, Qatar.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Align, Disambiguate and Walk: a Unified Approach for Measuring Semantic Similarity",
"authors": [
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1341--1351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar, David A. Jurgens, and Roberto Navigli. 2013. Align, Disambiguate and Walk: a Unified Approach for Measuring Semantic Similarity. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguis- tics, pages 1341-1351, Sofia, Bulgaria.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multi-Prototype Vector-Space Models of Word Meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond J. Mooney. 2010. Multi-Prototype Vector-Space Models of Word Meaning. In Proceedings of Human Language Tech- nologies: The 2010 Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics, pages 109-117, Los Angeles, California.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "UTD: Determining relational similarity using lexical patterns",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Rink",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "413--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Rink and Sanda Harabagiu. 2012. UTD: De- termining relational similarity using lexical patterns. In *SEM 2012: The First Joint Conference on Lexi- cal and Computational Semantics -Volume 1: Pro- ceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Interna- tional Workshop on Semantic Evaluation (SemEval 2012), pages 413-418, Montreal, Canada.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Impact of Selectional Preference Agreement on Semantic Relational Similarity",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Rink",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS) -Long Papers",
"volume": "",
"issue": "",
"pages": "204--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Rink and Sanda Harabagiu. 2013. The Impact of Selectional Preference Agreement on Semantic Relational Similarity. In Proceedings of the 10th International Conference on Computational Seman- tics (IWCS) -Long Papers, pages 204-215, Pots- dam, Germany.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Contextual Correlates of Synonymy. Communications of the ACM",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "8",
"issue": "",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual Correlates of Synonymy. Communica- tions of the ACM, 8(10):627-633.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Word Representations: A Simple and General Method for Semi-supervised Learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-supervised Learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Up- psala, Sweden.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Features of similarity",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Tversky",
"suffix": ""
}
],
"year": 1977,
"venue": "Psychological Review",
"volume": "84",
"issue": "",
"pages": "327--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Tversky. 1977. Features of similarity. Psycho- logical Review, 84:327-352.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Measuring semantic similarity in the taxonomy of wordnet",
"authors": [
{
"first": "Dongqiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "David",
"middle": [
"M",
"W"
],
"last": "Powers",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Twenty-eighth Australasian Conference on Computer Science",
"volume": "38",
"issue": "",
"pages": "315--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongqiang Yang and David M. W. Powers. 2005. Measuring semantic similarity in the taxonomy of wordnet. In Proceedings of the Twenty-eighth Aus- tralasian Conference on Computer Science, vol- ume 38, pages 315-322, Darlinghurst, Australia.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Improving Lexical Embeddings with Semantic Knowledge",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "545--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu and Mark Dredze. 2014. Improving Lexi- cal Embeddings with Semantic Knowledge. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics, volume 2, pages 545-550, Baltimore, Maryland.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Using Wiktionary for Computing Semantic Relatedness",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 23rd National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "861--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torsten Zesch, Christof M\u00fcller, and Iryna Gurevych. 2008. Using Wiktionary for Computing Seman- tic Relatedness. In Proceedings of the 23rd Na- tional Conference on Artificial Intelligence, vol- ume 2, pages 861-866, Chicago, Illinois.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Combining Heterogeneous Models for Measuring Relational Similarity",
"authors": [
{
"first": "Alisa",
"middle": [],
"last": "Zhila",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1000--1009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alisa Zhila, Wen-tau Yih, Christopher Meek, Geoffrey Zweig, and Tomas Mikolov. 2013. Combining Het- erogeneous Models for Measuring Relational Sim- ilarity. In Proceedings of Human Language Tech- nologies: The 2013 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 1000-1009, Atlanta, Geor- gia.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Spearman correlation performance of the</td></tr><tr><td>multi-prototype and semantically-enhanced ap-</td></tr><tr><td>proaches on the WordSim-353 and the Stanford's</td></tr><tr><td>Contextual Word Similarities datasets.</td></tr></table>",
"text": ""
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"3\">: Spearman correlation performance of word embeddings (Word2vec) and SENSEMBED on dif-</td></tr><tr><td colspan=\"3\">ferent semantic similarity and relatedness datasets.</td></tr><tr><td>Measure</td><td>MaxDiff</td><td>Spearman</td></tr><tr><td>Com</td><td>45.2</td><td>0.353</td></tr><tr><td>PairDirection</td><td>45.2</td><td>-</td></tr><tr><td>RNN-1600</td><td>41.8</td><td>0.275</td></tr><tr><td>UTD-LDA</td><td>-</td><td>0.334</td></tr><tr><td>UTD-NB</td><td>39.4</td><td>0.229</td></tr><tr><td>UTD-SVM</td><td>34.7</td><td>0.116</td></tr><tr><td>PMI baseline</td><td>33.9</td><td>0.112</td></tr><tr><td>Word2vec</td><td>43.2</td><td>0.288</td></tr><tr><td>SENSEMBED closest</td><td>45.9</td><td>0.358</td></tr></table>",
"text": ""
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Spearman correlation performance of dif-</td></tr><tr><td>ferent systems on the SemEval-2012 Task on Re-</td></tr><tr><td>lational Similarity.</td></tr></table>",
"text": ""
}
}
}
}