{
"paper_id": "K15-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:49.076434Z"
},
"title": "Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hebrew University",
"location": {}
},
"email": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": "",
"affiliation": {
"laboratory": "Technion",
"institution": "",
"location": {
"region": "IIT"
}
},
"email": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hebrew University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel word level vector representation based on symmetric patterns (SPs). For this aim we automatically acquire SPs (e.g., \"X and Y\") from a large corpus of plain text, and generate vectors where each coordinate represents the co-occurrence in SPs of the represented word with another word of the vocabulary. Our representation has three advantages over existing alternatives: First, being based on symmetric word relationships, it is highly suitable for word similarity prediction. Particularly, on the SimLex999 word similarity dataset, our model achieves a Spearman's \u03c1 score of 0.517, compared to 0.462 of the state-of-the-art word2vec model. Interestingly, our model performs exceptionally well on verbs, outperforming state-of-the-art baselines by 20.2-41.5%. Second, pattern features can be adapted to the needs of a target NLP application. For example, we show that we can easily control whether the embeddings derived from SPs deem antonym pairs (e.g. (big,small)) as similar or dissimilar, an important distinction for tasks such as word classification and sentiment analysis. Finally, we show that a simple combination of the word similarity scores generated by our method and by word2vec results in a superior predictive power over that of each individual model, scoring as high as 0.563 in Spearman's \u03c1 on SimLex999. This emphasizes the differences between the signals captured by each of the models.",
"pdf_parse": {
"paper_id": "K15-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel word level vector representation based on symmetric patterns (SPs). For this aim we automatically acquire SPs (e.g., \"X and Y\") from a large corpus of plain text, and generate vectors where each coordinate represents the co-occurrence in SPs of the represented word with another word of the vocabulary. Our representation has three advantages over existing alternatives: First, being based on symmetric word relationships, it is highly suitable for word similarity prediction. Particularly, on the SimLex999 word similarity dataset, our model achieves a Spearman's \u03c1 score of 0.517, compared to 0.462 of the state-of-the-art word2vec model. Interestingly, our model performs exceptionally well on verbs, outperforming state-of-the-art baselines by 20.2-41.5%. Second, pattern features can be adapted to the needs of a target NLP application. For example, we show that we can easily control whether the embeddings derived from SPs deem antonym pairs (e.g. (big,small)) as similar or dissimilar, an important distinction for tasks such as word classification and sentiment analysis. Finally, we show that a simple combination of the word similarity scores generated by our method and by word2vec results in a superior predictive power over that of each individual model, scoring as high as 0.563 in Spearman's \u03c1 on SimLex999. This emphasizes the differences between the signals captured by each of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last decade, vector space modeling (VSM) for word representation (a.k.a. word embedding) has become a key tool in NLP. Most approaches to word representation follow the distributional hypothesis (Harris, 1954) , which states that words that co-occur in similar contexts are likely to have similar meanings.",
"cite_spans": [
{
"start": 202,
"end": 216,
"text": "(Harris, 1954)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "VSMs differ in the way they exploit word co-occurrence statistics. Earlier works (see (Turney et al., 2010) ) encode this information directly in the features of the word vector representation. More recently, neural networks have become prominent in word representation learning (Bengio et al., 2003; Collobert and Weston, 2008; Collobert et al., 2011; Mikolov et al., 2013a; Pennington et al., 2014, inter alia) . Most of these models aim to learn word vectors that maximize a language model objective, thus capturing the tendencies of the represented words to co-occur in the training corpus. VSM approaches have resulted in highly useful word embeddings, obtaining high-quality results on various semantic tasks (Baroni et al., 2014) .",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Turney et al., 2010)",
"ref_id": "BIBREF45"
},
{
"start": 279,
"end": 300,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 301,
"end": 328,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 329,
"end": 352,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF12"
},
{
"start": 353,
"end": 375,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF28"
},
{
"start": 376,
"end": 412,
"text": "Pennington et al., 2014, inter alia)",
"ref_id": null
},
{
"start": 715,
"end": 736,
"text": "(Baroni et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interestingly, the impressive results of these models are achieved despite the shallow linguistic information most of them consider, which is limited to the tendency of words to co-occur together in a pre-specified context window. Particularly, very little information is encoded about the syntactic and semantic relations between the participating words, and, instead, a bag-of-words approach is taken. 1 This bag-of-words approach, however, comes with a cost. As recently shown by Hill et al. (2014) , despite the impressive results VSMs that take this approach obtain on modeling word association, they are much less successful in modeling word similarity. Indeed, when evaluating these VSMs with datasets such as wordsim353 (Finkelstein et al., 2001) , where the word pair scores reflect association rather than similarity (and therefore the (cup,coffee) pair is scored higher than the (car,train) pair), the Spearman correlation between their scores and the human scores often crosses the 0.7 level. However, when evaluating with datasets such as SimLex999 (Hill et al., 2014) , where the pair scores reflect similarity, the correlation of these models with human judgment is below 0.5 (Section 6).",
"cite_spans": [
{
"start": 404,
"end": 405,
"text": "1",
"ref_id": null
},
{
"start": 483,
"end": 501,
"text": "Hill et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 728,
"end": 754,
"text": "(Finkelstein et al., 2001)",
"ref_id": "BIBREF19"
},
{
"start": 1062,
"end": 1081,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to address the challenge in modeling word similarity, we propose an alternative, pattern-based, approach to word representation. In previous work patterns were used to represent a variety of semantic relations, including hyponymy (Hearst, 1992) , meronymy (Berland and Charniak, 1999) and antonymy (Lin et al., 2003) . Here, in order to capture similarity between words, we use symmetric patterns (SPs), such as \"X and Y\" and \"X as well as Y\", where each of the words in the pair can take either the X or the Y position. Symmetric patterns have proven useful for representing similarity between words in various NLP tasks including lexical acquisition (Widdows and Dorow, 2002) , word clustering (Davidov and Rappoport, 2006) and classification of words to semantic categories (Schwartz et al., 2014) . However, to the best of our knowledge, they have not been applied to vector space word representation.",
"cite_spans": [
{
"start": 239,
"end": 253,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF22"
},
{
"start": 265,
"end": 293,
"text": "(Berland and Charniak, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 307,
"end": 325,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF27"
},
{
"start": 661,
"end": 686,
"text": "(Widdows and Dorow, 2002)",
"ref_id": "BIBREF49"
},
{
"start": 705,
"end": 734,
"text": "(Davidov and Rappoport, 2006)",
"ref_id": "BIBREF13"
},
{
"start": 786,
"end": 809,
"text": "(Schwartz et al., 2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our representation is constructed in the following way (Section 3). For each word w, we construct a vector v of size V, where V is the size of the lexicon. Each element in v represents the co-occurrence in SPs of w with another word in the lexicon, which results in a sparse word representation. Unlike most previous works that applied SPs to NLP tasks, we do not use a hard-coded set of patterns. Instead, we extract a set of SPs from plain text using an unsupervised algorithm (Davidov and Rappoport, 2006) . This substantially reduces the human supervision our model requires and makes it applicable for practically every language for which a large corpus of text is available.",
"cite_spans": [
{
"start": 479,
"end": 508,
"text": "(Davidov and Rappoport, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our SP-based word representation is flexible. Particularly, by exploiting the semantics of the pattern-based features, our representation can be adapted to fit the specific needs of target NLP applications. In Section 4 we exemplify this property through the ability of our model to control whether its word representations will deem antonyms similar or dissimilar. Antonyms are words that have opposite semantic meanings (e.g., (small,big)), yet, due to their tendency to co-occur in the same context, they are often assigned similar vectors by co-occurrence based representation models (Section 6). Controlling the model's judgment of antonym pairs is highly useful for NLP tasks: in some tasks, like word classification, antonym pairs such as (small,big) belong to the same class (size adjectives), while in other tasks, like sentiment analysis, identifying the difference between them is crucial. As discussed in Section 4, we believe that this flexibility holds for various other pattern types and for other lexical semantic relations (e.g. hypernymy, the is-a relation, which holds in word pairs such as (dog,animal)).",
"cite_spans": [
{
"start": 429,
"end": 440,
"text": "(small,big)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We experiment (Section 6) with the SimLex999 dataset (Hill et al., 2014) , consisting of 999 pairs of words annotated by human subjects for similarity. When comparing the correlation between the similarity scores derived from our learned representation and the human scores, our representation receives a Spearman correlation coefficient score (\u03c1) of 0.517, outperforming six strong baselines, including the state-of-the-art word2vec (Mikolov et al., 2013a) embeddings, by 5.5-16.7%. Our model performs particularly well on the verb portion of SimLex999 (222 verb pairs), achieving a Spearman score of 0.578 compared to scores of 0.163-0.376 of the baseline models, an astonishing improvement of 20.2-41.5%. Our analysis reveals that the antonym adjustment capability of our model is vital for its success.",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 434,
"end": 457,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further demonstrate that the word pair scores produced by our model can be combined with those of word2vec to get an improved predictive power for word similarity. The combined scores result in a Spearman's \u03c1 correlation of 0.563, a further 4.6% improvement compared to our model, and a total of 10.1-21.3% improvement over the baseline models. This suggests that the models provide complementary information about word semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Vector Space Models for Lexical Semantics. Research on vector spaces for word representation dates back to the early 1970's (Salton, 1971) . In traditional methods, a vector for each word w is generated, with each coordinate representing the co-occurrence of w and another context item of interest: most often a word but possibly also a sentence, a document or other items. The feature representation generated by this basic construction is sometimes post-processed using techniques such as Positive Pointwise Mutual Information (PPMI) normalization and dimensionality reduction. For recent surveys, see (Turney et al., 2010; Clark, 2012; Erk, 2012) .",
"cite_spans": [
{
"start": 124,
"end": 138,
"text": "(Salton, 1971)",
"ref_id": "BIBREF39"
},
{
"start": 604,
"end": 625,
"text": "(Turney et al., 2010;",
"ref_id": "BIBREF45"
},
{
"start": 626,
"end": 638,
"text": "Clark, 2012;",
"ref_id": "BIBREF10"
},
{
"start": 639,
"end": 649,
"text": "Erk, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most VSM works share two important characteristics. First, they encode co-occurrence statistics from an input corpus directly into the word vector features. Second, they consider very little information on the syntactic and semantic relations between the represented word and its context items. Instead, a bag-of-words approach is taken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, there has been a surge of work focusing on Neural Network (NN) algorithms for word representation learning (Bengio et al., 2003; Collobert and Weston, 2008; Mnih and Hinton, 2009; Collobert et al., 2011; Dhillon et al., 2011; Mikolov et al., 2013a; Mnih and Kavukcuoglu, 2013; Lebret and Collobert, 2014; Pennington et al., 2014) . Like the more traditional models, these works also take the bag-of-words approach, encoding only shallow co-occurrence information between linguistic items. However, they encode this information into their objective, often a language model, rather than directly into the features.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 139,
"end": 166,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 167,
"end": 189,
"text": "Mnih and Hinton, 2009;",
"ref_id": "BIBREF31"
},
{
"start": 190,
"end": 213,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF12"
},
{
"start": 214,
"end": 235,
"text": "Dhillon et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 236,
"end": 258,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF28"
},
{
"start": 259,
"end": 286,
"text": "Mnih and Kavukcuoglu, 2013;",
"ref_id": "BIBREF32"
},
{
"start": 287,
"end": 314,
"text": "Lebret and Collobert, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 315,
"end": 339,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Consider, for example, the successful word2vec model (Mikolov et al., 2013a) . Its continuous bag-of-words architecture is designed to predict a word given its past and future context. The resulting objective function is:",
"cite_spans": [
{
"start": 53,
"end": 76,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "max \u2211_{t=1}^{T} log p(w_t | w_{t\u2212c}, ..., w_{t\u22121}, w_{t+1}, ..., w_{t+c})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where T is the number of words in the corpus, and c is a pre-determined window size. Another word2vec architecture, skip-gram, aims to predict the past and future context given a word. Its objective is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "max \u2211_{t=1}^{T} \u2211_{\u2212c \u2264 j \u2264 c, j \u2260 0} log p(w_{t+j} | w_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In both cases the objective function relates to the co-occurrence of words within a context window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A small number of works went beyond the bag-of-words assumption, considering deeper relationships between linguistic items. The Strudel system (Baroni et al., 2010) represents a word using the clusters of lexico-syntactic patterns in which it occurs. Murphy et al. (2012) represented words through their co-occurrence with other words in syntactic dependency relations, and then used the Non-Negative Sparse Embedding (NNSE) method to reduce the dimension of the resulting representation. Levy and Goldberg (2014) extended the skip-gram word2vec model with negative sampling (Mikolov et al., 2013b) by basing the word co-occurrence window on the dependency parse tree of the sentence. Bollegala et al. (2015) replaced bag-of-words contexts with various patterns (lexical, POS and dependency).",
"cite_spans": [
{
"start": 143,
"end": 164,
"text": "(Baroni et al., 2010)",
"ref_id": "BIBREF2"
},
{
"start": 251,
"end": 271,
"text": "Murphy et al. (2012)",
"ref_id": "BIBREF34"
},
{
"start": 489,
"end": 513,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF26"
},
{
"start": 575,
"end": 598,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We introduce a symmetric pattern based approach to word representation which is particularly suitable for capturing word similarity. In experiments we show the superiority of our model over six models of the above three families: (a) bag-of-words models that encode co-occurrence statistics directly in features; (b) NN models that implement the bag-of-words approach in their objective; and (c) models that go beyond the bag-of-words assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similarity vs. Association Most recent VSM research does not distinguish between association and similarity in a principled way, although notable exceptions exist. Turney (2012) constructed two VSMs with the explicit goal of capturing either similarity or association. A classifier that uses the output of these models was able to predict whether two concepts are associated, similar or both. Agirre et al. (2009) partitioned the wordsim353 dataset into two subsets, one focused on similarity and the other on association. They demonstrated the importance of the association/similarity distinction by showing that some VSMs perform relatively well on one subset while others perform comparatively better on the other.",
"cite_spans": [
{
"start": 393,
"end": 413,
"text": "Agirre et al. (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Hill et al. (2014) presented the SimLex999 dataset consisting of 999 word pairs judged by humans for similarity only. The participating words belong to a variety of POS tags and concreteness levels, arguably providing a more realistic sample of the English lexicon. Using their dataset, the authors show the tendency of VSMs that take the bag-of-words approach to capture association much better than similarity. This observation motivates our work.",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Hill et al. (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Symmetric Patterns. Patterns (symmetric or not) were found useful in a variety of NLP tasks, including identification of word relations such as hyponymy (Hearst, 1992) , meronymy (Berland and Charniak, 1999) and antonymy (Lin et al., 2003) . Patterns have also been applied to tackle sentence-level tasks such as identification of sarcasm, sentiment analysis and authorship attribution (Schwartz et al., 2013) .",
"cite_spans": [
{
"start": 153,
"end": 167,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF22"
},
{
"start": 179,
"end": 207,
"text": "(Berland and Charniak, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 221,
"end": 239,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF27"
},
{
"start": 386,
"end": 409,
"text": "(Schwartz et al., 2013)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Symmetric patterns (SPs) were employed in various NLP tasks to capture different aspects of word similarity. Widdows and Dorow (2002) used SPs for the task of lexical acquisition. Dorow et al. (2005) and Davidov and Rappoport (2006) used them to perform unsupervised clustering of words. Kozareva et al. (2008) used SPs to classify proper names (e.g., fish names, singer names). Feng et al. (2013) used SPs to build a connotation lexicon, and Schwartz et al. (2014) used SPs to perform minimally supervised classification of words into semantic categories.",
"cite_spans": [
{
"start": 109,
"end": 133,
"text": "Widdows and Dorow (2002)",
"ref_id": "BIBREF49"
},
{
"start": 180,
"end": 199,
"text": "Dorow et al. (2005)",
"ref_id": "BIBREF16"
},
{
"start": 204,
"end": 232,
"text": "Davidov and Rappoport (2006)",
"ref_id": "BIBREF13"
},
{
"start": 288,
"end": 310,
"text": "Kozareva et al. (2008)",
"ref_id": "BIBREF24"
},
{
"start": 379,
"end": 397,
"text": "Feng et al. (2013)",
"ref_id": "BIBREF18"
},
{
"start": 443,
"end": 465,
"text": "Schwartz et al. (2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While some of these works used a hand-crafted set of SPs (Widdows and Dorow, 2002; Dorow et al., 2005; Kozareva et al., 2008; Feng et al., 2013) , Davidov and Rappoport (2006) introduced a fully unsupervised algorithm for the extraction of SPs. Here we apply their algorithm in order to reduce the required human supervision and demonstrate the language independence of our approach.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "(Widdows and Dorow, 2002;",
"ref_id": "BIBREF49"
},
{
"start": 83,
"end": 102,
"text": "Dorow et al., 2005;",
"ref_id": "BIBREF16"
},
{
"start": 103,
"end": 125,
"text": "Kozareva et al., 2008;",
"ref_id": "BIBREF24"
},
{
"start": 126,
"end": 144,
"text": "Feng et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 147,
"end": 175,
"text": "Davidov and Rappoport (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Antonyms. A useful property of our model is its ability to control the representation of antonym pairs. Outside the VSM literature several works identified antonyms using word co-occurrence statistics, manually and automatically induced patterns, the WordNet lexicon and thesauri (Lin et al., 2003; Turney, 2008; Wang et al., 2010; Mohammad et al., 2013; Schulte im Walde and K\u00f6per, 2013; Roth and Schulte im Walde, 2014). Recently, Yih et al. (2012), Chang et al. (2013) and Ono et al. (2015) proposed word representation methods that assign dissimilar vectors to antonyms. Unlike our unsupervised model, which uses plain text only, these works used the WordNet lexicon and a thesaurus.",
"cite_spans": [
{
"start": 280,
"end": 298,
"text": "(Lin et al., 2003;",
"ref_id": "BIBREF27"
},
{
"start": 299,
"end": 312,
"text": "Turney, 2008;",
"ref_id": "BIBREF46"
},
{
"start": 313,
"end": 331,
"text": "Wang et al., 2010;",
"ref_id": "BIBREF48"
},
{
"start": 332,
"end": 353,
"text": "Mohammad et al., 2013",
"ref_id": null
},
{
"start": 452,
"end": 471,
"text": "Chang et al. (2013)",
"ref_id": "BIBREF8"
},
{
"start": 476,
"end": 493,
"text": "Ono et al. (2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we describe our approach for generating pattern-based word embeddings. We start by describing symmetric patterns (SPs), continue to show how SPs can be acquired automatically from text, and, finally, explain how these SPs are used for word embedding construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Lexico-syntactic patterns are sequences of words and wildcards (Hearst, 1992).",
"cite_spans": [
{
"start": 63,
"end": 77,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetric Patterns",
"sec_num": "3.1"
},
{
"text": "Examples of Instances: \"X of Y\" | \"point of view\", \"years of age\"; \"X the Y\" | \"around the world\", \"over the past\"; \"X to Y\" | \"nothing to do\", \"like to see\"; \"X and Y\" | \"men and women\", \"oil and gas\"; \"X in Y\" | \"keep in mind\", \"put in place\"; \"X of the Y\" | \"rest of the world\", \"end of the war\". Table 1: The six most frequent pattern candidates that contain exactly two wildcards and 1-3 words in our corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 308,
"text": "Table 1:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "Examples of patterns include \"X such as Y\", \"X or Y\" and \"X is a Y\". When patterns are instantiated in text, wildcards are replaced by words. For example, the pattern \"X is a Y\", with the X and Y wildcards, can be instantiated in phrases like \"Guffy is a dog\". Symmetric patterns are a special type of patterns that contain exactly two wildcards and that tend to be instantiated by wildcard pairs such that each member of the pair can take the X or the Y position. For example, the symmetry of the pattern \"X or Y\" is exemplified by the semantically plausible expressions \"cats or dogs\" and \"dogs or cats\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "Previous works have shown that words that co-occur in SPs are semantically similar (Section 2). In this work we use symmetric patterns to represent words. Our hypothesis is that such a representation would reflect word similarity (i.e., that similar vectors would represent similar words). Our experiments show that this is indeed the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "Symmetric Patterns Extraction. Most works that used SPs constructed a set of such patterns manually. The most prominent patterns in these works are \"X and Y\" and \"X or Y\" (Widdows and Dorow, 2002; Feng et al., 2013) . In this work we follow (Davidov and Rappoport, 2006) and apply an unsupervised algorithm for the automatic extraction of SPs from plain text.",
"cite_spans": [
{
"start": 171,
"end": 196,
"text": "(Widdows and Dorow, 2002;",
"ref_id": "BIBREF49"
},
{
"start": 197,
"end": 215,
"text": "Feng et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 241,
"end": 270,
"text": "(Davidov and Rappoport, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "This algorithm starts by defining an SP template to be a sequence of 3-5 tokens, consisting of exactly two wildcards, and 1-3 words. It then traverses a corpus, looking for frequent pattern candidates that match this template. Table 1 shows the six most frequent pattern candidates, along with common instances of these patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "The algorithm continues by traversing the pattern candidates and selecting a pattern p if a large portion of the pairs of words w i , w j that co-occur in p co-occur both in the (X = w i ,Y = w j ) form and in the (X = w j ,Y = w i ) form. Consider, for example, the pattern candidate \"X and Y\", and the pair of words \"cat\",\"dog\". Both pattern instances \"cat and dog\" and \"dog and cat\" are likely to be seen in a large corpus. If this property holds for a large portion 2 of the pairs of words that co-occur in this pattern, it is selected as symmetric. On the other hand, the pattern candidate \"X of Y\" is in fact asymmetric: pairs of words such as \"point\", \"view\" tend to come only in the (X = \"point\",Y = \"view\") form and not the other way around. The reader is referred to (Davidov and Rappoport, 2006) for a more formal description of this algorithm. The resulting pattern set we use in this paper is \"X and Y\", \"X or Y\", \"X and the Y\", \"from X to Y\", \"X or the Y\", \"X as well as Y\", \"X or a Y\",\"X rather than Y\", \"X nor Y\", \"X and one Y\", \"either X or Y\".",
"cite_spans": [
{
"start": 777,
"end": 806,
"text": "(Davidov and Rappoport, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate",
"sec_num": null
},
{
"text": "In order to generate word embeddings, our model requires a large corpus C, and a set of SPs P. The model first computes a symmetric matrix M of size V \u00d7 V (where V is the size of the lexicon). In this matrix, M_{i,j} is the co-occurrence count of both w_i, w_j and w_j, w_i in all patterns p \u2208 P. For example, if w_i, w_j co-occur 1 time in p_1 and 3 times in p_5, while w_j, w_i co-occur 7 times in p_9, then M_{i,j} = M_{j,i} = 1 + 3 + 7 = 11. We then compute the Positive Pointwise Mutual Information (PPMI) of M, denoted by M*. 3 The vector representation of the word w_i (denoted by v_i) is the i-th row in M*.",
"cite_spans": [
{
"start": 532,
"end": 533,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SP-based Word Embeddings",
"sec_num": "3.2"
},
{
"text": "Smoothing. In order to decrease the sparsity of our representation, we apply a simple smoothing technique. For each word w_i, W_i^n denotes the top n vectors with the smallest cosine distance from v_i. We define the word embedding of w_i to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SP-based Word Embeddings",
"sec_num": "3.2"
},
{
"text": "v\u2032_i = v_i + \u03b1 \u00b7 \u2211_{v \u2208 W_i^n} v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SP-based Word Embeddings",
"sec_num": "3.2"
},
{
"text": "where \u03b1 is a smoothing factor. 4 This process reduces the sparsity of our vector representation. For example, when n = 0 (i.e., no smoothing), the average number of non-zero values per vector is only 0.3K (where the vector size is \u223c250K). When n = 250, this number reaches \u223c14K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SP-based Word Embeddings",
"sec_num": "3.2"
},
{
"text": "In this section we show how our model allows us to adjust the representation of pairs of antonyms to the needs of a subsequent NLP task. This property will later be demonstrated to have a substantial impact on performance. Antonyms are pairs of words with an opposite meaning (e.g., (tall,short)). As the members of an antonym pair tend to occur in the same context, their word embeddings are often similar. For example, in the skip-gram model (Mikolov et al., 2013a) , the score of the (accept,reject) pair is 0.73, and the score of (long,short) is 0.71. Our SP-based word embeddings also exhibit a similar behavior.",
"cite_spans": [
{
"start": 444,
"end": 467,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "The question of whether antonyms are similar or not is not a trivial one. On the one hand, some NLP tasks might benefit from representing antonyms as similar. For example, in word classification tasks, words such as \"big\" and \"small\" potentially belong to the same class (size adjectives), and thus representing them as similar is desired. On the other hand, antonyms are very dissimilar by definition. This distinction is crucial in tasks such as search, where a query such as \"tall buildings\" might be poorly processed if the representations of \"tall\" and \"short\" are similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "In light of this, we construct our word embeddings to be controllable of antonyms. That is, our model contains an antonym parameter that can be turned on in order to generate word embeddings that represent antonyms as dissimilar, and turned off to represent them as similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "To implement this mechanism, we follow (Lin et al., 2003) , who showed that two patterns are particularly indicative of antonymy -\"from X to Y\" and \"either X or Y\" (e.g., \"from bottom to top\", \"either high or low\"). As it turns out, these two patterns are also symmetric, and are discovered by our automatic algorithm. Henceforth, we refer to these two patterns as antonym patterns.",
"cite_spans": [
{
"start": 39,
"end": 57,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "Based on this observation, we present a variant of our model, which is designed to assign dissimilar vector representations to antonyms. We define two new matrices: M SP and M AP , which are computed similarly to M * (see Section 3.2), only with different SP sets. M SP is computed using the original set of SPs, excluding the two antonym patterns, while M AP is computed using the two antonym patterns only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "Then, we define an antonym-sensitive, co-occurrence matrix M +AN to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "M +AN = M SP \u2212 \u03b2 \u2022 M AP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "where \u03b2 is a weighting parameter. 5 Similarly to M * , the antonym-sensitive word representation of the i th word is the i th row in M +AN .",
"cite_spans": [
{
"start": 34,
"end": 35,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
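{
"text": "Given the two PPMI matrices, the antonym-sensitive combination is a single weighted subtraction (illustrative sketch; m_sp and m_ap are assumed to be precomputed as described in the text):

```python
import numpy as np

def antonym_sensitive(m_sp, m_ap, beta):
    # M_{+AN} = M_SP - beta * M_AP, where m_sp is built from all SPs
    # except the two antonym patterns ("from X to Y", "either X or Y")
    # and m_ap is built from those two patterns only.
    return m_sp - beta * m_ap
```

Typical tuned values of \u03b2 are 7 and 10 (footnote 5); with \u03b2 = \u22121 the antonym-pattern counts are simply added back, which corresponds to the setting where antonyms are represented as similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},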
{
"text": "Discussion. The case of antonyms presented in this paper is an example of one relation that a pattern based representation model can control. This property can be potentially extended to additional word relations, as long as they can be identified using patterns. Consider, for example, the hypernymy relation (is-a, as in the (apple,fruit) pair). This relation can be accurately identified using patterns such as \"X such as Y\" and \"X like Y\" (Hearst, 1992) . Consequently, it is likely that a pattern-based model can be adapted to control its predictions with respect to this relation using a method similar to the one we use to control antonym representation. We consider this a strong motivation for a deeper investigation of patternbased VSMs in future work.",
"cite_spans": [
{
"start": 443,
"end": 457,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "We next turn to empirically evaluate the performance of our model in estimating word similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Representation",
"sec_num": "4"
},
{
"text": "Evaluation Dataset. We experiment with the SimLex999 dataset (Hill et al., 2014) , 6 consisting of 999 pairs of words. Each pair in this dataset was annotated by roughly 50 human subjects, who were asked to score the similarity between the pair members. SimLex999 has several appealing properties, including its size, part-of-speech diversity, and diversity in the level of concreteness of the participating words.",
"cite_spans": [
{
"start": 61,
"end": 80,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 83,
"end": 84,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "We follow a 10-fold cross-validation experimental protocol. In each fold, we randomly sample 25% of the SimLex999 word pairs (\u223c250 pairs) and use them as a development set for parameter tuning. We use the remaining 75% of the pairs (\u223c750 pairs) as a test set. We report the average of the results we got in the 10 folds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
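{
"text": "This protocol can be sketched as follows (illustrative; pairs stands for the 999 SimLex999 word pairs):

```python
import random

def folds(pairs, n_folds=10, dev_frac=0.25, seed=0):
    # Yield (dev, test) splits: each fold randomly samples 25% of the
    # pairs for parameter tuning and keeps the remaining 75% for testing.
    rng = random.Random(seed)
    for _ in range(n_folds):
        shuffled = list(pairs)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * dev_frac)
        yield shuffled[:cut], shuffled[cut:]
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},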
{
"text": "Training Corpus. We use an 8G words corpus, constructed using the word2vec script. 7 Through this script we also apply a pre-processing step which employs the word2phrase tool (Mikolov et al., 2013c) to merge common word pairs and triples to expression tokens. Our corpus consists of four datasets: (a) The 2012 and 2013 crawled news articles from the ACL 2014 workshop on statistical machine translation (Bojar et al., 2014) ; 8 (b) The One Billion Word Benchmark of Chelba et al. 2013; 9 (c) The UMBC corpus (Han et al., 2013) ; 10 and (d) The September 2014 dump of the English Wikipedia. 11",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF30"
},
{
"start": 405,
"end": 425,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 488,
"end": 489,
"text": "9",
"ref_id": "BIBREF0"
},
{
"start": 510,
"end": 528,
"text": "(Han et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "We compare our model against six baselines: one that encodes bag-of-words co-occurrence statistics into its features (model 1 below), three NN models that encode the same type of information into their objective function (models 2-4), and two models that go beyond the bag-of-words assumption (models 5-6). Unless stated otherwise, all models are trained on our training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "A simple model where each coordinate corresponds to the co-occurrence count of the represented word with another word in the training corpus. The resulted features are re-weighted according to PPMI. The model's window size parameter is tuned on the development set. 12 2-3. word2vec. The state-of-the-art word2vec toolkit (Mikolov et al., 2013a) 13 offers two word embedding architectures: continuous-bagof-words (CBOW) and skip-gram. We follow the recommendations of the word2vec script for setting the parameters of both models, and tune the window size on the development set. 14 4. GloVe. GloVe (Pennington et al., 2014) 15 is a global log-bilinear regression model for word embedding generation, which trains only on the nonzero elements in a co-occurrence matrix. We use the parameters suggested by the authors, and tune the window size on the development set. 16 5. NNSE. The NNSE model (Murphy et al., 2012) . As no full implementation of this model is available online, we use the off-the-shelf embeddings available at the authors' website, 17 taking the full document and dependency model with 2500 dimensions. Embeddings were computed using a dataset about twice as big as our corpus.",
"cite_spans": [
{
"start": 322,
"end": 345,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF28"
},
{
"start": 599,
"end": 624,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 894,
"end": 915,
"text": "(Murphy et al., 2012)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BOW.",
"sec_num": "1."
},
{
"text": "6. Dep. The modified, dependency-based, skipgram model (Levy and Goldberg, 2014) . To generate dependency links, we use the Stanford POS Tagger (Toutanova et al., 2003) 18 and the MALT parser (Nivre et al., 2006) . 19 We follow the parameters suggested by the authors.",
"cite_spans": [
{
"start": 55,
"end": 80,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 144,
"end": 168,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF43"
},
{
"start": 192,
"end": 212,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF35"
},
{
"start": 215,
"end": 217,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BOW.",
"sec_num": "1."
},
{
"text": "For evaluation we follow the standard VSM literature: the score assigned to each pair of words by a model m is the cosine similarity between the vectors induced by m for the participating words. m's quality is evaluated by computing the Spearman correlation coefficient score (\u03c1) between the ranking derived from m's scores and the one derived from the human scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},
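{
"text": "This evaluation can be sketched in pure Python (illustrative; ties in the rankings are not handled, unlike a full Spearman implementation, and the embeddings and pairs are assumed given):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rankdata(a):
    # Ascending ranks 1..n; no tie handling in this sketch.
    order = sorted(range(len(a)), key=lambda i: a[i])
    ranks = [0.0] * len(a)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def spearman(x, y):
    # Pearson correlation of the two rank vectors.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def evaluate(embeddings, pairs):
    # Spearman's rho between model cosine scores and human scores,
    # where pairs holds (word1, word2, human_score) triples.
    model = [cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in pairs]
    human = [h for _, _, h in pairs]
    return spearman(model, human)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},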
{
"text": "Main Result. Table 2 presents our results. Our model outperforms the baselines by a margin of 5.5-16.7% in the Spearman's correlation coefficient (\u03c1). Note that the capability of our model to control antonym representation has a substantial impact, boosting its performance from \u03c1 = 0.434 when the antonym parameter is turned off to \u03c1 = 0.517 when it is turned on.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Model Combination. We turn to explore whether our pattern-based model and our best baseline, skip-gram, which implements a bag-ofwords approach, can be combined to provide an improved predictive power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For each pair of words in the test set, we take a linear combination of the cosine similarity score computed using our embeddings and the score computed using the skip-gram (SG) embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "f + (w i , w j ) = \u03b3\u2022f SP (w i , w j )+(1\u2212\u03b3)\u2022f SG (w i , w j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this equation f <m> (w i , w j ) is the cosine similarity between the vector representations of words w i and w j according to model m, and \u03b3 is a Spearman's \u03c1 scores of our SP-based model with the antonym parameter turned on (SP (+) ) or off (SP (\u2212) ) and of the baselines described in Section 5.2. Joint (SP (+) , skip-gram) is an interpolation of the scores produced by skip-gram and our SP (+) model. Average Human Score is the average correlation of a single annotator with the average score of all annotators, taken from (Hill et al., 2014) .",
"cite_spans": [
{
"start": 250,
"end": 253,
"text": "(\u2212)",
"ref_id": null
},
{
"start": 313,
"end": 316,
"text": "(+)",
"ref_id": null
},
{
"start": 530,
"end": 549,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "weighting parameter tuned on the development set (a common value is 0.8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
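{
"text": "The interpolation itself is a one-liner (illustrative; f_sp and f_sg are the cosine scores of the two models for a given word pair):

```python
def combined_score(f_sp, f_sg, gamma=0.8):
    # f_+(w_i, w_j) = gamma * f_SP + (1 - gamma) * f_SG; gamma is tuned
    # on the development set, and 0.8 is a common tuned value per the text.
    return gamma * f_sp + (1.0 - gamma) * f_sg
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},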
{
"text": "As shown in Table 2 , this combination forms the top performing model on SimLex999, achieving a Spearman's \u03c1 score of 0.563. This score is 4.6% higher than the score of our model, and a 10.1-21.3% improvement compared to the baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "wordsim353 Experiments. The wordsim353 dataset (Finkelstein et al., 2001 ) is frequently used for evaluating word representations. In order to be compatible with previous work, we experiment with this dataset as well. As our word embeddings are designed to support word similarity rather than relatedness, we focus on the similarity subset of this dataset, according to the division presented in (Agirre et al., 2009) .",
"cite_spans": [
{
"start": 47,
"end": 72,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF19"
},
{
"start": 396,
"end": 417,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As noted by (Hill et al., 2014) , the word pair scores in both subsets of wordsim353 reflect word association. This is because the two subsets created by (Agirre et al., 2009) keep the original wordsim353 scores, produced by human evaluators that were instructed to score according to association rather than similarity. Consequently, we expect our model to perform worse on this dataset compared to a dataset, such as SimLex999, whose annotators were guided to score word pairs according to similarity.",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 154,
"end": 175,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Contrary to SimLex999, wordsim353 treats antonyms as similar. For example, the similarity score of the (life,death) and (profit,loss) pairs are 7.88 and 7.63 respectively, on a 0-10 scale. Consequently, we turn the antonym parameter off for this experiment. Spearman's \u03c1 scores for the similarity portion of wordsim353 (Agirre et al., 2009) . SP (\u2212) is our model with the antonym parameter turned off. Other abbreviations are as in Table 2 model is not as successful on a dataset that doesn't reflect pure similarity. Yet, it still crosses the \u03c1 = 0.7 score, a quite high performance level.",
"cite_spans": [
{
"start": 319,
"end": 340,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 346,
"end": 349,
"text": "(\u2212)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Part-of-Speech Analysis. We next perform a POS-based evaluation of the participating models, using the three portions of the SimLex999: 666 pairs of nouns, 222 pairs of verbs, and 111 pairs of adjectives. Table 4 indicates that our SP (+) model is exceptionally successful in predicting verb and adjective similarity. On verbs, SP (+) obtains a score of \u03c1 = 0.578, a 20.2-41.5% improvement over the baselines. On adjectives, SP (+) performs even better (\u03c1 = 0.663), an improvement of 5.9-12.3% over the baselines. On nouns, SP (+) is second only to skip-gram, though with very small margin (0.497 vs. 0.501), and is outperforming the other baselines by 1-12%. The lower performance of our model on nouns might partially explain its relatively low performance on wordsim353, which is composed exclusively of nouns.",
"cite_spans": [
{
"start": 428,
"end": 431,
"text": "(+)",
"ref_id": null
},
{
"start": 527,
"end": 530,
"text": "(+)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Analysis of Antonyms. We now turn to a qualitative analysis, in order to understand the impact of our modeling decisions on the scores of antonym word pairs. Table 5 presents examples of antonym pairs taken from the SimLex999 dataset, along with their relative ranking among all pairs in the set, as judged by our model (SP (+) with \u03b2 = 10 or SP (\u2212) with \u03b2 = \u22121) and by the best -old 1 6 6 narrow -wide 1 7 8 necessary -unnecessary 2 2 9 bottom -top 3 8 10 absence -presence 4 7 9 receive -send 1 9 8 fail -succeed 1 8 6 Table 5 :",
"cite_spans": [
{
"start": 324,
"end": 327,
"text": "(+)",
"ref_id": null
},
{
"start": 346,
"end": 349,
"text": "(\u2212)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 5",
"ref_id": null
},
{
"start": 379,
"end": 557,
"text": "-old 1 6 6 narrow -wide 1 7 8 necessary -unnecessary 2 2 9 bottom -top 3 8 10 absence -presence 4 7 9 receive -send 1 9 8 fail -succeed 1 8 6 Table 5",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Examples of antonym pairs and their decile in the similarity ranking of our SP model with the antonym parameter turned on (+AN, \u03b2=10) or off (-AN, \u03b2=-1), and of the skip-gram model, the best baseline. All examples are judged in the lowest decile (1) by SimLex999's annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "baseline representation (skip-gram). Each pair of words is assigned a score between 1 and 10 by each model, where a score of M means that the pair is ranked at the M 'th decile. The examples in the table are taken from the first (lowest) decile according to SimLex999's human evaluators. The table shows that when the antonym parameter is off, our model generally recognizes antonyms as similar. In contrast, when the parameter is on, ranks of antonyms substantially decrease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Antonymy as Word Analogy. One of the most notable features of the skip-gram model is that some geometric relations between its vectors translate to semantic relations between the represented words (Mikolov et al., 2013c) , e.g.:",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "v woman \u2212 v man + v king \u2248 v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "queen It is therefore possible that a similar method can be applied to capture antonymy -a useful property that our model was demonstrated to have.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "To test this hypothesis, we generated a set of 200 analogy questions of the form \"X -Y + Z = ?\" where X and Y are antonyms, and Z is a word with an unknown antonym. 20 Example questions include: \"stupid -smart + life = ?\" (death) and \"huge -tiny + arrive = ?\" (leave). We applied the standard word analogy evaluation (Mikolov et al., 2013c) on this dataset with the skip-gram embeddings, and found that results are quite poor: 3.5% accuracy (compared to an average 56% accuracy this model obtains on a standard word analogy dataset (Mikolov et al., 2013a) ). Given these results, the question of whether skip-gram is capa- 20 Two human annotators selected a list of potential antonym pairs from SimLex999 and wordsim353. We took the intersection of their selections (26 antonym pairs) and randomly generated 200 analogy questions, each containing two antonym pairs. The dataset can be found in www.cs. huji.ac.il/\u02dcroys02/papers/sp_embeddings/ antonymy_analogy_questions.zip ble of accounting for antonyms remains open.",
"cite_spans": [
{
"start": 317,
"end": 340,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF30"
},
{
"start": 532,
"end": 555,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF28"
},
{
"start": 623,
"end": 625,
"text": "20",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
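{
"text": "The standard analogy evaluation applied here answers a question of the form X - Y + Z = ? with the vocabulary word closest to v_X \u2212 v_Y + v_Z, excluding the question words themselves. A minimal sketch (the embeddings passed in are assumed given; any toy vectors are purely illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def answer_analogy(emb, x, y, z):
    # Return the word w maximizing cos(v_x - v_y + v_z, v_w),
    # excluding the three question words, per the standard evaluation.
    target = [a - b + c for a, b, c in zip(emb[x], emb[y], emb[z])]
    candidates = (w for w in emb if w not in (x, y, z))
    return max(candidates, key=lambda w: cosine(target, emb[w]))
```

A predicted answer counts as correct only if it exactly matches the gold antonym; accuracy is the fraction of the 200 questions answered correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},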
{
"text": "We presented a symmetric pattern based model for word vector representation. On SimLex999, our model is superior to six strong baselines, including the state-of-the-art word2vec skip-gram model by as much as 5.5-16.7% in Spearman's \u03c1 score. We have shown that this gain is largely attributed to the remarkably high performance of our model on verbs, where it outperforms all baselines by 20.2-41.5%. We further demonstrated the adaptability of our model to antonym judgment specifications, and its complementary nature with respect to word2vec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In future work we intend to extend our patternbased word representation framework beyond symmetric patterns. As discussed in Section 4, other types of patterns have the potential to further improve the expressive power of word vectors. A particularly interesting challenge is to enhance our pattern-based approach with bag-of-words information, thus enjoying the provable advantages of both frameworks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "A few recent VSMs go beyond the bag-of-words assumption and consider deeper linguistic information in word representation. We address this line of work in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use 15% of the pairs of words as a threshold. 3 PPMI was shown useful for various co-occurrence models(Baroni et al., 2014).4 We tune n and \u03b1 using a development set (Section 5). Typical values for n and \u03b1 are 250 and 7, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tune \u03b2 using a development set (Section 5). Typical values are 7 and 10.6 www.cl.cam.ac.uk/\u02dcfh295/simlex.html 7 code.google.com/p/word2vec/source/ browse/trunk/demo-train-big-model-v1.sh",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/wmt14/trainingmonolingual-news-crawl/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cs.cmu.edu/\u02dcbmurphy/NNSE/ 18 nlp.stanford.edu/software/ 19 http://www.maltparser.org/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Elad Eban for his helpful advice. This research was funded (in part) by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and the Israel Ministry of Science and Technology Center of Knowledge in Machine Learning and Artificial Intelligence (Grant number 3-9243).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Window size 2 is generally selected for both models",
"authors": [],
"year": null,
"venue": "",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "http://www.statmt.org/lm-benchmark/ 1-billion-word-language-modeling- benchmark-r13output.tar.gz 10 http://ebiquity.umbc.edu/redirect/to/ resource/id/351/UMBC-webbase-corpus 11 dumps.wikimedia.org/enwiki/latest/ enwiki-latest-pages-articles.xml.bz2 12 The value 2 is almost constantly selected. 13 https://code.google.com/p/word2vec/ 14 Window size 2 is generally selected for both models. 15 nlp.stanford.edu/projects/glove/ 16 Window size 2 is generally selected.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distri- butional and wordnet-based approaches. In Proc. of HLT-NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Strudel: A corpus-based semantic model based on properties and types",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Barbu",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Brian Murphy, Eduard Barbu, and Mas- simo Poesio. 2010. Strudel: A corpus-based seman- tic model based on properties and types. Cognitive Science.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proc. of ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. JMLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Finding parts in very large corpora",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Berland",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Berland and Eugene Charniak. 1999. Find- ing parts in very large corpora. In Proc. of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Proc. of the Ninth Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, and Lucia Specia, editors. 2014. Proc. of the Ninth Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning word representations from relational graphs",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Takanori",
"middle": [],
"last": "Maehara",
"suffix": ""
},
{
"first": "Yuichi",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Takanori Maehara, Yuichi Yoshida, and Ken ichi Kawarabayashi. 2015. Learning word representations from relational graphs. In Proc. of AAAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-Relational Latent Semantic Analysis",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-wei Chang, Wen-tau Yih, and Christopher Meek. 2013. Multi-Relational Latent Semantic Analysis. In Proc. of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Phillipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. CoRR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Vector space models of lexical meaning",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2012,
"venue": "Handbook of Contemporary Semanticssecond edition",
"volume": "",
"issue": "",
"pages": "1--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark. 2012. Vector space models of lexi- cal meaning. Handbook of Contemporary Seman- ticssecond edition, pages 1-42.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "JMLR",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12:2493-2537.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ACL-Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov and Ari Rappoport. 2006. Effi- cient unsupervised discovery of word categories us- ing symmetric patterns and high frequency words. In Proc. of ACL-Coling.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Enhanced sentiment learning using twitter hashtags and smileys",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proc. of Coling.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-view learning of word embeddings via cca",
"authors": [
{
"first": "Paramveer",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Dean",
"middle": [
"P"
],
"last": "Foster",
"suffix": ""
},
{
"first": "Lyle",
"middle": [
"H"
],
"last": "Ungar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paramveer Dhillon, Dean P Foster, and Lyle H Ungar. 2011. Multi-view learning of word embeddings via cca. In Proc. of NIPS.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using Curvature and Markov Clustering in Graphs for Lexical Acquisition and Word Sense Discrimination",
"authors": [
{
"first": "Beate",
"middle": [],
"last": "Dorow",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Katarina",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Eckmann",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Sergi",
"suffix": ""
},
{
"first": "Elisha",
"middle": [],
"last": "Moses",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beate Dorow, Dominic Widdows, Katarina Ling, Jean- Pierre Eckmann, Danilo Sergi, and Elisha Moses. 2005. Using Curvature and Markov Clustering in Graphs for Lexical Acquisition and Word Sense Dis- crimination.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vector Space Models of Word Meaning and Phrase Meaning: A Survey",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2012,
"venue": "Language and Linguistics Compass",
"volume": "6",
"issue": "10",
"pages": "635--653",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2012. Vector Space Models of Word Meaning and Phrase Meaning: A Survey. Language and Linguistics Compass, 6(10):635-653.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Connotation lexicon: A dash of sentiment beneath the surface meaning",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jun Seok",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proc. of ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proc. of WWW.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Umbc ebiquity-core: Semantic textual similarity systems",
"authors": [
{
"first": "Lushan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Abhay",
"middle": [
"L"
],
"last": "Kashyap",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of *SEM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lushan Han, Abhay L. Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. Umbc ebiquity-core: Semantic textual similarity systems. In Proc. of *SEM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of Coling",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proc. of Coling -Volume 2.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.3456"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv:1408.3456 [cs.CL].",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Semantic class learning from the web with hyponym pattern linkage graphs",
"authors": [
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proc. of ACL- HLT.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Word embeddings through hellinger pca",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Lebret",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Lebret and Ronan Collobert. 2014. Word em- beddings through hellinger pca. In Proc. of EACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proc. of ACL (Volume 2: Short Papers).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Identifying synonyms among distributionally similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distri- butionally similar words. In Proc. of IJCAI.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Proc. of NIPS.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A scalable hierarchical distributed language model",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Proc. of NIPS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning word embeddings efficiently with noise-contrastive estimation",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Proc. of NIPS.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Murphy, Partha Pratim Talukdar, and Tom Mitchell. 2012. Learning Effective and In- terpretable Semantic Models using Non-Negative Sparse Embedding. In Proc. of Coling.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Maltparser: A data-driven parser-generator for dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for de- pendency parsing. In Proc. of LREC.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Word embedding-based antonym detection using thesauri and distributional information",
"authors": [
{
"first": "Masataka",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proc. of NAACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Sabine Schulte im Walde. 2014. Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification. In Proc. of ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The SMART Retrieval System: Experiments in Automatic Document Processing",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton. 1971. The SMART Retrieval Sys- tem: Experiments in Automatic Document Process- ing. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Pattern-based distinction of paradigmatic relations for german nouns, verbs, adjectives. Language Processing and Knowledge in the Web",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Koper",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "184--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde and Maximilian Koper. 2013. Pattern-based distinction of paradigmatic relations for german nouns, verbs, adjectives. Language Pro- cessing and Knowledge in the Web, pages 184-198.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Authorship attribution of micromessages",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
},
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Oren Tsur, Ari Rappoport, and Moshe Koppel. 2013. Authorship attribution of micro- messages. In Proc. of EMNLP.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Minimally supervised classification to semantic categories using automatically acquired symmetric patterns",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2014. Minimally supervised classification to semantic cat- egories using automatically acquired symmetric pat- terns. In Proc. of Coling.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proc. of NAACL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Icwsm-a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. Icwsm-a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In Proc. of ICWSM.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney, Patrick Pantel, et al. 2010. From fre- quency to meaning: Vector space models of seman- tics. Journal of Artificial Intelligence research.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A uniform approach to analogies, synonyms, antonyms, and associations",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2008. A uniform approach to analo- gies, synonyms, antonyms, and associations. In Proc. of Coling.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Domain and function: A dual-space model of semantic relations and compositions",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "533--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2012. Domain and function: A dual-space model of semantic relations and compo- sitions. Journal of Artificial Intelligence Research, pages 533-585.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Pattern-based synonym and antonym extraction",
"authors": [
{
"first": "Wenbo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Sheth",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbo Wang, Christopher Thomas, Amit Sheth, and Victor Chan. 2010. Pattern-based synonym and antonym extraction. In Proc. of ACM.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "A graph model for unsupervised lexical acquisition",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Beate",
"middle": [],
"last": "Dorow",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In Proc. of Coling.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Polarity inducing latent semantic analysis",
"authors": [
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Geoffrey Zweig, and John C. Platt. 2012. Polarity inducing latent semantic analysis. In Proc. of EMNLP-CoNLL.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "presents the results. As expected, our",
"html": null,
"content": "<table><tr><td>Model</td><td>Spearman's \u03c1</td></tr><tr><td>GloVe</td><td>0.677</td></tr><tr><td>Dep</td><td>0.712</td></tr><tr><td>BOW</td><td>0.729</td></tr><tr><td>CBOW</td><td>0.734</td></tr><tr><td>NNSE</td><td>0.78</td></tr><tr><td>skip-gram</td><td>0.792</td></tr><tr><td>SP (\u2212)</td><td>0.728</td></tr><tr><td>Average Human Score</td><td>0.756</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": ".",
"html": null,
"content": "<table><tr><td>Model</td><td>Adj.</td><td colspan=\"2\">Nouns Verbs</td></tr><tr><td>GloVe</td><td>0.571</td><td>0.377</td><td>0.163</td></tr><tr><td>Dep</td><td>0.54</td><td>0.449</td><td>0.376</td></tr><tr><td>BOW</td><td>0.548</td><td>0.451</td><td>0.276</td></tr><tr><td>CBOW</td><td>0.579</td><td>0.48</td><td>0.252</td></tr><tr><td>NNSE</td><td>0.594</td><td>0.487</td><td>0.318</td></tr><tr><td>skip-gram</td><td>0.604</td><td>0.501</td><td>0.307</td></tr><tr><td>SP (+)</td><td>0.663</td><td>0.497</td><td>0.578</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "A POS-based analysis of the various models. Numbers are the Spearman's \u03c1 scores of each model on each of the respective portions of SimLex999.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}