| { |
| "paper_id": "N10-1013", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:50:41.886855Z" |
| }, |
| "title": "Multi-Prototype Vector-Space Models of Word Meaning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mooney@cs.utexas.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Current vector-space models of lexical semantics create a single \"prototype\" vector to represent the meaning of a word. However, due to lexical ambiguity, encoding word meaning with a single vector is problematic. This paper presents a method that uses clustering to produce multiple \"sense-specific\" vectors for each word. This approach provides a context-dependent vector representation of word meaning that naturally accommodates homonymy and polysemy. Experimental comparisons to human judgements of semantic similarity for both isolated words as well as words in sentential contexts demonstrate the superiority of this approach over both prototype and exemplar based vector-space models.", |
| "pdf_parse": { |
| "paper_id": "N10-1013", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Current vector-space models of lexical semantics create a single \"prototype\" vector to represent the meaning of a word. However, due to lexical ambiguity, encoding word meaning with a single vector is problematic. This paper presents a method that uses clustering to produce multiple \"sense-specific\" vectors for each word. This approach provides a context-dependent vector representation of word meaning that naturally accommodates homonymy and polysemy. Experimental comparisons to human judgements of semantic similarity for both isolated words as well as words in sentential contexts demonstrate the superiority of this approach over both prototype and exemplar based vector-space models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automatically judging the degree of semantic similarity between words is an important task useful in text classification (Baker and McCallum, 1998) , information retrieval (Sanderson, 1994) , textual entailment, and other language processing tasks. The standard empirical approach to this task exploits the distributional hypothesis, i.e. that similar words appear in similar contexts (Curran and Moens, 2002; Pereira et al., 1993) . Traditionally, word types are represented by a single vector of contextual features derived from cooccurrence information, and semantic similarity is computed using some measure of vector distance (Lee, 1999; Lowe, 2001 ).", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 147, |
| "text": "(Baker and McCallum, 1998)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 172, |
| "end": 189, |
| "text": "(Sanderson, 1994)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 385, |
| "end": 409, |
| "text": "(Curran and Moens, 2002;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 410, |
| "end": 431, |
| "text": "Pereira et al., 1993)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 631, |
| "end": 642, |
| "text": "(Lee, 1999;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 643, |
| "end": 653, |
| "text": "Lowe, 2001", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, due to homonymy and polysemy, capturing the semantics of a word with a single vector is problematic. For example, the word club is similar to both bat and association, which are not at all similar to each other. Word meaning violates the triangle inequality when viewed at the level of word types, posing a problem for vector-space models (Tversky and Gati, 1982) . A single \"prototype\" vector is simply incapable of capturing phenomena such as homonymy and polysemy. Also, most vector-space models are context independent, while the meaning of a word clearly depends on context. The word club in \"The caveman picked up the club\" is similar to bat in \"John hit the robber with a bat,\" but not in \"The bat flew out of the cave.\"", |
| "cite_spans": [ |
| { |
| "start": 348, |
| "end": 372, |
| "text": "(Tversky and Gati, 1982)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We present a new resource-lean vector-space model that represents a word's meaning by a set of distinct \"sense specific\" vectors. The similarity of two isolated words A and B is defined as the minimum distance between one of A's vectors and one of B's vectors. In addition, a context-dependent meaning for a word is determined by choosing one of the vectors in its set based on minimizing the distance to the vector representing the current context. Consequently, the model supports judging the similarity of both words in isolation and words in context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The set of vectors for a word is determined by unsupervised word sense discovery (WSD) (Sch\u00fctze, 1998) , which clusters the contexts in which a word appears. In previous work, vector-space lexical similarity and word sense discovery have been treated as two separate tasks. This paper shows how they can be combined to create an improved vector-space model of lexical semantics. First, a word's contexts are clustered to produce groups of similar context vectors. An average \"prototype\" vector is then computed separately for each cluster, producing a set of vectors for each word. Finally, as described above, these cluster vectors can be used to determine the semantic similarity of both isolated words and words in context. The approach is completely modular, and can integrate any clustering method with any traditional vector-space model. We present experimental comparisons to human judgements of semantic similarity for both isolated words and words in sentential context. The results demonstrate the superiority of a clustered approach over both traditional prototype and exemplar-based vector-space models. For example, given the isolated target word singer our method produces the most similar word vocalist, while using a single prototype gives musician. Given the word cell in the context: \"The book was published while Piasecki was still in prison, and a copy was delivered to his cell,\" the standard approach produces protein while our method yields incarcerated.", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 102, |
| "text": "(Sch\u00fctze, 1998)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The remainder of the paper is organized as follows: Section 2 gives relevant background on prototype and exemplar methods for lexical semantics, Section 3 presents our multi-prototype method, Section 4 presents our experimental evaluations, Section 5 discusses future work, and Section 6 concludes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Psychological concept models can be roughly divided into two classes:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Prototype models represent concepts by an abstract prototypical instance, similar to a cluster centroid in parametric density estimation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. Exemplar models represent concepts by a concrete set of observed instances, similar to nonparametric approaches to density estimation in statistics (Ashby and Alfonso-Reese, 1995). Tversky and Gati (1982) famously showed that conceptual similarity violates the triangle inequality, lending evidence for exemplar-based models in psychology. Exemplar models have been previously used for lexical semantics problems such as selectional preference (Erk, 2007) and thematic fit (Vandekerckhove et al., 2009) . Individual exemplars can be quite noisy, and the model can incur high computational overhead at prediction time, since naively computing the similarity between two words using each occurrence in a textual corpus as an exemplar requires O(n\u00b2) comparisons. Instead, the standard approach is to compute a single prototype vector for each word from its occurrences. Figure 1: Overview of the multi-prototype approach to near-synonym discovery for a single target word independent of context. Occurrences are clustered and cluster centroids are used as prototype vectors. Note the \"hurricane\" sense of position (cluster 3) is not typically considered appropriate in WSD.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 207, |
| "text": "Tversky and Gati (1982)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 447, |
| "end": 458, |
| "text": "(Erk, 2007)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 476, |
| "end": 505, |
| "text": "(Vandekerckhove et al., 2009)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 785, |
| "end": 793, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This paper presents a multi-prototype vector space model for lexical semantics with a single parameter K (the number of clusters) that generalizes both prototype (K = 1) and exemplar (K = N , the total number of instances) methods. Such models have been widely studied in the psychology literature (Griffiths et al., 2007; Love et al., 2004; Rosseel, 2002) . By employing multiple prototypes per word, vector space models can account for homonymy, polysemy and thematic variation in word usage. Furthermore, such approaches require only O(K\u00b2) comparisons for computing similarity, yielding potential computational savings over the exemplar approach when K \u226a N , while reaping many of the same benefits.", |
| "cite_spans": [ |
| { |
| "start": 298, |
| "end": 322, |
| "text": "(Griffiths et al., 2007;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 323, |
| "end": 341, |
| "text": "Love et al., 2004;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 342, |
| "end": 356, |
| "text": "Rosseel, 2002)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Previous work on lexical semantic relatedness has focused on two approaches: (1) mining monolingual or bilingual dictionaries or other pre-existing resources to construct networks of related words (Agirre and Edmond, 2006; Ramage et al., 2009) , and (2) using the distributional hypothesis to automatically infer a vector-space prototype of word meaning from large corpora (Agirre et al., 2009; Curran, 2004; Harris, 1954) . The former approach tends to have greater precision, but depends on hand-crafted dictionaries and cannot, in general, model sense frequency (Budanitsky and Hirst, 2006) . The latter approach is fundamentally more scalable as it does not rely on specific resources and can model corpus-specific sense distributions. However, the distributional approach can suffer from poor precision, as thematically similar words (e.g., singer and actor) and antonyms often occur in similar contexts (Lin et al., 2003) .", |
| "cite_spans": [ |
| { |
| "start": 197, |
| "end": 222, |
| "text": "(Agirre and Edmond, 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 223, |
| "end": 243, |
| "text": "Ramage et al., 2009)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 373, |
| "end": 394, |
| "text": "(Agirre et al., 2009;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 395, |
| "end": 408, |
| "text": "Curran, 2004;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 409, |
| "end": 422, |
| "text": "Harris, 1954)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 565, |
| "end": 593, |
| "text": "(Budanitsky and Hirst, 2006)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 909, |
| "end": 927, |
| "text": "(Lin et al., 2003)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Unsupervised word-sense discovery has been studied by a number of researchers (Agirre and Edmond, 2006; Sch\u00fctze, 1998) . Most work has also focused on corpus-based distributional approaches, varying the vector-space representation, e.g. by incorporating syntactic and co-occurrence information from the words surrounding the target term (Pereira et al., 1993) .", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 101, |
| "text": "(Agirre and Edmond, 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 102, |
| "end": 116, |
| "text": "Sch\u00fctze, 1998)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 335, |
| "end": 357, |
| "text": "(Pereira et al., 1993;", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our approach is similar to standard vector-space models of word meaning, with the addition of a per-word-type clustering step: Occurrences for a specific word type are collected from the corpus and clustered using any appropriate method (\u00a73.1). Similarity between two word types is then computed as a function of their cluster centroids (\u00a73.2), instead of the centroid of all the word's occurrences. Figure 1 gives an overview of this process.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 401, |
| "end": 409, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multi-Prototype Vector-Space Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Multiple prototypes for each word w are generated by clustering feature vectors v(c) derived from each occurrence c \u2208 C(w) in a large textual corpus and collecting the resulting cluster centroids \u03c0_k(w), k \u2208 [1, K]. This approach is commonly employed in unsupervised word sense discovery; however, we do not assume that clusters correspond to traditional word senses. Rather, we only rely on clusters to capture meaningful variation in word usage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering Occurrences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Our experiments employ a mixture of von Mises-Fisher distributions (movMF) clustering method with first-order unigram contexts (Banerjee et al., 2005) . Feature vectors v(c) are composed of individual features I(c, f), taken as all unigrams f \u2208 F occurring in a 10-word window around w.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 150, |
| "text": "(Banerjee et al., 2005)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering Occurrences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Like spherical k-means (Dhillon and Modha, 2001), movMF models semantic relatedness using cosine similarity, a standard measure of textual similarity. However, movMF introduces an additional per-cluster concentration parameter controlling its semantic breadth, allowing it to more accurately model non-uniformities in the distribution of cluster sizes. Based on preliminary experiments comparing various clustering methods, we found movMF gave the best results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering Occurrences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The similarity between two words in a multi-prototype model can be computed straightforwardly, requiring only simple modifications to standard distributional similarity methods such as those presented by Curran (2004) . Given words w and w', we define two noncontextual clustered similarity metrics to measure the similarity of isolated words:", |
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 216, |
| "text": "Curran (2004)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "AvgSim(w, w') def= (1/K\u00b2) \u2211_{j=1}^{K} \u2211_{k=1}^{K} d(\u03c0_k(w), \u03c0_j(w')); MaxSim(w, w') def= max_{1\u2264j\u2264K, 1\u2264k\u2264K} d(\u03c0_k(w), \u03c0_j(w'))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where d(\u2022, \u2022) is a standard distributional similarity measure. In AvgSim, word similarity is computed as the average similarity of all pairs of prototype vectors; in MaxSim the similarity is the maximum over all pairwise prototype similarities. All results reported in this paper use cosine similarity, 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Cos(w, w') = (\u2211_{f\u2208F} I(w, f) \u00b7 I(w', f)) / (\u221a(\u2211_{f\u2208F} I(w, f)\u00b2) \u00b7 \u221a(\u2211_{f\u2208F} I(w', f)\u00b2))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We compare across two different feature functions, tf-idf weighting and \u03c7 2 weighting, chosen due to their ubiquity in the literature (Agirre et al., 2009; Curran, 2004) . In AvgSim, all prototype pairs contribute equally to the similarity computation, thus two words are judged as similar if many of their senses are similar. MaxSim, on the other hand, requires only a single pair of prototypes to be close for the words to be judged similar. Thus, MaxSim models the similarity of words that share only a single sense (e.g. bat and club) at the cost of lower robustness to noisy clusters that might be introduced when K is large.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 154, |
| "text": "(Agirre et al., 2009;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 155, |
| "end": 168, |
| "text": "Curran, 2004)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "When contextual information is available, AvgSim and MaxSim can be modified to produce more precise similarity computations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "AvgSimC(w, w') def= (1/K\u00b2) \u2211_{j=1}^{K} \u2211_{k=1}^{K} d_{c,w,k} \u00b7 d_{c',w',j} \u00b7 d(\u03c0_k(w), \u03c0_j(w')); MaxSimC(w, w') def= d(\u03c0\u0302(w), \u03c0\u0302(w')), where d_{c,w,k} def= d(v(c), \u03c0_k(w)) is the likelihood of context c belonging to cluster \u03c0_k(w), and \u03c0\u0302(w) def= \u03c0_{argmax_{1\u2264k\u2264K} d_{c,w,k}}(w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": ", the maximum likelihood cluster for w in context c. Thus, AvgSimC corresponds to soft cluster assignment, weighting each similarity term in AvgSim by the likelihood of the word contexts appearing in their respective clusters. MaxSimC corresponds to hard assignment, using only the most probable cluster assignment. Note that AvgSim and MaxSim can be thought of as special cases of AvgSimC and MaxSimC with uniform weight to each cluster; hence AvgSimC and MaxSimC can be used to compare words in context to isolated words as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Semantic Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We employed two corpora to train our models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "1. A snapshot of English Wikipedia taken on Sept. 29th, 2009. Wikitext markup is removed, as are articles with fewer than 100 words, leaving 2.8M articles with a total of 2.05B words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "2. The third edition English Gigaword corpus, with articles containing fewer than 100 words removed, leaving 6.6M articles and 3.9B words (Graff, 2003) .", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 151, |
| "text": "(Graff, 2003)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Wikipedia covers a wider range of sense distributions, whereas Gigaword contains only newswire text and tends to employ fewer senses of most ambiguous words. Our method outperforms baseline methods even on Gigaword, indicating its advantages even when the corpus covers few senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To evaluate the quality of various models, we first compared their lexical similarity measurements to human similarity judgements from the WordSim-353 data set (Finkelstein et al., 2001 ). This test corpus contains multiple human judgements on 353 word pairs, covering both monosemous and polysemous words, each rated on a 1-10 integer scale. Spearman's rank correlation (\u03c1) with average human judgements (Agirre et al., 2009) was used to measure the quality of various models. Figure 2 plots Spearman's \u03c1 on WordSim-353 against the number of clusters (K) for Wikipedia and Gigaword corpora, using pruned tf-idf and \u03c7 2 features. 2 In general pruned tf-idf features yield higher correlation than \u03c7 2 features. Using AvgSim, the multi-prototype approach (K > 1) yields higher correlation than the single-prototype approach (K = 1) across all corpora and feature types, achieving state-of-the-art results with pruned tf-idf features. This result is statistically significant in all cases for tf-idf and for K \u2208 [2, 10] on Wikipedia and K > 4 on Gigaword for \u03c7 2 features. 3 MaxSim yields similar performance when K < 10 but performance degrades as K increases.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 185, |
| "text": "(Finkelstein et al., 2001", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 405, |
| "end": 426, |
| "text": "(Agirre et al., 2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 478, |
| "end": 486, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Judging Semantic Similarity", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "It is possible to circumvent the model-selection problem (choosing the best value of K) by simply combining the prototypes from clusterings of different sizes. This approach represents words using both semantically broad and semantically tight prototypes, similar to hierarchical clustering. Table 1 and Figure 2 (squares) show the result of such a combined approach, where the prototypes for clusterings of size 2-5, 10, 20, 50, and 100 are unioned to form a single large prototype set. In general, this approach works about as well as picking the optimal value of K, even outperforming the single best cluster size for Wikipedia.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 292, |
| "end": 299, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 304, |
| "end": 322, |
| "text": "Figure 2 (squares)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Judging Semantic Similarity", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, we also compared our method to a pure exemplar approach, averaging similarity across all occurrence pairs. 4 Table 1 summarizes the results. The exemplar approach yields significantly higher correlation than the single prototype approach in all cases except Gigaword with tf-idf features (p < 0.05). Furthermore, it performs significantly worse than combined multi-prototype for tf-idf features, and does not differ significantly for \u03c7 2 features. Overall this result indicates that multi-prototype performs at least as well as exemplar in the worst case, and significantly outperforms it when using the best feature representation / corpus pair. Table 1 (Spearman correlation on the WordSim-353 dataset broken down by corpus and feature type; columns: prototype, exemplar, multi-prototype (AvgSim) with K = 5, K = 20, K = 50, and combined): Wikipedia tf-idf: 0.53\u00b10.02, 0.60\u00b10.06, 0.69\u00b10.02, 0.76\u00b10.01, 0.76\u00b10.01, 0.77\u00b10.01. Wikipedia \u03c7 2: 0.54\u00b10.03, 0.65\u00b10.07, 0.58\u00b10.02, 0.56\u00b10.02, 0.52\u00b10.03, 0.59\u00b10.04. Gigaword tf-idf: 0.49\u00b10.02, 0.48\u00b10.10, 0.64\u00b10.02, 0.61\u00b10.02, 0.61\u00b10.02, 0.62\u00b10.02. Gigaword \u03c7 2: 0.25\u00b10.03, 0.41\u00b10.14, 0.32\u00b10.03, 0.35\u00b10.03, 0.33\u00b10.03, 0.34\u00b10.03.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 740, |
| "end": 747, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Judging Semantic Similarity", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We next evaluated the multi-prototype approach on its ability to determine the most closely related words for a given target word (using the Wikipedia corpus with tf-idf features). The top k most similar words were computed for each prototype of each target word. Using a forced-choice setup, human subjects were asked to evaluate the quality of these near-synonyms relative to those produced by a single prototype. Raters on Amazon's Mechanical Turk (Snow et al., 2008) were asked to choose between two possible alternatives (one from a prototype model and one from a multi-prototype model) as being most similar to a given target word. The target words were presented either in isolation or in a sentential context randomly selected from the corpus. Table 2 lists the ambiguous words used for this task. They are grouped into homonyms (words with very distinct senses) and polysemes (words with related senses): homonymous: carrier, crane, cell, company, issue, interest, match, media, nature, party, practice, plant, racket, recess, reservation, rock, space, value; polysemous: cause, chance, journal, market, network, policy, power, production, series, trading, train. All words were chosen such that their usages occur within the same part of speech. In the non-contextual task, 79 unique raters completed 7,620 comparisons of which 72 were discarded due to poor performance on a known test set. 6 In the contextual task, 127 raters completed 9,930 comparisons of which 87 were discarded.", |
| "cite_spans": [ |
| { |
| "start": 657, |
| "end": 676, |
| "text": "(Snow et al., 2008)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1348, |
| "end": 1349, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 958, |
| "end": 965, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Near-Synonyms", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For the non-contextual case, Figure 3 (left) plots the fraction of raters preferring the multi-prototype prediction (using AvgSim) over that of a single prototype as the number of clusters is varied. Footnote 5: http://mturk.com. Footnote 6 (Rater reliability): The reliability of Mechanical Turk raters is quite variable, so we computed an accuracy score for each rater by including a control question with a known correct answer in each HIT. Control questions were generated by selecting a random word from WordNet 3.0 and including as possible choices a word in the same synset (correct answer) and a word in a synset with a high path distance (incorrect answer). Raters who got less than 50% of these control questions correct, or who spent too little time on the HIT, were discarded. When asked to choose between the single best word for", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 37, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Near-Synonyms", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "each method (top word), the multi-prototype prediction is chosen significantly more frequently (i.e. the result is above 0.5) when the number of clusters is small, but the two methods perform similarly for larger numbers of clusters (Wald test, \u03b1 = 0.05). Clustering more accurately identifies homonyms' clearly distinct senses and produces prototypes that better capture the different uses of these words. As a result, compared to using a single prototype, our approach produces better near-synonyms for homonyms than for polysemes. However, given the right number of clusters, it also produces better results for polysemous words. Figure 3: (left) Non-contextual near-synonym prediction: evaluation for isolated words, showing the fraction of raters preferring multi-prototype results vs. the number of clusters. Colored squares indicate performance when combining across clusterings. 95% confidence intervals computed using the Wald test. (right) Contextual near-synonym prediction: evaluation for words in a sentential context chosen either from the minority sense or the majority sense.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 82, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Near-Synonyms", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The near-synonym prediction task highlights one of the weaknesses of the multi-prototype approach: as the number of clusters increases, the number of occurrences assigned to each cluster decreases, increasing noise and resulting in some poor prototypes that mainly cover outliers. The word similarity task is somewhat robust to this phenomenon, but synonym prediction is more affected since only the top predicted choice is used. When raters are forced to choose between the top three predictions for each method (presented as top set in Figure 3 left), the effect of this noise is reduced and the multi-prototype approach remains dominant even for a large number of clusters. This indicates that although more clusters can capture finer-grained sense distinctions, they also can introduce noise.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 537, |
| "end": 551, |
| "text": "Figure 3 left)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Near-Synonyms", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "When presented with words in context ( Figure 3 right), 7 raters found no significant difference in the two methods for words used in their majority sense. 8 However, when a minority sense is presented (e.g. the \"prison\" sense of cell), raters prefer the choice predicted by the multi-prototype approach. This result is to be expected since the single prototype mainly reflects the majority sense, preventing it from predicting appropriate synonyms for a minority sense. Also, once again, the performance of the multi-prototype approach is better for homonyms than polysemes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 48, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Near-Synonyms", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Variance in pairwise prototype distances can help explain the variance in human similarity judgements for a given word pair. We evaluate this hypothesis empirically on WordSim-353 by computing the Spearman correlation between the variance of the per-cluster similarity computations,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "V[D], D def = {d(\u03c0 k (w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": ", \u03c0 j (w )) : 1 \u2264 k, j \u2264 K}, and the variance of the human annotations for that pair. Correlations for each dataset are shown in Figure 4 left. In general, we find a statistically significant negative correlation between these values using \u03c7 2 features, indicating that as the entropy of the pairwise cluster similarities increases (i.e., prototypes become more similar, and similarities become uniform), rater disagreement increases. This result is intuitive: if the occurrences of a particular word cannot be easily separated into coherent clusters (perhaps indicating high polysemy instead of homonymy), then human judgement will be naturally more difficult.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 129, |
| "end": 137, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Rater variance depends more directly on the actual word similarity: word pairs at the extreme ranges of similarity have significantly lower variance as raters are more certain. By removing word pairs with similarity judgements in the middle two quartile ranges (4.4 to 7.5) we find significantly higher variance correlation (Figure 4 right) . This result indicates that multi-prototype similarity variance accounts for a secondary effect separate from the primary effect that variance is naturally lower for ratings in extreme ranges.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 324, |
| "end": 340, |
| "text": "(Figure 4 right)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Although the entropy of the prototypes correlates with the variance of the human ratings, we find that the individual senses captured by each prototype do not correspond to human intuition for a given word, e.g. the \"hurricane\" sense of position in Figure 1 . This notion is evaluated empirically by computing the correlation between the predicted similarity us- Figure 4 : Plots of variance correlation; lower numbers indicate higher negative correlation, i.e. that prototype entropy predicts rater disagreement.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 257, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 363, |
| "end": 371, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "ing the contextual multi-prototype method and human similarity judgements for different usages of the same word. The Usage Similarity (USim) data set collected in Erk et al. (2009) provides such similarity scores from human raters. However, we find no evidence for correlation between USim scores and their corresponding prototype similarity scores (\u03c1 = 0.04), indicating that prototype vectors may not correspond well to human senses. Table 3 compares the inferred synonyms for several target words, generally demonstrating the ability of the multi-prototype model to improve the precision of inferred near-synonyms (e.g. in the case of singer or need) as well as its ability to include synonyms from less frequent senses (e.g., the experiment sense of research or the verify sense of prove). However, there are a number of ways it could be improved:", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 180, |
| "text": "Erk et al. (2009)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 436, |
| "end": 443, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicting Variation in Human Ratings", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Feature representations: Multiple prototypes improve Spearman correlation on WordSim-353 compared to previous methods using the same underlying representation (Agirre et al., 2009) . However we have not yet evaluated its performance when using more powerful feature representations such those based on Latent or Explicit Semantic Analysis (Deerwester et al., 1990; Gabrilovich and Markovitch, 2007) . Due to its modularity, the multiprototype approach can easily incorporate such advances in order to further improve its effectiveness. Table 3 : Examples of the top 5 inferred nearsynonyms using the single-and multi-prototype approaches (with results merged). In general such clustering improves the precision and coverage of the inferred near-synonyms.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 180, |
| "text": "(Agirre et al., 2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 339, |
| "end": 364, |
| "text": "(Deerwester et al., 1990;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 365, |
| "end": 398, |
| "text": "Gabrilovich and Markovitch, 2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 536, |
| "end": 543, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The success of the combined approach indicates that the optimal number of clusters may vary per word. A more principled approach to selecting the number of prototypes per word is to employ a clustering model with infinite capacity, e.g. the Dirichlet Process Mixture Model (Rasmussen, 2000) . Such a model would allow naturally more polysemous words to adopt more flexible representations.", |
| "cite_spans": [ |
| { |
| "start": 273, |
| "end": 290, |
| "text": "(Rasmussen, 2000)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nonparametric clustering:", |
| "sec_num": null |
| }, |
| { |
| "text": "Cluster similarity metrics: Besides AvgSim and MaxSim, there are many similarity metrics over mixture models, e.g. KL-divergence, which may correlate better with human similarity judgements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nonparametric clustering:", |
| "sec_num": null |
| }, |
| { |
| "text": "Comparing to traditional senses: Compared to WordNet, our best-performing clusterings are significantly more fine-grained. Furthermore, they often do not correspond to agreed upon semantic distinctions (e.g., the \"hurricane\" sense of position in Fig. 1) . We posit that the finer-grained senses actually capture useful aspects of word meaning, leading to better correlation with WordSim-353. However, it would be good to compare prototypes learned from supervised sense inventories to prototypes produced by automatic clustering.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 246, |
| "end": 253, |
| "text": "Fig. 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Nonparametric clustering:", |
| "sec_num": null |
| }, |
| { |
| "text": "Joint model: The current method independently clusters the contexts of each word, so the senses discovered for w cannot influence the senses discovered for w = w. Sharing statistical strength across similar words could yield better results for rarer words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nonparametric clustering:", |
| "sec_num": null |
| }, |
| { |
| "text": "We presented a resource-light model for vectorspace word meaning that represents words as collections of prototype vectors, naturally accounting for lexical ambiguity. The multi-prototype approach uses word sense discovery to partition a word's contexts and construct \"sense specific\" prototypes for each cluster. Doing so significantly increases the accuracy of lexical-similarity computation as demonstrated by improved correlation with human similarity judgements and generation of better near synonyms according to human evaluators. Furthermore, we show that, although performance is sensitive to the number of prototypes, combining prototypes across a large range of clusterings performs nearly as well as the ex-post best clustering. Finally, variance in the prototype similarities is found to correlate with inter-annotator disagreement, suggesting psychological plausibility.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The main results also hold for weighted Jaccard similarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "(Feature pruning) We find that results using tf-idf features are extremely sensitive to feature pruning while \u03c7 2 features are more robust. In all experiments we prune tf-idf features by their overall weight, taking the top 5000. This setting was found to optimize the performance of the single-prototype approach.3 Significance is calculated using the large-sample approximation of the Spearman rank test; (p < 0.05).4 Averaging across all pairs was found to yield higher correlation than averaging over the most similar pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Results for the multi-prototype method are generated using AvgSimC (soft assignment) as this was found to significantly outperform MaxSimC.8 Sense frequency determined using Google; senses labeled manually by trained human evaluators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Katrin Erk for helpful discussions and making the USim data set available. This work was supported by an NSF Graduate Research Fellowship and a Google Research Award. Experiments were run on the Mastodon Cluster, provided by NSF Grant EIA-0303609.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Word Sense Disambiguation: Algorithms and Applications (Text, Speech and Language Technology)", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Phillip", |
| "middle": [], |
| "last": "Edmond", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre and Phillip Edmond. 2006. Word Sense Disambiguation: Algorithms and Applications (Text, Speech and Language Technology). Springer-Verlag New York, Inc., Secaucus, NJ, USA.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A study on similarity and relatedness using distributional and WordNet-based approaches", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Kravalova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of NAACL-HLT-09", |
| "volume": "", |
| "issue": "", |
| "pages": "19--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proc. of NAACL- HLT-09, pages 19-27.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Categorization as probability density estimation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Ashby", |
| "suffix": "" |
| }, |
| { |
| "first": "Leola", |
| "middle": [ |
| "A" |
| ], |
| "last": "Alfonso-Reese", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "J. Math. Psychol", |
| "volume": "39", |
| "issue": "2", |
| "pages": "216--233", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Gregory Ashby and Leola A. Alfonso-Reese. 1995. Categorization as probability density estimation. J. Math. Psychol., 39(2):216-233.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Distributional clustering of words for text classification", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "K" |
| ], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of 21st International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "96--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Douglas Baker and Andrew K. McCallum. 1998. Dis- tributional clustering of words for text classification. In Proceedings of 21st International ACM SIGIR Con- ference on Research and Development in Information Retrieval, pages 96-103.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Clustering on the unit hypersphere using von Mises-Fisher distributions", |
| "authors": [ |
| { |
| "first": "Arindam", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Inderjit", |
| "middle": [], |
| "last": "Dhillon", |
| "suffix": "" |
| }, |
| { |
| "first": "Joydeep", |
| "middle": [], |
| "last": "Ghosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Suvrit", |
| "middle": [], |
| "last": "Sra", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "6", |
| "issue": "", |
| "pages": "1345--1382", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arindam Banerjee, Inderjit Dhillon, Joydeep Ghosh, and Suvrit Sra. 2005. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Ma- chine Learning Research, 6:1345-1382.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Evaluating wordnet-based measures of lexical semantic relatedness", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Budanitsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computational Linguistics", |
| "volume": "32", |
| "issue": "1", |
| "pages": "13--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. Evalu- ating wordnet-based measures of lexical semantic re- latedness. Computational Linguistics, 32(1):13-47.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Improvements in automatic thesaurus extraction", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Curran", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition", |
| "volume": "", |
| "issue": "", |
| "pages": "59--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James R. Curran and Marc Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the ACL-02 workshop on Unsupervised lexical acqui- sition, pages 59-66.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "From Distributional to Semantic Similarity", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James R. Curran. 2004. From Distributional to Seman- tic Similarity. Ph.D. thesis, University of Edinburgh. College of Science.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Indexing by latent semantic analysis", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Scott", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Deerwester", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [ |
| "W" |
| ], |
| "last": "Dumais", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Furnas", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "A" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Harshman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of the American Society for Information Science", |
| "volume": "41", |
| "issue": "", |
| "pages": "391--407", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott C. Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American Society for Information Science, 41:391-407.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Concept decompositions for large sparse text data using clustering", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Inderjit", |
| "suffix": "" |
| }, |
| { |
| "first": "Dharmendra", |
| "middle": [ |
| "S" |
| ], |
| "last": "Dhillon", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Machine Learning", |
| "volume": "42", |
| "issue": "", |
| "pages": "143--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Inderjit S. Dhillon and Dharmendra S. Modha. 2001. Concept decompositions for large sparse text data us- ing clustering. Machine Learning, 42:143-175.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Investigations on word senses and word usages", |
| "authors": [ |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| }, |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Nicholas Gaylord Investigations on Word Senses, and Word Usages", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katrin Erk, Diana McCarthy, Nicholas Gaylord Investi- gations on Word Senses, and Word Usages. 2009. In- vestigations on word senses and word usages. In Proc. of ACL-09.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A simple, similarity-based model for selectional preferences", |
| "authors": [ |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Association for Computer Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Association for Computer Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Placing search in context: the concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of WWW-01", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: the concept revisited. In Proc. of WWW-01, pages 406-414, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Computing semantic relatedness using Wikipedia-based explicit semantic analysis", |
| "authors": [ |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaul", |
| "middle": [], |
| "last": "Markovitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of IJCAI-07", |
| "volume": "", |
| "issue": "", |
| "pages": "1606--1611", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Com- puting semantic relatedness using Wikipedia-based ex- plicit semantic analysis. In Proc. of IJCAI-07, pages 1606-1611.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "English Gigaword. Linguistic Data Consortium, Philadephia", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Graff", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Graff. 2003. English Gigaword. Linguistic Data Consortium, Philadephia.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Unifying rational models of categorization via the hierarchical Dirichlet process", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [ |
| "R" |
| ], |
| "last": "Canini", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [ |
| "N" |
| ], |
| "last": "Sanborn", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "J" |
| ], |
| "last": "Navarro", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of CogSci-07", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom L. Griffiths, Kevin. R. Canini, Adam N. Sanborn, and Daniel. J. Navarro. 2007. Unifying rational mod- els of categorization via the hierarchical Dirichlet pro- cess. In Proc. of CogSci-07.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Distributional structure. Word", |
| "authors": [ |
| { |
| "first": "Zellig", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| } |
| ], |
| "year": 1954, |
| "venue": "", |
| "volume": "10", |
| "issue": "", |
| "pages": "146--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Measures of distributional similarity", |
| "authors": [ |
| { |
| "first": "Lillian", |
| "middle": [ |
| "Lee" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "37th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lillian Lee. 1999. Measures of distributional similarity. In 37th Annual Meeting of the Association for Compu- tational Linguistics, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Concept discovery from text", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of COLING-02", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin and Patrick Pantel. 2002. Concept discovery from text. In Proc. of COLING-02, pages 1-7.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Identifying synonyms among distributionally similar words", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaojun", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Lijuan", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Interational Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1492--1493", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distributionally similar words. In Proceedings of the Interational Joint Conference on Artificial Intelligence, pages 1492- 1493. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "SUSTAIN: A network model of category learning", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Bradley", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Love", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [ |
| "M" |
| ], |
| "last": "Medin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gureckis", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Psych. Review", |
| "volume": "111", |
| "issue": "2", |
| "pages": "309--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bradley C. Love, Douglas L. Medin, and Todd M. Gureckis. 2004. SUSTAIN: A network model of cat- egory learning. Psych. Review, 111(2):309-332.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Towards a theory of semantic space", |
| "authors": [ |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 23rd Annual Meeting of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "576--581", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Will Lowe. 2001. Towards a theory of semantic space. In Proceedings of the 23rd Annual Meeting of the Cog- nitive Science Society, pages 576-581.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Discovering word senses from text", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of SIGKDD-02", |
| "volume": "", |
| "issue": "", |
| "pages": "613--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proc. of SIGKDD-02, pages 613- 619, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Distributional clustering of English words", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "N" |
| ], |
| "last": "Fernando", |
| "suffix": "" |
| }, |
| { |
| "first": "Naftali", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Tishby", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL-93)", |
| "volume": "", |
| "issue": "", |
| "pages": "183--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernando C. N. Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Associ- ation for Computational Linguistics (ACL-93), pages 183-190, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Random walks for text semantic similarity", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ramage", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [ |
| "N" |
| ], |
| "last": "Rafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)", |
| "volume": "", |
| "issue": "", |
| "pages": "23--31", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Ramage, Anna N. Rafferty, and Christopher D. Manning. 2009. Random walks for text seman- tic similarity. In Proc. of the 2009 Workshop on Graph-based Methods for Natural Language Process- ing (TextGraphs-4), pages 23-31.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "The infinite Gaussian mixture model", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Carl", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rasmussen", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "554--560", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carl E. Rasmussen. 2000. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems, pages 554-560. MIT Press.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Mixture models of categorization", |
| "authors": [ |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Rosseel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "J. Math. Psychol", |
| "volume": "46", |
| "issue": "2", |
| "pages": "178--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yves Rosseel. 2002. Mixture models of categorization. J. Math. Psychol., 46(2):178-210.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Word sense disambiguation and information retrieval", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Sanderson", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proc. of SIGIR-94", |
| "volume": "", |
| "issue": "", |
| "pages": "142--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Sanderson. 1994. Word sense disambiguation and information retrieval. In Proc. of SIGIR-94, pages 142-151.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Automatic word sense discrimination", |
| "authors": [ |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "97--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Cheap and fast-but is it good? Evaluating non-expert annotations for natural language tasks", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of EMNLP-08", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast-but is it good? Evaluating non-expert annotations for natural language tasks. In Proc. of EMNLP-08.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Similarity, separability, and the triangle inequality", |
| "authors": [ |
| { |
| "first": "Amos", |
| "middle": [], |
| "last": "Tversky", |
| "suffix": "" |
| }, |
| { |
| "first": "Itamar", |
| "middle": [], |
| "last": "Gati", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "Psychological Review", |
| "volume": "89", |
| "issue": "", |
| "pages": "123--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amos Tversky and Itamar Gati. 1982. Similarity, separability, and the triangle inequality. Psychological Review, 89(2):123-154.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A robust and extensible exemplar-based model of thematic fit", |
| "authors": [ |
| { |
| "first": "Bram", |
| "middle": [], |
| "last": "Vandekerckhove", |
| "suffix": "" |
| }, |
| { |
| "first": "Dominiek", |
| "middle": [], |
| "last": "Sandra", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of EACL 2009", |
| "volume": "", |
| "issue": "", |
| "pages": "826--834", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bram Vandekerckhove, Dominiek Sandra, and Walter Daelemans. 2009. A robust and extensible exemplar-based model of thematic fit. In Proc. of EACL 2009, pages 826-834. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "WordSim-353 rank correlation vs. number of clusters (log scale) for both the Wikipedia (left) and Gigaword (right) corpora. Horizontal bars show the performance of single-prototype. Squares indicate performance when combining across clusterings. Error bars depict 95% confidence intervals using the Spearman test.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "... chose Zbigniew Brzezinski for the position of ... ... thus the symbol's position on his clothing was ... ... writes call options against the stock position ... ... offered a position with ...", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>... a position he would hold until his retirement in ... ... endanger their position as ... ... on the chart of the vessel's current position ... ... not in a position to help ...</td><td>single prototype</td><td>(cluster#1) location, importance, bombing (cluster#2) post, appointment, role, job (cluster#3) intensity, winds, hour, gust (cluster#4) lineman, tackle, role, scorer</td></tr><tr><td>(collect contexts)</td><td>(cluster)</td><td>(similarity)</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "text": "Words used in predicting near synonyms.", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |