| { |
| "paper_id": "S17-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:29:48.181296Z" |
| }, |
| "title": "A Mixture Model for Learning Multi-Sense Word Embeddings", |
| "authors": [ |
| { |
| "first": "Dai", |
| "middle": [ |
| "Quoc" |
| ], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Saarland University", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Dat", |
| "middle": [ |
| "Quoc" |
| ], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Macquarie University", |
| "location": { |
| "country": "Australia" |
| } |
| }, |
| "email": "dat.nguyen@students.mq.edu.au" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Modi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Saarland University", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "ashutosh@coli.uni-saarland.de" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Saarland University", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Saarland University", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "pinkal@coli.uni-saarland.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes the previous works in that it allows to induce different weights of different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.", |
| "pdf_parse": { |
| "paper_id": "S17-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes the previous works in that it allows to induce different weights of different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Word embeddings have shown to be useful in various NLP tasks such as sentiment analysis, topic models, script learning, machine translation, sequence labeling and parsing Sutskever et al., 2014; Modi and Titov, 2014; Nguyen et al., 2015a,b; Modi, 2016; Ma and Hovy, 2016; Nguyen et al., 2017; Modi et al., 2017) . A word embedding captures the syntactic and semantic properties of a word by representing the word in a form of a real-valued vector (Mikolov et al., 2013a,b; Pennington et al., 2014; Levy and Goldberg, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 194, |
| "text": "Sutskever et al., 2014;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 195, |
| "end": 216, |
| "text": "Modi and Titov, 2014;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 217, |
| "end": 240, |
| "text": "Nguyen et al., 2015a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 241, |
| "end": 252, |
| "text": "Modi, 2016;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 253, |
| "end": 271, |
| "text": "Ma and Hovy, 2016;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 272, |
| "end": 292, |
| "text": "Nguyen et al., 2017;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 293, |
| "end": 311, |
| "text": "Modi et al., 2017)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 447, |
| "end": 472, |
| "text": "(Mikolov et al., 2013a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 473, |
| "end": 497, |
| "text": "Pennington et al., 2014;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 498, |
| "end": 522, |
| "text": "Levy and Goldberg, 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, usually word embedding models do not take into account lexical ambiguity. For example, the word bank is usually represented by a single vector representation for all senses including sloping land and financial institution. Recently, approaches have been proposed to learn multi-sense word embeddings, where each sense of a word corresponds to a sense-specific embedding. Reisinger and Mooney (2010) , Huang et al. (2012) and Wu and Giles (2015) proposed methods to cluster the contexts of each word and then using cluster centroids as vector representations for word senses. Neelakantan et al. (2014) , Tian et al. (2014) , Li and Jurafsky (2015) and extended Word2Vec models (Mikolov et al., 2013a,b) to learn a vector representation for each sense of a word. , Iacobacci et al. (2015) and Flekova and Gurevych (2016) performed word sense induction using external resources (e.g., WordNet, Babel-Net) and then learned sense embeddings using the Word2Vec models. Rothe and Sch\u00fctze (2015) and Pilehvar and Collier (2016) presented methods using pre-trained word embeddings to learn embeddings from WordNet synsets. , Liu et al. (2015b) , Liu et al. (2015a) and Zhang and Zhong (2016) directly opt the Word2Vec Skipgram model (Mikolov et al., 2013b) for learning the embeddings of words and topics on a topicassigned corpus.", |
| "cite_spans": [ |
| { |
| "start": 380, |
| "end": 407, |
| "text": "Reisinger and Mooney (2010)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 410, |
| "end": 429, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 434, |
| "end": 453, |
| "text": "Wu and Giles (2015)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 584, |
| "end": 609, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 612, |
| "end": 630, |
| "text": "Tian et al. (2014)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 633, |
| "end": 655, |
| "text": "Li and Jurafsky (2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 685, |
| "end": 710, |
| "text": "(Mikolov et al., 2013a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 772, |
| "end": 795, |
| "text": "Iacobacci et al. (2015)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 800, |
| "end": 827, |
| "text": "Flekova and Gurevych (2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 972, |
| "end": 996, |
| "text": "Rothe and Sch\u00fctze (2015)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1001, |
| "end": 1028, |
| "text": "Pilehvar and Collier (2016)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1125, |
| "end": 1143, |
| "text": "Liu et al. (2015b)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1146, |
| "end": 1164, |
| "text": "Liu et al. (2015a)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1169, |
| "end": 1191, |
| "text": "Zhang and Zhong (2016)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 1233, |
| "end": 1256, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One issue in these previous works is that they assign the same weight to every sense of a word. The central assumption of our work is that each sense of a word given a context, should correspond to a mixture of weights reflecting different association degrees of the word with multiple senses in the context. The mixture weights will help to model word meaning better.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose a new model for learning Multi-Sense Word Embeddings (MSWE). Our MSWE model learns vector representations of a word based on a mixture of its sense representations. The key difference between MSWE and other models is that we induce the weights of senses while jointly learning the word and sense embeddings. Specifically, we train a topic model (Blei et al., 2003) to obtain the topic-to-word and document-to-topic probability distributions which are then used to infer the weights of topics. We use these weights to define a compositional vector representation for each target word to predict its context words. MSWE thus is different from the topic-based models Liu et al., 2015b,a; Zhang and Zhong, 2016) , in which we do not use the topic assignments when jointly learning vector representations of words and topics. Here we not only learn vectors based on the most suitable topic of a word given its context, but we also take into consideration all possible meanings of the word.", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 390, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 690, |
| "end": 710, |
| "text": "Liu et al., 2015b,a;", |
| "ref_id": null |
| }, |
| { |
| "start": 711, |
| "end": 733, |
| "text": "Zhang and Zhong, 2016)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main contributions of our study are: (i) We introduce a mixture model for learning word and sense embeddings (MSWE) by inducing mixture weights of word senses. (ii) We show that MSWE performs better than the baseline Word2Vec Skipgram and other embedding models on the word analogy task (Mikolov et al., 2013a) and the word similarity task (Reisinger and Mooney, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 314, |
| "text": "(Mikolov et al., 2013a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 344, |
| "end": 372, |
| "text": "(Reisinger and Mooney, 2010)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we present the mixture model for learning multi-sense word embeddings. Here we treat topics as senses. The model learns a representation for each word using a mixture of its topical representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a number of topics and a corpus D of documents d = {w_{d,1}, w_{d,2}, ..., w_{d,M_d}},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "we apply a topic model (Blei et al., 2003) to obtain the topicto-word Pr(w|t) and document-to-topic Pr(t|d) probability distributions. We then infer a weight for the m th word w d,m with topic t in document d:", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 42, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03bb d,m,t = Pr(w d,m |t) \u00d7 Pr(t|d)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We define two MSWE variants: MSWE-1 learns vectors for words based on the most suitable topic given document d while MSWE-2 marginalizes over all senses of a word to take into account all possible senses of the word:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "MSWE-1: s w d,m = v w d,m + \u03bb d,m,t \u00d7 v t 1 + \u03bb d,m,t MSWE-2: s w d,m = v w d,m + T t=1 \u03bb d,m,t \u00d7 v t 1 + T t=1 \u03bb d,m,t", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where s_{w_{d,m}} is the compositional vector representation of the m-th word w_{d,m} and the topics in document d; v_w is the target vector representation of a word type w in vocabulary V; v_t is the vector representation of topic t; T is the number of topics; \u03bb_{d,m,t} is defined as in Equation 1; and in MSWE-1 we define t = arg max_t \u03bb_{d,m,t}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We learn representations by minimizing the following negative log-likelihood function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "L = \u2212 d\u2208D M d m=1 \u2212k\u2264j\u2264k j =0 log Pr(\u1e7d w d,m+j |s w d,m ) (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where the m th word w d,m in document d is a target word while the (m + j) th word w d,m+j in document d is a context word of w d,m and k is the context size. In addition,\u1e7d w is the context vector representation of the word type w. The probability", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Pr(\u1e7d w d,m+j |s w d,m )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "is defined using the softmax function as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Pr(\u1e7d w d,m+j |s w d,m ) = exp(\u1e7d T w d,m+j s w d,m ) c \u2208V exp(\u1e7d T c s w d,m )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Since computing log Pr(\u1e7d w d,m+j |s w d,m ) is expensive for each training instance, we approximate log Pr(\u1e7d w d,m+j |s w d,m ) in Equation 2 with the following negative-sampling objective (Mikolov et al., 2013b) :", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 212, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "O d,m,m+j = log \u03c3 \u1e7d T w d,m+j s w d,m + K i=1 log \u03c3 \u2212\u1e7d T c i s w d,m", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where each word c i is sampled from a noise distribution. 1 In fact, MSWE can be viewed as a generalization of the well-known Word2Vec Skip-gram model with negative sampling (Mikolov et al., 2013b) where all the mixture weights \u03bb d,m,t are set to zero. The models are trained using Stochastic Gradient Descent (SGD).", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 59, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 174, |
| "end": 197, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The mixture model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We evaluate MSWE on two different tasks: word similarity and word analogy. We also provide experimental results obtained by the baseline Word2Vec Skip-gram model and other previous works.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Note that not all previous results are mentioned in this paper for comparison because the training corpora used in most previous research work are much larger than ours Li and Jurafsky, 2015; Schwartz et al., 2015; Levy et al., 2015) . Also there are differences in the pre-processing steps that could affect the results. We could also improve obtained results by using a larger training corpus, but this is not central point of our paper. The objective of our paper is that the embeddings of topic and word can be combined into a single mixture model, leading to good improvements as established empirically.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 191, |
| "text": "Li and Jurafsky, 2015;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 192, |
| "end": 214, |
| "text": "Schwartz et al., 2015;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 215, |
| "end": 233, |
| "text": "Levy et al., 2015)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Following Huang et al. (2012) and Neelakantan et al. (2014) , we use the Wesbury Lab Wikipedia corpus (Shaoul and Westbury, 2010) containing over 2M articles with about 990M words for training. In the preprocessing step, texts are lowercased and tokenized, numbers are mapped to 0, and punctuation marks are removed. We extract a vocabulary of 200,000 most frequent word tokens from the pre-processed corpus. Words not occurring in the vocabulary are mapped to a special token UNK, in which we use the embedding of UNK for unknown words in the benchmark datasets.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 29, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 34, |
| "end": 59, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 102, |
| "end": 129, |
| "text": "(Shaoul and Westbury, 2010)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We firstly use a small subset extracted from the WS353 dataset (Finkelstein et al., 2002) to tune the hyper-parameters of the baseline Word2Vec Skip-gram model for the word similarity task (see Section 3.2 for the task definition). We then directly use the tuned hyper-parameters for our MSWE variants. Vector size is also a hyperparameter. While some approaches use a higher number of dimensions to obtain better results, we fix the vector size to be 300 as used by the baseline for a fair comparison. The vanilla Latent Dirichlet Allocation (LDA) topic model (Blei et al., 2003) is not scalable to a very large corpus, so we explore faster online topic models developed for large corpora. We train the online LDA topic model (Hoffman et al., 2010) on the training corpus, and use the output of this topic model to compute the mixture weights as in Equation 1. 2 We also use the same WS353 subset to tune the numbers of topics T \u2208 {50, 100, 200, 300, 400}. We find that the most suitable numbers are T = 50 and T = 200 then used for all our experiments. Here we learn 300-dimensional embeddings with the fixed context size k = 5 (in Equation 2) and K = 10 (in Equation 3) as used by the baseline. During training, we randomly initialize model parameters (i.e. word and topic embeddings) and then learn them by using SGD with the initial learning rate of 0.01. ", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 89, |
| "text": "(Finkelstein et al., 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 561, |
| "end": 580, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 862, |
| "end": 863, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The word similarity task evaluates the quality of word embedding models (Reisinger and Mooney, 2010) . For a given dataset of word pairs, the evaluation is done by calculating correlation between the similarity scores of corresponding word embedding pairs with the human judgment scores. Higher Spearman's rank correlation (\u03c1) reflects better word embedding model. We evaluate MSWE on standard datasets (as given in Table 1 ) for the word similarity evaluation task.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 100, |
| "text": "(Reisinger and Mooney, 2010)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 416, |
| "end": 423, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Following Reisinger and Mooney (2010) , Huang et al. (2012) , Neelakantan et al. (2014) , we compute the similarity scores for a pair of words (w, w ) with or without their respective contexts (c, c ) as:", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 37, |
| "text": "Reisinger and Mooney (2010)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 40, |
| "end": 59, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 62, |
| "end": 87, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "GlobalSim w, w = cos (vw, v w ) AvgSim w, w = 1 T 2 T t=1 T t =1 cos (vw,t, v w ,t ) AvgSimC w, w = 1 T 2 T t=1 T t =1 \u03b4 (vw,t, vc) \u00d7 \u03b4 (v w ,t , v c ) \u00d7 cos (vw,t, v w ,t )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where v w is the vector representation of the word w, v w,t is the multiple representation of the word w and the topic t, v c is the vector representation of the context c of the word w. And cos (v, v ) is the cosine similarity between two vectors v and v . For our experiments, we set Table 2 : Spearman's rank correlation (\u03c1 \u00d7 100) for the word similarity task when using GlobalSim. Subscripts 50 and 200 denote the online LDA topic model trained with T = 50 and T = 200 topics, respectively. denotes that our best score is significantly higher than the score of the baseline (with p < 0.05, online toolkit from http: //www.philippsinger.info/?p=347). Scores in bold and underline are the best and second best scores.", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 202, |
| "text": "(v, v )", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 286, |
| "end": 293, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "v w,t = v w \u2295 (Pr(w|t) \u00d7 v t ) and v c = 1 |c| w\u2208c v w \u2295 ( t Pr (t|c) \u00d7 v t ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "while AvgSim considers multiple representations to capture different meanings (i.e. topics) and usages of a word. AvgSimC generalizes AvgSim by taking into account the likelihood", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u03b4 (v w,t , v c ) that word w takes topic t given context c. \u03b4 (v, v )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "is the inverse of the cosine distance from v to v (Huang et al., 2012; Neelakantan et al., 2014) . Table 2 compares the evaluation results of MSWE with results reported in prior work on the standard word similarity task when using GlobalSim. We use subscripts 50 and 200 to denote the topic model trained with T = 50 and T = 200 topics, respectively. Table 2 shows that our model outperforms the baseline Word2Vec Skip-gram model (in fifth row from bottom). Specifically, on the RW dataset, MSWE obtains a significant improvement of 2.92 in the Spearman's rank correlation (which is about 8.5% relative improvement). Compared to the published results, MSWE obtains the highest accuracy on the RW, SCWS, WS353 and MEN datasets, and achieves the second highest result on the SIMLEX dataset. These indicate that MSWE learns better representations for words taking into account different meanings.", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 70, |
| "text": "(Huang et al., 2012;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 71, |
| "end": 96, |
| "text": "Neelakantan et al., 2014)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 99, |
| "end": 106, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 351, |
| "end": 358, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We evaluate our model MSWE by using AvgSim and AvgSimC on the benchmark SCWS dataset", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results for contextual word similarity", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "AvgSim AvgSimC Huang et al. (2012) 62.8 65.7 Neelakantan et al. (2014) 67.3 69.3 66.2 68.9 65.7 66.4 Wu and Giles 2015-66.4 Jauhar et al. 2015-65.7 Cheng and Kartsaklis (2015) 62.5 - Iacobacci et al. (2015) 62.4 -Cheng et al. 2015 which considers effects of the contextual information on the word similarity task. As shown in Table 3, MSWE scores better than the closely related model proposed by and generally obtains good results for this context sensitive dataset. Although we produce better scores than Neelakantan et al. (2014) and when using GlobalSim, we are outperformed by them when using AvgSim and AvgSimC. Neelakantan et al. (2014) clustered the embeddings of the context words around each target word to predict its sense and Chen et al. (2014) used pretrained word embeddings to initialize vector representations of senses taken from WordNet, while we use a fixed number of topics as senses for words in MSWE.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 34, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 45, |
| "end": 70, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 183, |
| "end": 206, |
| "text": "Iacobacci et al. (2015)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 507, |
| "end": 532, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": null |
| }, |
| { |
| "text": "We evaluate the embedding models on the word analogy task introduced by Mikolov et al. (2013a). The task aims to answer questions in the form of \"a is to b as c is to ?\", denoted as \"a : b \u2192 c : ?\" (e.g., \"Hanoi : Vietnam \u2192 Bern : ?\"). There are 8,869 semantic and 10,675 syntactic questions grouped into 14 categories. Each question is answered by finding the most suitable word closest to \"v b \u2212 v a + v c \" measured by the cosine similarity. The answer is correct only if the found closest word is exactly the same as the gold-standard (correct) one for the question. We report accuracies in Table 4 and show that MSWE achieves better results in comparison with the baseline Word2Vec Skip-gram. In particular, MSWE reaches the accuracies of around 69.7% Model Accuracy (%) Pennington et al. (2014) 70.3 68.0 Neelakantan et al. (2014) 64.0 Ghannay et al. (2016) 62. Table 4 : Accuracies for the word analogy task. All our results are significantly higher than the result of Word2Vec Skip-gram (with two-tail p < 0.001 using McNemar's test). Pennington et al. (2014) used a larger training corpus of 1.6B words.", |
| "cite_spans": [ |
| { |
| "start": 776, |
| "end": 800, |
| "text": "Pennington et al. (2014)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 811, |
| "end": 836, |
| "text": "Neelakantan et al. (2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 842, |
| "end": 863, |
| "text": "Ghannay et al. (2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1043, |
| "end": 1067, |
| "text": "Pennington et al. (2014)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 595, |
| "end": 602, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 868, |
| "end": 875, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Analogy", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "which is higher than the accuracy of 68.6% obtained by Word2Vec Skip-gram.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Analogy", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In this paper, we described a mixture model for learning multi-sense embeddings. Our model induces mixture weights to represent a word given context based on a mixture of its sense representations. The results show that our model scores better than Word2Vec, and produces highly competitive results on the standard evaluation tasks. In future work, we will explore better methods for taking into account the contextual information. We also plan to explore different approaches to compute the mixture weights in our model. For example, if there is a large sense-annotated corpus available for training, the mixture weights could be defined based on the frequency (sense-count) distributions, instead of using the probability distributions produced by a topic model. Furthermore, it is possible to consider the weights of senses as additional model parameters to be then learned during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We use an unigram distribution raised to the 3/4 power(Mikolov et al., 2013b) as the noise distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use default parameters in gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) for the online LDA model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was funded by the German Research Foundation (DFG) as part of SFB 1102 \"Information Density and Linguistic Encoding\". We would like to thank anonymous reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 238-247.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Latent Dirichlet Allocation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research 3:993-1022.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Multimodal distributional semantics", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam", |
| "middle": [ |
| "Khanh" |
| ], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "49", |
| "issue": "", |
| "pages": "1--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research 49:1-47.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Improving distributed representation of word sense via wordnet gloss composition and context clustering", |
| "authors": [ |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruifeng", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "15--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tao Chen, Ruifeng Xu, Yulan He, and Xuan Wang. 2015. Improving distributed representation of word sense via wordnet gloss composition and context clustering. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers). pages 15-20.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A unified model for word sense representation and disambiguation", |
| "authors": [ |
| { |
| "first": "Xinxiong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1025--1035", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP). pages 1025-1035.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Syntaxaware multi-sense word embeddings for deep compositional models of meaning", |
| "authors": [ |
| { |
| "first": "Jianpeng", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dimitri", |
| "middle": [], |
| "last": "Kartsaklis", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1531--1542", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jianpeng Cheng and Dimitri Kartsaklis. 2015. Syntax- aware multi-sense word embeddings for deep com- positional models of meaning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1531-1542.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Contextual text understanding in distributional semantic space", |
| "authors": [ |
| { |
| "first": "Jianpeng", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhongyuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ji-Rong", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management", |
| "volume": "", |
| "issue": "", |
| "pages": "133--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jianpeng Cheng, Zhongyuan Wang, Ji-Rong Wen, Jun Yan, and Zheng Chen. 2015. Contextual text under- standing in distributional semantic space. In Pro- ceedings of the 24th ACM International on Confer- ence on Information and Knowledge Management. pages 133-142.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ACM Transactions on Information Systems", |
| "volume": "20", |
| "issue": "", |
| "pages": "116--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems 20:116-131.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization", |
| "authors": [ |
| { |
| "first": "Lucie", |
| "middle": [], |
| "last": "Flekova", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2029--2041", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucie Flekova and Iryna Gurevych. 2016. Supersense embeddings: A unified model for supersense inter- pretation, prediction, and utilization. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers). pages 2029-2041.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Word embedding evaluation and combination", |
| "authors": [ |
| { |
| "first": "Sahar", |
| "middle": [], |
| "last": "Ghannay", |
| "suffix": "" |
| }, |
| { |
| "first": "Benoit", |
| "middle": [], |
| "last": "Favre", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Estve", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathalie", |
| "middle": [], |
| "last": "Camelin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sahar Ghannay, Benoit Favre, Yannick Estve, and Nathalie Camelin. 2016. Word embedding evalua- tion and combination. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Simlex-999: Evaluating semantic models with genuine similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "", |
| "pages": "665--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with gen- uine similarity estimation. Computational Linguis- tics 41:665-695.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Online learning for latent dirichlet allocation", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Hoffman", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [ |
| "R" |
| ], |
| "last": "Bach", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Advances in Neural Information Processing Systems 23", |
| "volume": "", |
| "issue": "", |
| "pages": "856--864", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Hoffman, Francis R. Bach, and David M. Blei. 2010. Online learning for latent dirichlet al- location. In Advances in Neural Information Pro- cessing Systems 23. pages 856-864.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Improving word representations via global context and multiple word prototypes", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [ |
| "H" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
| "volume": "1", |
| "issue": "", |
| "pages": "873--882", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers -Volume 1. pages 873-882.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Sensembed: Learning sense embeddings for word and relational similarity", |
| "authors": [ |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Iacobacci", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Taher Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "95--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers). pages 95-105.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Ontologically grounded multi-sense representation learning for semantic vector space models", |
| "authors": [ |
| { |
| "first": "Sujay", |
| "middle": [ |
| "Kumar" |
| ], |
| "last": "Jauhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "683--693", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense repre- sentation learning for semantic vector space models. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies. pages 683-693.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Neural word embedding as implicit matrix factorization", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems 27", |
| "volume": "", |
| "issue": "", |
| "pages": "2177--2185", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in Neural Information Processing Systems 27. pages 2177-2185.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Improving distributional similarity with lessons learned from word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "211--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics 3:211-225.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Do multi-sense embeddings improve natural language understanding?", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1722--1732", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense em- beddings improve natural language understanding? In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing. pages 1722-1732.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning context-sensitive word embeddings with neural tensor skip-gram model", |
| "authors": [ |
| { |
| "first": "Pengfei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xipeng", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanjing", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 24th International Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1284--1290", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2015a. Learning context-sensitive word embeddings with neural tensor skip-gram model. In Proceedings of the 24th International Conference on Artificial In- telligence. pages 1284-1290.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Topical word embeddings", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tat-Seng", |
| "middle": [], |
| "last": "Chua", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2418--2424", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015b. Topical word embeddings. In AAAI Conference on Artificial Intelligence. pages 2418- 2424.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Better word representations with recursive neural networks for morphology", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "104--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Richard Socher, and Christopher Man- ning. 2013. Better word representations with recur- sive neural networks for morphology. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning. pages 104-113.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", |
| "authors": [ |
| { |
| "first": "Xuezhe", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1064--1074", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers). pages 1064-1074.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR abs/1301.3781.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their com- positionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Event embeddings for semantic script modeling", |
| "authors": [ |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Modi", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "75--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashutosh Modi. 2016. Event embeddings for seman- tic script modeling. In Proceedings of the Confer- ence on Computational Natural Language Learning. pages 75-83.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Inducing neural models of script knowledge", |
| "authors": [ |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Modi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "49--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashutosh Modi and Ivan Titov. 2014. Inducing neu- ral models of script knowledge. In Proceedings of the Eighteenth Conference on Computational Natu- ral Language Learning. pages 49-57.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Modelling semantic expectation: Using script knowledge for referent prediction", |
| "authors": [ |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Modi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Vera", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "31--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashutosh Modi, Ivan Titov, Vera Demberg, Asad Say- eed, and Manfred Pinkal. 2017. Modelling seman- tic expectation: Using script knowledge for refer- ent prediction. Transactions of the Association for Computational Linguistics 5:31-44.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Efficient nonparametric estimation of multiple embeddings per word in vector space", |
| "authors": [ |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Neelakantan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeevan", |
| "middle": [], |
| "last": "Shankar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1059--1069", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP). pages 1059-1069.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Improving Topic Models with Latent Feature Word Representations", |
| "authors": [ |
| { |
| "first": "Dat", |
| "middle": [ |
| "Quoc" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Billingsley", |
| "suffix": "" |
| }, |
| { |
| "first": "Lan", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "299--313", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015a. Improving Topic Models with Latent Feature Word Representations. Trans- actions of the Association for Computational Lin- guistics 3:299-313.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Dat", |
| "middle": [ |
| "Quoc" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dras", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dat Quoc Nguyen, Mark Dras, and Mark Johnson. 2017. A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Pars- ing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Uni- versal Dependencies.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Improving Topic Coherence with Latent Feature Word Representations in MAP Estimation for Topic Modeling", |
| "authors": [ |
| { |
| "first": "Dat", |
| "middle": [ |
| "Quoc" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kairit", |
| "middle": [], |
| "last": "Sirts", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Australasian Language Technology Association Workshop 2015", |
| "volume": "", |
| "issue": "", |
| "pages": "116--121", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dat Quoc Nguyen, Kairit Sirts, and Mark Johnson. 2015b. Improving Topic Coherence with Latent Feature Word Representations in MAP Estimation for Topic Modeling. In Proceedings of the Aus- tralasian Language Technology Association Work- shop 2015. pages 116-121.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). pages 1532- 1543.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "De-conflated semantic representations", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Taher Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Collier", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1680--1690", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1680-1690.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Co-learning of word representations and morpheme representations", |
| "authors": [ |
| { |
| "first": "Siyu", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Qing", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Bian", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tie-Yan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COL-ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "141--150", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of word representations and morpheme representations. In Proceedings of COL- ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. pages 141-150.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Multiview LSA: Representation Learning via Generalized CCA", |
| "authors": [ |
| { |
| "first": "Pushpendre", |
| "middle": [], |
| "last": "Rastogi", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Raman", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "556--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pushpendre Rastogi, Benjamin Van Durme, and Ra- man Arora. 2015. Multiview LSA: Representation Learning via Generalized CCA. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies. pages 556-566.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Software Framework for Topic Modelling with Large Corpora", |
| "authors": [ |
| { |
| "first": "Radim", |
| "middle": [], |
| "last": "\u0158eh\u016f\u0159ek", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [], |
| "last": "Sojka", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
| "volume": "", |
| "issue": "", |
| "pages": "45--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. pages 45-50.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Multi-prototype vector-space models of word meaning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word mean- ing. In Human Language Technologies: The 2010", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "109--117", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics. pages 109-117.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes", |
| "authors": [ |
| { |
| "first": "Sascha", |
| "middle": [], |
| "last": "Rothe", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1793--1803", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Autoex- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing, Vol- ume 1: Long Papers. pages 1793-1803.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Evaluation methods for unsupervised word embeddings", |
| "authors": [ |
| { |
| "first": "Tobias", |
| "middle": [], |
| "last": "Schnabel", |
| "suffix": "" |
| }, |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Labutov", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mimno", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "298--307", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 298-307.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Symmetric pattern based word embeddings for improved word similarity prediction", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "258--267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for im- proved word similarity prediction. In Proceedings of CoNLL 2015. pages 258-267.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "The westbury lab wikipedia corpus", |
| "authors": [ |
| { |
| "first": "Cyrus", |
| "middle": [], |
| "last": "Shaoul", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Westbury", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cyrus Shaoul and Chris Westbury. 2010. The westbury lab wikipedia corpus. Edmonton, AB: University of Alberta .", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Perelygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1631--1642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing. pages 1631-1642.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Proceedings of the 27th International Conference on Neural Information Processing Sys- tems. pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A probabilistic model for learning multi-prototype word embeddings", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanjun", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Bian", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Enhong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Tie-Yan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "151--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilis- tic model for learning multi-prototype word embed- dings. In Proceedings of COLING 2014, the 25th In- ternational Conference on Computational Linguis- tics: Technical Papers. pages 151-160.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Word representations via gaussian embedding. International Conference on Learning Representations (ICLR)", |
| "authors": [ |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Vilnis", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word rep- resentations via gaussian embedding. International Conference on Learning Representations (ICLR) .", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Sense-aware semantic analysis: A multi-prototype word representation model using wikipedia", |
| "authors": [ |
| { |
| "first": "Zhaohui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "C. Lee", |
| "middle": [], |
| "last": "Giles", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2188--2194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaohui Wu and C. Lee Giles. 2015. Sense-aware se- mantic analysis: A multi-prototype word represen- tation model using wikipedia. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelli- gence. pages 2188-2194.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Improving short text classification by learning vector representations of both words and hidden topics", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Guoqiang", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Knowledge-Based Systems", |
| "volume": "102", |
| "issue": "", |
| "pages": "76--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Zhang and Guoqiang Zhong. 2016. Improv- ing short text classification by learning vector representations of both words and hidden topics. Knowledge-Based Systems 102:76-86.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "text": "in which \u2295 is the concatenation operation and Pr (t|c) is inferred from the topic models by considering context c as a document. GlobalSim only regards word embeddings,", |
| "type_str": "table", |
| "content": "<table><tr><td>Model</td><td>RW</td><td colspan=\"4\">SIMLEX SCWS WS353 MEN</td></tr><tr><td>Huang et al. (2012)</td><td>-</td><td>-</td><td colspan=\"2\">58.6 71.3</td><td>-</td></tr><tr><td>Luong et al. (2013)</td><td colspan=\"2\">34.36 -</td><td colspan=\"3\">48.48 64.58 -</td></tr><tr><td>Qiu et al. (2014)</td><td colspan=\"2\">32.13 -</td><td colspan=\"3\">53.40 65.19 -</td></tr><tr><td>Neelakantan et al. (2014)</td><td>-</td><td>-</td><td colspan=\"2\">65.5 69.2</td><td>-</td></tr><tr><td>Chen et al. (2014)</td><td>-</td><td>-</td><td colspan=\"2\">64.2 -</td><td>-</td></tr><tr><td>Hill et al. (2015)</td><td>-</td><td>41.4</td><td>-</td><td>65.5</td><td>69.9</td></tr><tr><td colspan=\"2\">Vilnis and McCallum (2015) -</td><td>32.23</td><td>-</td><td colspan=\"2\">65.49 71.31</td></tr><tr><td>Schnabel et al. (2015)</td><td>-</td><td>-</td><td>-</td><td>64.0</td><td>70.7</td></tr><tr><td>Rastogi et al. (2015)</td><td colspan=\"2\">32.9 36.7</td><td colspan=\"2\">65.6 70.8</td><td>73.9</td></tr><tr><td colspan=\"2\">Flekova and Gurevych (2016) -</td><td>-</td><td>-</td><td>-</td><td>74.26</td></tr><tr><td>Word2Vec Skip-gram</td><td colspan=\"2\">32.64 38.20</td><td colspan=\"3\">66.37 71.61 75.49</td></tr><tr><td>MSWE-1 50</td><td colspan=\"2\">34.85 38.77</td><td colspan=\"3\">66.83 72.40 76.23</td></tr><tr><td>MSWE-1 200</td><td colspan=\"2\">35.27 38.70</td><td colspan=\"3\">66.80 72.05 76.05</td></tr><tr><td>MSWE-2 50</td><td colspan=\"2\">34.98 38.79</td><td colspan=\"3\">66.61 71.71 75.90</td></tr><tr><td>MSWE-2 200</td><td colspan=\"2\">35.56 39.19</td><td colspan=\"3\">66.65 72.29 76.37</td></tr></table>" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "text": "Spearman's rank correlation (\u03c1 \u00d7 100) on SCWS, using AvgSim and AvgSimC.", |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |