{
"paper_id": "D16-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:35:10.480508Z"
},
"title": "Context-Dependent Sense Embedding *",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Qiu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": "lqiu@apex.sjtu.edu.cn"
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embedding has been widely studied and proven helpful in solving many natural language processing tasks. However, the ambiguity of natural language is always a problem on learning high quality word embeddings. A possible solution is sense embedding which trains embedding for each sense of words instead of each word. Some recent work on sense embedding uses context clustering methods to determine the senses of words, which is heuristic in nature. Other work creates a probabilistic model and performs word sense disambiguation and sense embedding iteratively. However, most of the previous work has the problems of learning sense embeddings based on imperfect word embeddings as well as ignoring the dependency between sense choices of neighboring words. In this paper, we propose a novel probabilistic model for sense embedding that is not based on problematic word embedding of polysemous words and takes into account the dependency between sense choices. Based on our model, we derive a dynamic programming inference algorithm and an Expectation-Maximization style unsupervised learning algorithm. The empirical studies show that our model outperforms the state-of-the-art model on a word sense induction task by a 13% relative gain.",
"pdf_parse": {
"paper_id": "D16-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embedding has been widely studied and proven helpful in solving many natural language processing tasks. However, the ambiguity of natural language is always a problem on learning high quality word embeddings. A possible solution is sense embedding which trains embedding for each sense of words instead of each word. Some recent work on sense embedding uses context clustering methods to determine the senses of words, which is heuristic in nature. Other work creates a probabilistic model and performs word sense disambiguation and sense embedding iteratively. However, most of the previous work has the problems of learning sense embeddings based on imperfect word embeddings as well as ignoring the dependency between sense choices of neighboring words. In this paper, we propose a novel probabilistic model for sense embedding that is not based on problematic word embedding of polysemous words and takes into account the dependency between sense choices. Based on our model, we derive a dynamic programming inference algorithm and an Expectation-Maximization style unsupervised learning algorithm. The empirical studies show that our model outperforms the state-of-the-art model on a word sense induction task by a 13% relative gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributed representation of words (aka word embedding) aims to learn continuous-valued vectors to represent words based on their context in a large corpus. They can serve as input features for algorithms of natural language processing (NLP) tasks. High quality word embeddings have been proven helpful in many NLP tasks (Collobert and Weston, 2008; Turian et al., 2010; Collobert et al., 2011; Maas et al., 2011; Chen and Manning, 2014) . Recently, with the development of deep learning, many novel neural network architectures are proposed for training high quality word embeddings (Mikolov et al., 2013a; Mikolov et al., 2013b) .",
"cite_spans": [
{
"start": 322,
"end": 350,
"text": "(Collobert and Weston, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 351,
"end": 371,
"text": "Turian et al., 2010;",
"ref_id": "BIBREF29"
},
{
"start": 372,
"end": 395,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 396,
"end": 414,
"text": "Maas et al., 2011;",
"ref_id": "BIBREF17"
},
{
"start": 415,
"end": 438,
"text": "Chen and Manning, 2014)",
"ref_id": "BIBREF3"
},
{
"start": 585,
"end": 608,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF18"
},
{
"start": 609,
"end": 631,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, since natural language is intrinsically ambiguous, learning one vector for each word may not cover all the senses of the word. In the case of a multi-sense word, the learned vector will be around the average of all the senses of the word in the embedding space, and therefore may not be a good representation of any of the senses. A possible solution is sense embedding which trains a vector for each sense of a word. There are two key steps in training sense embeddings. First, we need to perform word sense disambiguation (WSD) or word sense induction (WSI) to determine the senses of words in the training corpus. Then, we need to train embedding vectors for word senses according to their contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early work on sense embedding (Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Kageback et al., 2015; Li and Jurafsky, 2015) proposes context clustering methods which determine the sense of a word by clustering aggregated embeddings of words in its context. This kind of methods is heuristic in nature and relies on external knowledge from lexicon like WordNet (Miller, 1995) .",
"cite_spans": [
{
"start": 30,
"end": 58,
"text": "(Reisinger and Mooney, 2010;",
"ref_id": "BIBREF25"
},
{
"start": 59,
"end": 78,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF12"
},
{
"start": 79,
"end": 104,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 105,
"end": 127,
"text": "Kageback et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 128,
"end": 150,
"text": "Li and Jurafsky, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 387,
"end": 401,
"text": "(Miller, 1995)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, sense embedding methods based on complete probabilistic models and well-defined learning objective functions (Tian et al., 2014; Bartunov et al., 2016; Jauhar et al., 2015) become more popular. These methods regard the choice of senses of the words in a sentence as hidden variables. Learning is therefore done with expectationmaximization style algorithms, which alternate between inferring word sense choices in the training corpus and learning sense embeddings.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Tian et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 139,
"end": 161,
"text": "Bartunov et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 162,
"end": 182,
"text": "Jauhar et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common problem with these methods is that they model the sense embedding of each center word dependent on the word embeddings of its context words. As we previously explained, word embedding of a polysemous word is not a good representation and may negatively influence the quality of inference and learning. Furthermore, these methods choose the sense of each word in a sentence independently, ignoring the dependency that may exist between the sense choices of neighboring words. We argue that such dependency is important in word sense disambiguation and therefore helpful in learning sense embeddings. For example, consider the sentence \"He cashed a check at the bank\". Both \"check\" and \"bank\" are ambiguous here. Although the two words hint at banking related senses, the hint is not decisive (as an alternative interpretation, they may represent a check mark at a river bank). Fortunately, \"cashed\" is not ambiguous and it can help disambiguate \"check\". However, if we consider a small context window in sense embedding, then \"cashed\" cannot directly help disambiguate \"bank\". We need to rely on the dependency between the sense choices of \"check\" and \"bank\" to disambiguate \"bank\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel probabilistic model for sense embedding that takes into account the dependency between sense choices of neighboring words. We do not learn any word embeddings in our model and hence avoid the problem with embedding polysemous words discussed above. Our model has a similar structure to a high-order hidden Markov model. It contains a sequence of observable words and latent senses and models the dependency between each word-sense pair and between neighboring senses in the sequence. The energy of neighboring senses can be modeled using existing word embedding approaches such as CBOW and Skip-gram (Mikolov et al., 2013a; Mikolov et al., 2013b ). Given the model and a sentence, we can perform exact inference using dynamic programming and get the optimal sense sequence of the sentence. Our model can be learned from an unannotated corpus by optimizing a max-margin objective using an algorithm similar to hard-EM.",
"cite_spans": [
{
"start": 634,
"end": 657,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF18"
},
{
"start": 658,
"end": 679,
"text": "Mikolov et al., 2013b",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a complete probabilistic model for sense embedding. Unlike previous work, we model the dependency between sense choices of neighboring words and do not learn sense embeddings dependent on problematic word embeddings of polysemous words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Based on our proposed model, we derive an exact inference algorithm and a max-margin learning algorithm which do not rely on external knowledge from any knowledge base or lexicon (except that we determine the numbers of senses of polysemous words according to an existing sense inventory).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. The performance of our model on contextual word similarity task is competitive with previous work and we obtain a 13% relative gain compared with previous state-of-the-art methods on the word sense induction task of SemEval-2013.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. We introduce related work in section 2. Section 3 describes our models and algorithms in detail. We present our experiments and results in section 4. In section 5, a conclusion is given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distributed representation of words (aka word embedding) was proposed in 1986 (Hinton, 1986; Rumelhart et al., 1986) . In 2003 , Bengio et al. (2003 proposed a neural network architecture to train language models which produced word embeddings in the neural network. Mnih and Hinton (2007) replaced the global normalization layer of Bengio's model with a tree-structure to accelerate the training process. Collobert and Weston (2008) introduced a max-margin objective function to replace the most computationally expensive maxlikelihood objective function. Recently proposed Skip-gram model, CBOW model and GloVe model (Mikolov et al., 2013a; Mikolov et al., 2013b; Pennington et al., 2014) were more efficient than traditional models by introducing a log-linear layer and making it possible to train word embeddings with a large scale corpus. With the development of neural network and deep learning techniques, there have been a lot of work based on neural network models to obtain word embedding (Turian et al., 2010; Collobert et al., 2011; Maas et al., 2011; Chen and Manning, 2014) . All of them have proven that word embedding is helpful in NLP tasks.",
"cite_spans": [
{
"start": 78,
"end": 92,
"text": "(Hinton, 1986;",
"ref_id": "BIBREF11"
},
{
"start": 93,
"end": 116,
"text": "Rumelhart et al., 1986)",
"ref_id": "BIBREF27"
},
{
"start": 119,
"end": 126,
"text": "In 2003",
"ref_id": "BIBREF2"
},
{
"start": 127,
"end": 148,
"text": ", Bengio et al. (2003",
"ref_id": "BIBREF2"
},
{
"start": 267,
"end": 289,
"text": "Mnih and Hinton (2007)",
"ref_id": "BIBREF21"
},
{
"start": 406,
"end": 433,
"text": "Collobert and Weston (2008)",
"ref_id": "BIBREF5"
},
{
"start": 619,
"end": 642,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF18"
},
{
"start": 643,
"end": 665,
"text": "Mikolov et al., 2013b;",
"ref_id": "BIBREF19"
},
{
"start": 666,
"end": 690,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 999,
"end": 1020,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF29"
},
{
"start": 1021,
"end": 1044,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 1045,
"end": 1063,
"text": "Maas et al., 2011;",
"ref_id": "BIBREF17"
},
{
"start": 1064,
"end": 1087,
"text": "Chen and Manning, 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, the models above assumed that one word has only one vector as its representation which is problematic for polysemous words. Reisinger and Mooney (2010) proposed a method for constructing multiple sense-specific representation vectors for one word by performing word sense disambiguation with context clustering. Huang et al. 2012further extended this context clustering method and incorporated global context to learn multi-prototype representation vectors. extended the context clustering method and performed word sense disambiguation according to sense glosses from WordNet (Miller, 1995) . Neelakantan et al. (2014) proposed an extension of the Skip-gram model combined with context clustering to estimate the number of senses for each word as well as learn sense embedding vectors. Instead of performing word sense disambiguation tasks, Kageback et al. (2015) proposed the instance-context embedding method based on context clustering to perform word sense induction tasks. Li and Jurafsky (2015) introduced a multi-sense embedding model based on the Chinese Restaurant Process and applied it to several natural language understanding tasks.",
"cite_spans": [
{
"start": 133,
"end": 160,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF25"
},
{
"start": 586,
"end": 600,
"text": "(Miller, 1995)",
"ref_id": "BIBREF20"
},
{
"start": 603,
"end": 628,
"text": "Neelakantan et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 851,
"end": 873,
"text": "Kageback et al. (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since the context clustering based models are heuristic in nature and rely on external knowledge, recent work tends to create probabilistic models for learning sense embeddings. Tian et al. (2014) proposed a multi-prototype Skip-gram model and designed an Expectation-Maximization (EM) algorithm to do word sense disambiguation and learn sense embedding vectors iteratively. Jauhar et al. (2015) extended the EM training framework and retrofitted embedding vectors to the ontology of WordNet. Bartunov et al. (2016) proposed a nonparametric Bayesian extension of Skip-gram to automatically learn the required numbers of representations for all words and perform word sense induction tasks.",
"cite_spans": [
{
"start": 178,
"end": 196,
"text": "Tian et al. (2014)",
"ref_id": "BIBREF28"
},
{
"start": 493,
"end": 515,
"text": "Bartunov et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose the context-dependent sense embedding model for training high quality sense embeddings which takes into account the dependency between sense choices of neighboring words. Unlike pervious work, we do not learn any word embeddings in our model and hence avoid the problem with embedding polysemous words discussed previously. In this section, we will introduce our model and describe our inference and learning algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Dependent Sense Embedding Model",
"sec_num": "3"
},
{
"text": "We begin with the notation in our model. In a sentence, let w i be the i th word of the sentence and s i be the sense of the i th word. S(w) denotes the set of all the senses of word w. We assume that the sets of senses of different words do not overlap. Therefore, in this paper a word sense can be seen as a lexeme of the word (Rothe and Schutze, 2015) . Our model can be represented as a Markov network shown in Figure 1 . It is similar to a highorder hidden Markov model. The model contains a sequence of observable words (w 1 , w 2 , . . .) and latent senses (s 1 , s 2 , . . .). It models the dependency between each word-sense pair and between neighboring senses in the sequence. The energy function is formulated as follows:",
"cite_spans": [
{
"start": 329,
"end": 354,
"text": "(Rothe and Schutze, 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 415,
"end": 423,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(w, s) = i E 1 (w i , s i ) + E 2 (s i\u2212k , . . . , s i+k )",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Here w = {w i |1 \u2264 i \u2264 l} is the set of words in a sentence with length l and s = {s i |1 \u2264 i \u2264 l} is the set of senses. The function E 1 models the dependency between a word-sense pair. As we assume that the sets of senses of different words do not overlap, we can formulate E 1 as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E 1 (w i , s i ) = 0 s i \u2208 S(w i ) +\u221e s i / \u2208 S(w i )",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Here we assume that all the matched word-sense pairs have the same energy, but it would also be interesting to model the degrees of matching with different energy values in E 1 . In Equation 1, the function E 2 models the compatibility of neighboring senses in a context window with fixed size k. Existing embedding approaches like CBOW and Skipgram (Mikolov et al., 2013a; Mikolov et al., 2013b) can be used here to define E 2 . The formulation using CBOW is as follows:",
"cite_spans": [
{
"start": 350,
"end": 373,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF18"
},
{
"start": 374,
"end": 396,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E 2 (s i\u2212k , . . . , s i+k ) = \u2212 \u03c3 i\u2212k\u2264j\u2264i+k,j =i V T (s j )V (s i )",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Here V (s) and V (s) are the input and output embedding vectors of sense s. The function \u03c3 is an activation function and we use the sigmoid function here in our model. The formulation using Skip-gram can be defined in a similar way:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E 2 (s i\u2212k , . . . , s i+k ) = \u2212 i\u2212k\u2264j\u2264i+k,j =i \u03c3 V T (s j )V (s i )",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "In this section, we introduce our inference algorithm. Given the model and a sentence w, we want to infer the most likely values of the hidden variables (i.e. the optimal sense sequence of the sentence) that minimize the energy function in Equation 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s * = arg min s E(w, s)",
"eq_num": "(5)"
}
],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "We use dynamic programming to do inference which is similar to the Viterbi algorithm of the hidden Markov model. Specifically, for every valid assignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "A i\u22122k , . . . , A i\u22121 of every sub- sequence of senses s i\u22122k , . . . , s i\u22121 , we define m(A i\u22122k , . . . , A i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "as the energy of the best sense sequence up to position i \u2212 1 that is consistent with the assignment A i\u22122k , . . . , A i\u22121 . We start with m(A 1 , . . . , A 2k ) = 0 and then recursively compute m in a left-to-right forward process based on the update formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "m(A i\u22122k+1 , . . . , A i ) = min A i\u22122k m(A i\u22122k , . . . , A i\u22121 ) + E 1 (w i , A i ) + E 2 (A i\u22122k , . . . , A i ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "Once we finish the forward process, we can retrieve the best sense sequence with a backward process. The time complexity of the algorithm is O(n 4k l) where n is the maximal number of senses of a word. Because most words in a typical sentence have either a single sense or far less than n senses, the actual running time of the algorithm is very fast.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2"
},
{
"text": "In this section, we introduce our unsupervised learning algorithm. In learning, we want to learn all the input and output sense embedding vectors that optimize the following max-margin objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "\u0398 * = arg min \u0398 w\u2208C min s w i=1 sneg\u2208Sneg(w i ) max 1 + E 1 (w i , s i ) + E 2 (s i\u2212k , . . . , s i+k )\u2212 E 2 (s i\u2212k , . . . , s i\u22121 , s neg , s i+1 , . . . , s i+k ), 0 (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "Here \u0398 is the set of all the parameters including V and V for all the senses. C is the set of training sentences. Our learning objective is similar to the negative sampling and max-margin objective proposed for word embedding (Collobert and Weston, 2008 ). S neg (w i ) denotes the set of negative samples of senses of word w i which is defined with the following strategy. For a polysemous word w i , S neg (w i ) = S(w i )\\{s i }. For the other words with a single sense, S neg (w i ) is a set of randomly selected senses of a fixed size.",
"cite_spans": [
{
"start": 226,
"end": 253,
"text": "(Collobert and Weston, 2008",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "The objective in Equation 7 can be optimized by coordinate descent which in our case is equivalent to the hard Expectation-Maximization algorithm. In the hard E step, we run the inference algorithm using the current model parameters to get the optimal sense sequences of the training sentences. In the M step, with the sense sequences s of all the sentences fixed, we learn sense embedding vectors. Assume we use the CBOW model for E 2 (Equation 3), then the M-step objective function is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0398 * = arg min \u0398 w\u2208C w i=1 sneg\u2208Sneg(w i ) max 1 \u2212 \u03c3( i\u2212k\u2264j\u2264i+k,j =i V (s j ) T V (s i )) + \u03c3( i\u2212k\u2264j\u2264i+k,j =i V (s j ) T V (s neg )), 0",
"eq_num": "(8)"
}
],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "Here E 1 is omitted because the sense sequences produced from the E-step always have zero E 1 value. Similarly, if we use the Skip-gram model for E 2 (Equation 4), then the M-step objective function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0398 * = arg min \u0398 w\u2208C w i=1 i\u2212k\u2264j\u2264i+k,j =i sneg\u2208Sneg(w i ) max 1 \u2212 \u03c3(V (s j ) T V (s i )) + \u03c3(V (s j ) T V (s neg )), 0",
"eq_num": "(9)"
}
],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "We optimize the M-step objective function using stochastic gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "We use a mini batch version of the hard EM algorithm. For each sentence in the training corpus, we run E-step to infer its sense sequence and then immediately run M-step (for 1 iteration of stochastic gradient descent) to update the model parameters based on the senses in the sentence. Therefore, the batch size of our algorithm depends on the length of each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "The advantage of using mini batch is twofold. First, while our learning objective is highly nonconvex (Tian et al., 2014) , the randomness in mini batch hard EM may help us avoid trapping into local optima. Second, the model parameters are updated more frequently in mini batch hard EM, resulting in faster convergence.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Tian et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "Note that before running hard-EM, we need to determine, for each word w, the size of S(w). In our experiments, we used the sense inventory provided by Coarse-Grained English All-Words Task of SemEval-2007 Task 07 (Navigli et al., 2007) to determine the number of senses for each word. The sense inventory is a coarse version of WordNet sense inventory. We do not use the WordNet sense inventory because the senses in WordNet are too finegrained and are difficult to recognize even for human annotators (Edmonds and Kilgarriff, 2002 ). Since we do not link our learned senses with external sense inventories, our approach can be seen as performing WSI instead of WSD.",
"cite_spans": [
{
"start": 213,
"end": 235,
"text": "(Navigli et al., 2007)",
"ref_id": "BIBREF22"
},
{
"start": 502,
"end": 531,
"text": "(Edmonds and Kilgarriff, 2002",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "This section presents our experiments and results. First, we describe our experimental setup including the training corpus and the model configuration. Nearest Neigbors bank 1 banking, lender, loan bank 2 river, canal, basin bank 3 slope, tilted, slant apple 1 macintosh, imac, blackberry apple 2 peach, cherry, pie date 1 birthdate, birth, day date 2 appointment, meet, dinner fox 1 cbs, abc, nbc fox 2 wolf, deer, rabbit Then, we perform a qualitative evaluation on our model by presenting the nearest neighbors of senses of some polysemous words. Finally, we introduce two different tasks and show the experimental results on these tasks respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 304,
"text": "Nearest Neigbors bank 1 banking, lender, loan bank 2 river, canal, basin bank 3 slope, tilted, slant apple 1 macintosh, imac, blackberry apple 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our training corpus is the commonly used Wikipedia corpus. We dumped the October 2015 snapshot of the Wikipedia corpus which contains 3.6 million articles. In our experiments, we removed the infrequent words with less than 20 occurrences and the training corpus contains 1.3 billion tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "4.1.1"
},
{
"text": "In our experiments, we set the context window size to 5 (5 words before and after the center word). The embedding vector size is set to 300. The size of negative sample sets of single-sense words is set to 5. We trained our model using AdaGrad stochastic gradient decent (Duchi et al., 2010) with initial learning rate set to 0.025. Our configuration is similar to that of previous work.",
"cite_spans": [
{
"start": 271,
"end": 291,
"text": "(Duchi et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Configuration",
"sec_num": "4.1.2"
},
{
"text": "Similar to Word2vec, we initialized our model by randomizing the sense embedding vectors. The number of senses of all the words is determined with the sense inventory provided by Coarse-Grained English All-Words Task of SemEval-2007 Task 07 (Navigli et al., 2007) as we explained in section 3.3.",
"cite_spans": [
{
"start": 241,
"end": 263,
"text": "(Navigli et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Configuration",
"sec_num": "4.1.2"
},
{
"text": "In this section, we give a qualitative evaluation of our model by presenting the nearest neighbors of the senses of some polysemous words. Table 1 shows the results of our qualitative evaluation. We list several polysemous words in the table, and for each word, some typical senses of the word are picked. The nearest neighbors of each sense are listed aside. We used the cosine distance to calculate the distance between sense embedding vectors and find the nearest neighbors.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.2"
},
{
"text": "In Table 1 , we can observe that our model produces good senses for polysemous words. For example, the word \"bank\" can be seen to have three different sense embedding vectors. The first one means the financial institution. The second one means the sloping land beside water. The third one means the action of tipping laterally.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.2"
},
{
"text": "This section gives a quantitative evaluation of our model on word similarity tasks. Word similarity tasks evaluate a model's performance with the Spearman's rank correlation between the similarity scores of pairs of words given by the model and the manual labels. However, traditional word similarity tasks like Wordsim-353 (Finkelstein et al., 2001) are not suitable for evaluating sense embedding models because these datasets do not include enough ambiguous words and there is no context information for the models to infer and disambiguate the senses of the words. To overcome this issue, Huang et al. (2012) released a new dataset named Stanford's Contextual Word Similarities (SCWS) dataset. The dataset consists of 2003 pairs of words along with human labelled similarity scores and the sentences containing these words.",
"cite_spans": [
{
"start": 324,
"end": 350,
"text": "(Finkelstein et al., 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity in Context",
"sec_num": "4.3"
},
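As a minimal illustration of the evaluation metric described above (not code from the paper), the following sketch computes Spearman's rank correlation for tie-free lists of model scores and human scores; the function name and the no-ties simplification are our own assumptions.

```python
def spearman_rho(xs, ys):
    # Spearman's rank correlation between two score lists.
    # Assumes no tied values, so ranks are a permutation of 0..n-1.
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # With distinct ranks, rho reduces to 1 - 6 * sum(d^2) / (n(n^2 - 1)).
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Perfectly agreeing rankings give rho = 1 and fully reversed rankings give rho = -1; real evaluations on SCWS fall in between.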
{
"text": "Given a pair of words and their contexts, we can perform inference using our model to disambiguate the questioned words. A similarity score can be calculated with the cosine distance between the two embedding vectors of the inferred senses of the questioned words. We also propose another method for calculating similarity scores. In the inference process, we compute the energy of each sense choice of the questioned word and consider the negative energy as the confidence of the sense choice. Then we calculate the cosine similarity between all pairs of senses of the questioned words and compute the average of similarity weighted by the confidence of the senses. The first method is named HardSim and the Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset. In this table, \u03c1 refers to the Spearman's rank correlation and a higher value of \u03c1 indicates better performance. The baseline performances are from Huang et al. (2012), , Neelakantan et al. (2014) , Li and Jurafsky (2015) , Tian et al. (2014) and Bartunov et al. (2016) . Here Ours + CBOW denotes our model with a CBOW based energy function and Ours + Skip-gram denotes our model with a Skip-gram based energy function. The results above the thick line are the models based on context clustering methods and the results below the thick line are the probabilistic models including ours. The similarity metrics of context clustering based models are AvgSim and AvgSimC proposed by Reisinger and Mooney (2010) . Tian et al. (2014) propose two metrics Model M and Model W which are similar to our HardSim and SoftSim metrics.",
"cite_spans": [
{
"start": 974,
"end": 999,
"text": "Neelakantan et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 1002,
"end": 1024,
"text": "Li and Jurafsky (2015)",
"ref_id": "BIBREF16"
},
{
"start": 1027,
"end": 1045,
"text": "Tian et al. (2014)",
"ref_id": "BIBREF28"
},
{
"start": 1050,
"end": 1072,
"text": "Bartunov et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 1482,
"end": 1509,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF25"
},
{
"start": 1512,
"end": 1530,
"text": "Tian et al. (2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 709,
"end": 716,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Word Similarity in Context",
"sec_num": "4.3"
},
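The HardSim and SoftSim scoring schemes described above can be sketched as follows. This is an illustrative reimplementation under our own assumptions, not the authors' code: each word is given a list of sense vectors plus a list of per-sense confidence scores (the negative energies from inference), and the function names are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hard_sim(senses_w1, senses_w2, conf_w1, conf_w2):
    # HardSim: cosine similarity between the single most confident
    # (i.e., inferred) sense of each word.
    s1 = senses_w1[max(range(len(conf_w1)), key=conf_w1.__getitem__)]
    s2 = senses_w2[max(range(len(conf_w2)), key=conf_w2.__getitem__)]
    return cosine(s1, s2)

def soft_sim(senses_w1, senses_w2, conf_w1, conf_w2):
    # SoftSim: average cosine similarity over all sense pairs,
    # weighted by the normalized confidences of the senses.
    z1, z2 = sum(conf_w1), sum(conf_w2)
    return sum(
        (c1 / z1) * (c2 / z2) * cosine(s1, s2)
        for s1, c1 in zip(senses_w1, conf_w1)
        for s2, c2 in zip(senses_w2, conf_w2)
    )
```

SoftSim degrades gracefully when the inference is uncertain: low-confidence senses still contribute, but only in proportion to their weight.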
{
"text": "From Table 2 , we can observe that our model outperforms the other probabilistic models and is not as good as the best context clustering based model. The context clustering based models are overall better than the probabilistic models on this task. A possible reason is that most context clustering based methods make use of more external knowledge than probabilistic models. However, note that Faruqui et al. (2016) presented several problems associated with the evaluation of word vectors on word similarity datasets and pointed out that the use of word similarity tasks for evaluation of word vectors is not sustainable. Bartunov et al. (2016) also suggest that SCWS should be of limited use for evaluating word representation models. Therefore, the results on this task shall be taken with caution. We consider that more realistic natural language processing tasks like word sense induction are better for evaluating sense embedding models.",
"cite_spans": [
{
"start": 396,
"end": 417,
"text": "Faruqui et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 625,
"end": 647,
"text": "Bartunov et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Word Similarity in Context",
"sec_num": "4.3"
},
{
"text": "In this section, we present an evaluation of our model on the word sense induction (WSI) tasks. The WSI task aims to discover the different meanings for words used in sentences. Unlike a word sense disambiguation (WSD) system, a WSI system does not link the sense annotation results to an existing sense inventory. Instead, it produces its own sense inventory and links the sense annotation results to this sense inventory. Our model can be seen as a WSI system, so we can evaluate our model with WSI tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "4.4"
},
{
"text": "We used the dataset from task 13 of SemEval-2013 as our evaluation set (Jurgens and Klapaftis, 2013) . The dataset contains 4664 instances inflected from one of the 50 lemmas. Both single-sense instances and instances with a graded mixture of senses are included in the dataset. In this paper, we only consider the single sense instances. Jurgens and Klapaftis (2013) propose two fuzzy measures named Fuzzy B-Cubed (FBC) and Fuzzy Normalized Mutual Information (FNMI) for comparing fuzzy sense assignments from WSI systems. the FBC measure summarizes the performance per instance while the FNMI measure is based on sense clusters rather than instances. Table 3 shows the results of our contextdependent sense embedding models on this dataset. Here HM is the harmonic mean of FBC and FNMI. The result of AI-KU is from Baskaya et al. (2013) , MSSG is from Neelakantan et al. (2014) , ICEonline and ICE-kmeans are from Kageback et al. (2015) . Our models are denoted in the same way as in the previous section.",
"cite_spans": [
{
"start": 71,
"end": 100,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 367,
"text": "Jurgens and Klapaftis (2013)",
"ref_id": "BIBREF14"
},
{
"start": 817,
"end": 838,
"text": "Baskaya et al. (2013)",
"ref_id": "BIBREF1"
},
{
"start": 854,
"end": 879,
"text": "Neelakantan et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 916,
"end": 938,
"text": "Kageback et al. (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 653,
"end": 660,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "4.4"
},
{
"text": "From Table 3 , we can observe that our models Model FBC(%) FNMI(%) HM AI-KU 35.1 4.5 8.0 MSSG 45.9 3.7 6.8 ICE-online 48.7 5.5 9.9 ICE-kmeans 51.1 5.9 10.6 Ours + CBOW 53.8 6.3 11.3 Ours + Skip-gram 56.9 6.7 12.0 outperform the previous state-of-the-art models and achieve a 13% relative gain. It shows that our models can beat context clustering based models on realistic natural language processing tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "4.4"
},
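The HM column in Table 3 is the harmonic mean of FBC and FNMI; as a quick sanity check (our own sketch, not the authors' evaluation script), it reproduces the reported combined scores from the per-metric numbers:

```python
def harmonic_mean(a, b):
    # Harmonic mean, used in Table 3 to combine FBC and FNMI into
    # a single score (HM). Both inputs are percentages.
    return 2 * a * b / (a + b)
```

For example, the best model's FBC of 56.9 and FNMI of 6.7 combine to an HM of about 12.0, matching the table; the harmonic mean rewards systems that do well on both measures rather than excelling on one.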
{
"text": "In this paper we propose a novel probabilistic model for learning sense embeddings. Unlike previous work, we do not learn sense embeddings dependent on word embeddings and hence avoid the problem with inaccurate embeddings of polysemous words. Furthermore, we model the dependency between sense choices of neighboring words which can help us disambiguate multiple ambiguous words in a sentence. Based on our model, we derive a dynamic programming inference algorithm and an EM-style unsupervised learning algorithm which do not rely on external knowledge from any knowledge base or lexicon except that we determine the number of senses of polysemous words according to an existing sense inventory. We evaluate our model both qualitatively by case studying and quantitatively with the word similarity task and the word sense induction task. Our model is competitive with previous work on the word similarity task. On the word sense induction task, our model outperforms the state-ofthe-art model and achieves a 13% relative gain. For the future work, we plan to try learning our model with soft EM. Besides, we plan to use shared senses instead of lexemes in our model to improve the generality of our model. Also, we will study unsupervised methods to link the learned senses to existing inventories and to automatically determine the numbers of senses. Finally, we plan to evaluate our model with more NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Breaking sticks and ambiguities with adaptive skip-gram",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Bartunov",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kondrashkin",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Osokin",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Vetrov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, and Dmitry Vetrov. 2016. Breaking sticks and am- biguities with adaptive skip-gram.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ai-ku: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation",
"authors": [
{
"first": "Osman",
"middle": [],
"last": "Baskaya",
"suffix": ""
},
{
"first": "Enis",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "2",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osman Baskaya, Enis Sert, Volkan Cirik, and Deniz Yuret. 2013. Ai-ku: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation. In Second Joint Conference on Lexi- cal and Computational Semantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evalua- tion (SemEval 2013), pages 300-306.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Sbastien"
],
"last": "Sencal",
"suffix": ""
},
{
"first": "Frderic",
"middle": [],
"last": "Morin",
"suffix": ""
},
{
"first": "Jean Luc",
"middle": [],
"last": "Gauvain",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "6",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Holger Schwenk, Jean Sbastien Sencal, Frderic Morin, and Jean Luc Gauvain. 2003. A neu- ral probabilistic language model. Journal of Machine Learning Research, 3(6):1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural net- works. In EMNLP, pages 740-750.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and dis- ambiguation. In EMNLP, pages 1025-1035. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. In Proceedings of the 25th international conference on Machine learn- ing, pages 160-167. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "7",
"pages": "257--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7):257-269.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introduction to the special issue on evaluating word sense disambiguation systems",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2002,
"venue": "Natural Language Engineering",
"volume": "8",
"issue": "4",
"pages": "279--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Edmonds and Adam Kilgarriff. 2002. Introduc- tion to the special issue on evaluating word sense dis- ambiguation systems. Natural Language Engineering, 8(4):279-291.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Problems with evaluation of word embeddings using word similarity tasks",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.02276"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. arXiv preprint arXiv:1605.02276.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Placing search in context: the concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: the concept revisited. In Proceedings of international conference on World Wide Web, pages 406-414.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning distributed representations of concepts",
"authors": [
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the eighth annual conference of the cognitive science society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. E. Hinton. 1986. Learning distributed representations of concepts. In Proceedings of the eighth annual con- ference of the cognitive science society.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "H",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representa- tions via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics: Long Papers- Volume 1, pages 873-882. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ontologically grounded multi-sense representation learning for semantic vector space models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Sujay Kumar Jauhar",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "683--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense represen- tation learning for semantic vector space models. In Proc. NAACL, pages 683-693.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2013 task 13: Word sense induction for graded and non-graded senses",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "2",
"issue": "",
"pages": "290--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens and Ioannis Klapaftis. 2013. Semeval- 2013 task 13: Word sense induction for graded and non-graded senses. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Vol- ume 2: Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 290-299.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural context embeddings for automatic discovery of word senses",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Kageback",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Kageback, Fredrik Johansson, Richard Johans- son, and Devdatt Dubhashi. 2015. Neural context embeddings for automatic discovery of word senses. In Proceedings of NAACL-HLT, pages 25-32.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Do multi-sense embeddings improve natural language understanding",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1722--1732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense em- beddings improve natural language understanding? In EMNLP, pages 1722-1732. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Maas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daly",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies-Volume 1, pages 142-150. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. In Workshop at ICLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Three new graphical models for statistical language modelling",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Twenty-Fourth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the Twenty-Fourth International Con- ference on Machine Learning, pages 641-648.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semeval-2007 task 07: coarse-grained english all-words task",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2007,
"venue": "International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Kenneth C. Litkowski, and Orin Har- graves. 2007. Semeval-2007 task 07: coarse-grained english all-words task. In International Workshop on Semantic Evaluations, pages 30-35.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient nonparametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Jeevan",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1059--1069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In EMNLP, pages 1059-1069. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multiprototype vector-space models of word meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond J Mooney. 2010. Multi- prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 109- 117. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Schutze. 2015. Autoex- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics, pages 1793-1803. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning representation by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "6088",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representation by back- propagating errors. Nature, 323(6088):533-536.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A probabilistic model for learning multi-prototype word embeddings",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hanjun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In COLING, pages 151-160.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 384-394. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Context-Dependent Sense Embedding Model with window size k = 1",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "The nearest neighbors of senses of polysemous words",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "Spearman's rank correlation results on the SCWS",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Results of single-sense instances on task 13 of",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>SemEval-2013</td></tr></table>"
}
}
}
}