{
"paper_id": "Q15-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:07:56.521486Z"
},
"title": "A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"postCode": "60607",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"postCode": "60637",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "mbansal@ttic.edu"
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"postCode": "60637",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "kgimpel@ttic.edu"
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Ziebart",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"postCode": "60607",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "bziebart@uic.edu"
},
{
"first": "Clement",
"middle": [
"T"
],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"postCode": "60607",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.",
"pdf_parse": {
"paper_id": "Q15-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word sense induction (WSI) is the task of automatically discovering all senses of an ambiguous word in a corpus. The inputs to WSI are instances of the ambiguous word with its surrounding context. The output is a grouping of these instances into clusters corresponding to the induced senses. WSI is generally conducted as an unsupervised learning task, relying on the assumption that the surrounding context of a word indicates its meaning. Most previous work assumed that each instance is best labeled with a single sense, and therefore, that each instance belongs to exactly one sense cluster. However, recent work Jurgens, 2013) has shown that more than one sense can be used to interpret certain instances, due to context ambiguity and sense relatedness.",
"cite_spans": [
{
"start": 617,
"end": 631,
"text": "Jurgens, 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To handle these characteristics of WSI (unsupervised, senses represented by token clusters, multiple senses per instance), we consider approaches based on topic models. A topic model is an unsupervised method that discovers the semantic topics underlying a collection of documents. The most popular is latent Dirichlet allocation (LDA; Blei et al., 2003) , in which each topic is represented as a multinomial distribution over words, and each document is represented as a multinomial distribution over topics.",
"cite_spans": [
{
"start": 336,
"end": 354,
"text": "Blei et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One approach would be to run LDA on the instances for an ambiguous word, then simply interpret topics as induced senses (Brody and Lapata, 2009) . However, while sense and topic are related, they are distinct linguistic phenomena. Topics are assigned to entire documents and are expressed by all word tokens, while senses relate to a single ambiguous word and are expressed through the local context of that word. One possible approach would be to only keep the local context of each ambiguous word, discarding the global context. However, the topical information contained in the broader context, though it may not determine the sense directly, might still be useful for narrowing down the likely senses of the ambiguous word.",
"cite_spans": [
{
"start": 120,
"end": 144,
"text": "(Brody and Lapata, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider the ambiguous word cold. In the sentence \"His reaction to the experiments was cold\", the possible senses for cold include cold temperature, a cold sensation, common cold, or a negative emotional reaction. However, if we know that the topic of the document concerns the effects of low temperatures on physical health, then the negative emotional reaction sense should become less likely. Therefore, in this case, knowing the topic helps narrow down the set of plausible senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the same time, knowing the sense can also help determine possible topics. Consider a set of texts that all include the word cold. Without further information, the texts might discuss any of a number of possible topics. However, if the sense of cold is that of cold ischemia, then the most probable topics would be those related to organ transplantation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly ( \u00a74). When relating the sense and topic variables, a bidirectional edge is drawn between them to represent their cyclic dependence (Heckerman et al., 2001) . We perform inference using collapsed Gibbs sampling ( \u00a74.2), then estimate the sense distribution for each instance as the solution to the WSI task. We conduct experiments on the SemEval-2013 Task 13 WSI dataset, showing improvements over several strong baselines and task systems ( \u00a75).",
"cite_spans": [
{
"start": 272,
"end": 296,
"text": "(Heckerman et al., 2001)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also present unsupervised ways of enriching our dataset, including using neural word embeddings (Mikolov et al., 2013) and external Web-scale corpora to enrich the context of each data instance or to add more instances ( \u00a76). Each data enrichment method gives further gains, resulting in significant improvements over existing state-of-the-art WSI systems. Overall, we find gains of up to 22% relative improvement in fuzzy B-cubed and 50% relative improvement in fuzzy normalized mutual information (Jurgens and Klapaftis, 2013) .",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 502,
"end": 531,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We discuss the WSI task, then discuss several areas of research that are related to our approach, including applications of topic modeling to WSI as well as other approaches that use word embeddings and clustering algorithms. WSD and WSI: WSI is related to but distinct from word sense disambiguation (WSD). WSD seeks to assign a particular sense label to each target word instance, where the sense labels are known and usually drawn from an existing sense inventory like WordNet (Miller et al., 1990) . Although extensive research has been devoted to WSD, WSI may be more useful for downstream tasks. WSD relies on sense inventories whose construction is time-intensive, expensive, and subject to poor inter-annotator agreement (Passonneau et al., 2010) . Sense inventories also impose a fixed sense granularity for each ambiguous word, which may not match the ideal granularity for the task of interest. Finally, they may lack domain-specific senses and are difficult to adapt to low-resource domains or languages. In contrast, senses induced by WSI are more likely to represent the task and domain of interest. Researchers in machine translation and information retrieval have found that predefined senses are often not well-suited for these tasks (Voorhees, 1993; Carpuat and Wu, 2005) , while induced senses can lead to improved performance (V\u00e9ronis, 2004; Vickrey et al., 2005; Carpuat and Wu, 2007) .",
"cite_spans": [
{
"start": 480,
"end": 501,
"text": "(Miller et al., 1990)",
"ref_id": "BIBREF35"
},
{
"start": 729,
"end": 754,
"text": "(Passonneau et al., 2010)",
"ref_id": "BIBREF40"
},
{
"start": 1251,
"end": 1267,
"text": "(Voorhees, 1993;",
"ref_id": "BIBREF50"
},
{
"start": 1268,
"end": 1289,
"text": "Carpuat and Wu, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 1346,
"end": 1361,
"text": "(V\u00e9ronis, 2004;",
"ref_id": "BIBREF48"
},
{
"start": 1362,
"end": 1383,
"text": "Vickrey et al., 2005;",
"ref_id": "BIBREF49"
},
{
"start": 1384,
"end": 1405,
"text": "Carpuat and Wu, 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Topic Modeling for WSI: Brody and Lapata (2009) proposed a topic model that uses a weighted combination of separate LDA models based on different feature sets (e.g. word tokens, parts of speech, and dependency relations). They only used smaller units of text surrounding the ambiguous word, discarding the global context of each instance. Yao and Van Durme (2011) proposed a model based on a hierarchical Dirichlet process (HDP; Teh et al., 2006) , which has the advantage that it can automatically discover the number of senses. Lau et al. (2012) described a model based on an HDP with positional word features; it formed the basis for their submission (unimelb, Lau et al., 2013) to the SemEval-2013 WSI task (Jurgens and Klapaftis, 2013) .",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "Brody and Lapata (2009)",
"ref_id": "BIBREF8"
},
{
"start": 339,
"end": 363,
"text": "Yao and Van Durme (2011)",
"ref_id": "BIBREF51"
},
{
"start": 429,
"end": 446,
"text": "Teh et al., 2006)",
"ref_id": "BIBREF45"
},
{
"start": 530,
"end": 547,
"text": "Lau et al. (2012)",
"ref_id": "BIBREF29"
},
{
"start": 654,
"end": 681,
"text": "(unimelb, Lau et al., 2013)",
"ref_id": null
},
{
"start": 711,
"end": 740,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Our sense-topic model is distinct from this prior work in that we model sense and topic as two separate latent variables and learn them jointly. We compare to the performance of unimelb in \u00a75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "For word sense disambiguation, there also exist several approaches that use topic models (Cai et al., 2007; Li et al., 2010) ; space does not permit a full discussion.",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "(Cai et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 108,
"end": 124,
"text": "Li et al., 2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Word Representations for WSI: Another approach to solving WSI is to use word representations built by distributional semantic models (DSMs; Sahlgren, 2006) or neural net language models (NNLMs; Bengio et al., 2003; Mnih and Hinton, 2007) . Their assumption is that words with similar distributions have similar meanings. Akkaya et al. (2012) use word representations learned from DSMs directly for WSI. Each word is represented by a co-occurrence vector, and the meaning of an ambiguous word in a specific context is computed through element-wise multiplication applied to the vector of the target word and its surrounding words in the context. Then instances are clustered by hierarchical clustering based on their representations.",
"cite_spans": [
{
"start": 140,
"end": 155,
"text": "Sahlgren, 2006)",
"ref_id": "BIBREF43"
},
{
"start": 194,
"end": 214,
"text": "Bengio et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 215,
"end": 237,
"text": "Mnih and Hinton, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 321,
"end": 341,
"text": "Akkaya et al. (2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Word representations trained by NNLMs, often called word embeddings, capture information via training criteria based on predicting nearby words. They have been useful as features in many NLP tasks (Turian et al., 2010; Collobert et al., 2011; Dhillon et al., 2012; Hisamoto et al., 2013; Bansal et al., 2014) . The similarity between two words can be computed using cosine similarity of their embedding vectors. Word embeddings are often also used to build representations for larger units of text, such as sentences, through vector operations (e.g., summation) applied to the vector of each token in the sentence. In our work, we use word embeddings to compute word similarities (for better modeling of our data distribution), to represent sentences (to find similar sentences in external corpora for data enrichment), and in a product-of-embeddings baseline. Baskaya et al. (2013) represent the context of each ambiguous word by using the most likely substitutes according to a 4-gram LM. They pair the ambiguous word with likely substitutes, project the pairs onto a sphere (Maron et al., 2010) , and obtain final senses via k-means clustering. We compare to their SemEval-2013 system AI-KU ( \u00a75).",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF47"
},
{
"start": 219,
"end": 242,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 243,
"end": 264,
"text": "Dhillon et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 265,
"end": 287,
"text": "Hisamoto et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 288,
"end": 308,
"text": "Bansal et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 861,
"end": 882,
"text": "Baskaya et al. (2013)",
"ref_id": "BIBREF3"
},
{
"start": 1077,
"end": 1097,
"text": "(Maron et al., 2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Other Approaches to WSI: Other approaches include clustering algorithms to partition instances of an ambiguous word into sense-based clusters (Sch\u00fctze, 1998; Pantel and Lin, 2002; Purandare and Pedersen, 2004) , or graph-based methods to induce senses (Dorow and Widdows, 2003; V\u00e9ronis, 2004; Agirre and Soroa, 2007) .",
"cite_spans": [
{
"start": 142,
"end": 157,
"text": "(Sch\u00fctze, 1998;",
"ref_id": "BIBREF44"
},
{
"start": 158,
"end": 179,
"text": "Pantel and Lin, 2002;",
"ref_id": "BIBREF39"
},
{
"start": 180,
"end": 209,
"text": "Purandare and Pedersen, 2004)",
"ref_id": "BIBREF42"
},
{
"start": 252,
"end": 277,
"text": "(Dorow and Widdows, 2003;",
"ref_id": "BIBREF16"
},
{
"start": 278,
"end": 292,
"text": "V\u00e9ronis, 2004;",
"ref_id": "BIBREF48"
},
{
"start": 293,
"end": 316,
"text": "Agirre and Soroa, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we induce senses for a set of word types, which we refer to as target words. For each target word, we have a set of instances. Each instance provides context for a single occurrence of the target word. 1 For our experiments, we use the Figure 1 : Proposed sense-topic model in plate notation. There are M D instances for the given target word. In an instance, there are N g global context words (w g ) and N local context words (w ), all of which are observed. There is one latent variable (\"topic\" t g ) for the w g and two latent variables (\"topic\" t and \"sense\" s ) for the w . Each instance has topic mixing proportions \u03b8 t and sense mixing proportions \u03b8 s . For clarity, not all variables are shown. The complete figure with all variables is given in Appendix A. This is a dependency network, not a directed graphical model, as shown by the directed arrows between t and s ; see text for details. dataset released for SemEval-2013 Task 13 (Jurgens and Klapaftis, 2013), collected from the Open American National Corpus (OANC; Ide and Suderman, 2004). 2 It includes 50 target words: 20 verbs, 20 nouns, and 10 adjectives. There are a total of 4,664 instances across all target words. Each instance contains only one sentence, with a minimum length of 22 and a maximum length of 100. The gold standard for the dataset was prepared by multiple annotators, where each annotator labeled instances based on the sense inventories in WordNet 3.1. For each instance, they rated all senses of a target word on a Likert scale from one to five.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "3"
},
{
"text": "We now present our sense-topic model, shown in plate notation in Figure 1 . It generates the words in the set of instances for a single target word; we run the model separately for each target word, sharing no parameters across target words. We treat sense and topic as two separate latent variables to be inferred jointly. To differentiate sense and topic, we use a window around the target word in each instance. Word tokens inside the window are local context words (w ), while tokens outside the window are global context words (w g ). The number of words in the window is fixed to 21 in all experiments (10 words before the target word and 10 after).",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "Generating global context words: As shown in the left part of Figure 1 , each global context word w g is generated from a latent topic variable t g for the instance, which follows the same generative story as LDA. The corresponding probability of the ith global context word w",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(i) g within instance d is: 3 Pr(w (i) g |d,\u03b8 t , \u03c8 t ) = T j=1 P \u03c8t j (w (i) g |t (i) g = j)P \u03b8t (t (i) g = j|d)",
"eq_num": "(1)"
}
],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "where T is the number of topics,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "P \u03c8t j (w (i) g |t (i) g = j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "is the multinomial distribution over words for topic j (parameterized by \u03c8 t j ) and P \u03b8t (t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "(i) g = j|d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "is the multinomial distribution over topics for instance d (parameterized by \u03b8 t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "Generating local context words: A local context word w is generated from a topic variable t and a sense variable s :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(w |d, \u03b8 t , \u03c8 t , \u03b8 s , \u03c8 s , \u03b8 s|t , \u03b8 t|s , \u03b8 st ) = T j=1 S k=1 Pr(w |t = j, s = k) Pr(t = j, s = k|d)",
"eq_num": "(2)"
}
],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "where S is the number of senses, Pr(w |t = j, s = k) is the probability of generating word w given topic j and sense k, and Pr(t = j, s = k|d) is the joint probability over topics and senses for d. 4 Unlike in Eq. (1), we do not use multinomial parameterizations for the distributions in Eq. (2). When parameterizing them, we make several departures from purely-generative modeling. All our choices result in distributions over smaller event spaces and/or those that condition on fewer variables. This helps to mitigate data sparsity issues arising from attempting to estimate highdimensional distributions from small datasets. A secondary benefit is that we can avoid biases caused by particular choices of generative directionality in the model. We later include an empirical comparison to justify some of our modeling choices ( \u00a75).",
"cite_spans": [
{
"start": 198,
"end": 199,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "First, when relating the sense and topic variables, we avoid making a single decision about generative dependence. Taking inspiration from dependency networks (Heckerman et al., 2001) , we use the following factorization:",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "(Heckerman et al., 2001)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(t = j, s = k|d) = 1 Z d Pr(s = k|d, t = j) Pr(t = j|d, s = k)",
"eq_num": "(3)"
}
],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "where Z d is a normalization constant. We factorize further by using redundant probabilistic events, then ignore the normalization constants during learning, a concept commonly called deficiency (Brown et al., 1993) . Deficient modeling has been found to be useful for a wide range of NLP tasks (Klein and Manning, 2002; May and Knight, 2007; Toutanova and Johnson, 2007) . In particular, we factor the conditional probabilities in Eq. 3into products of multinomial probabilities:",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF9"
},
{
"start": 295,
"end": 320,
"text": "(Klein and Manning, 2002;",
"ref_id": "BIBREF28"
},
{
"start": 321,
"end": 342,
"text": "May and Knight, 2007;",
"ref_id": "BIBREF33"
},
{
"start": 343,
"end": 371,
"text": "Toutanova and Johnson, 2007)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "Pr(s = k|d, t = j) = P \u03b8s (s = k|d)P \u03b8 s|t j (s = k|t = j)P \u03b8st (t = j, s = k) Z d,tj Pr(t = j|d, s = k) = P \u03b8t (t = j|d)P \u03b8 t|s k (t = j|s = k) Z d,s k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "where Z d,t j and Z d,s k are normalization factors and we have introduced new multinomial parameters \u03b8 s , \u03b8 s|t j , \u03b8 st , and \u03b8 t|s k . We use the same idea to factor the word generation distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "Pr(w |t = j, s = k) = P \u03c8t j (w |t = j)P \u03c8s k (w |s = k) Z tj ,s k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "where Z t j ,s k is a normalization factor, and we have new multinomial parameters \u03c8 s k for the sense-word distributions. One advantage of this parameterization is that we naturally tie the topic-word distributions across the global and local context words by using the same parameters \u03c8 t j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sense-Topic Model for WSI",
"sec_num": "4"
},
{
"text": "We now give the full generative story of our model. We describe it for generating a set of instances of size M D , where all instances contain the same target word. We use symmetric Dirichlet priors for all multinomial distributions mentioned above, using the same fixed hyperparameter value (\u03b1) for all. We use \u03c8 to denote parameters of multinomial distributions over words, and \u03b8 to denote parameters of multinomial distributions over topics and/or senses. We leave unspecified the distributions over N (number of local words in an instance) and N g (number of global words in an instance), as we only use our model to perform inference given fixed instances, not to generate new instances. The generative story first follows the steps described in Algo. 1 to generate parameters that are shared across all instances; then for each instance d, it follows Algo. 2 to generate global and local words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "4.1"
},
{
"text": "Algorithm 1 Generative story for instance set 1: for each topic j \u2190 1 to T do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "4.1"
},
{
"text": "Choose topic-word params. \u03c8 tj \u223c Dir(\u03b1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "4.1"
},
{
"text": "Choose topic-sense params. \u03b8 s|tj \u223c Dir(\u03b1) 4: for each sense k \u2190 1 to S do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "Choose sense-word params. \u03c8 s k \u223c Dir(\u03b1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Choose sense-topic params. \u03b8 t|s k \u223c Dir(\u03b1) 7: Choose topic/sense params. \u03b8 st \u223c Dir(\u03b1) Algorithm 2 Generative story for instance d 1: Choose topic proportions \u03b8 t \u223c Dir(\u03b1) 2: Choose sense proportions \u03b8 s \u223c Dir(\u03b1) 3: Choose N g and N from unspecified distributions 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "for i \u2190 1 to N g do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Choose a topic j \u223c Mult(\u03b8 t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Choose a word w g \u223c Mult(\u03c8 tj )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "7: for i \u2190 1 to N do 8: repeat 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Choose a topic j \u223c Mult(\u03b8 t ) until w = w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "We use collapsed Gibbs sampling (Geman and Geman, 1984) to obtain samples from the posterior distribution over latent variables, with all multinomial parameters analytically integrated out before sampling. Then we estimate the sense distribution \u03b8 s for each instance using maximum likelihood estimation on the samples. These sense distributions are the output of our WSI system. We note that deficient modeling does not ordinarily affect Gibbs sampling when used for computing posteriors over latent variables, as long as parameters (the \u03b8 and \u03c8) are kept fixed. This is the case during the E step of an EM algorithm, which is the usual setting in which deficiency is used. Only the M step is affected; it becomes an approximate M step by assuming the normalization constants equal 1 (Brown et al., 1993) .",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Geman and Geman, 1984)",
"ref_id": "BIBREF20"
},
{
"start": 785,
"end": 805,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "However, here we use collapsed Gibbs sampling for posterior inference, and the analytic integration is disrupted by the presence of the normalization constants. To bypass this, we employ the standard approximation of deficient models that all normalization constants are 1, permitting us to use standard formulas for analytic integration of multinomial parameters with Dirichlet priors. Empirically, we found this \"collapsed deficient Gibbs sampler\" to slightly outperform a more principled approach based on EM, presumably due to the ability of collapsing to accelerate mixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "During the sampling process, each sampler is run on the full set of instances for a target word, iterating through all word tokens in each instance. If the current word token is a global context word, we sample a new topic for it conditioned on all other latent variables across instances. If the current word is a local context word, we sample a new topic/sense pair for it again conditioned on all other latent variable values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "We write the conditional posterior distribution over topics for global context word token i in instance d as Pr(t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "(i) g = j|d, t \u2212i , s, \u2022), where t (i) g = j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "is the topic assignment of token i, d is the current instance, t \u2212i is the set of topic assignments of all word tokens aside from i for instance d, s is the set of sense assignments for all local word tokens in instance d, and \"\u2022\" stands for all other observed or known information, including all words, all Dirichlet hyperparameters, and all latent variable assignments in other instances. The conditional posterior can be computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(t (i) g = j|d, t \u2212i , s, \u2022) \u221d C DT dj + \u03b1 T k=1 C DT dk + T \u03b1 Pr(t=j|d,t \u2212i ,s,\u2022) C W T ij + \u03b1 Wt k =1 C W T k j + W t \u03b1 Pr(w (i) g |t=j,t \u2212i ,s,\u2022)",
"eq_num": "(4)"
}
],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "where we use the superscript DT as a mnemonic for \"instance/topic\" when counting topic assignments in an instance and W T for \"word/topic\" when counting topic assignments for a word. C DT dj contains the number of times topic j is assigned to some word token in instance d, excluding the current word token w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "(i) g ; C W T ij is the number of times word w (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "g is assigned to topic j, across all instances, excluding the current word token. W t is the number of distinct word types in the full set of instances. We show the corresponding conditional posterior probabilities underneath each term; the count ratios are obtained using standard Dirichlet-multinomial collapsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
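As an illustration of the collapsed update in Eq. (4), the following sketch computes the conditional topic distribution for a single global context token from the count matrices. The variable names, matrix shapes, and toy counts are ours, not from the authors' implementation:

```python
import numpy as np

def topic_posterior(C_DT_d, C_WT, w, alpha):
    """Conditional topic distribution for one global context token w
    (Eq. 4): instance-topic ratio times word-topic ratio.
    C_DT_d: length-T topic counts in the current instance;
    C_WT: W_t x T word-topic counts (both exclude the current token)."""
    T = C_DT_d.shape[0]
    W_t = C_WT.shape[0]
    instance_term = (C_DT_d + alpha) / (C_DT_d.sum() + T * alpha)
    word_term = (C_WT[w] + alpha) / (C_WT.sum(axis=0) + W_t * alpha)
    p = instance_term * word_term
    return p / p.sum()  # normalize the proportional quantities

rng = np.random.default_rng(0)
C_DT_d = np.array([3.0, 1.0, 0.0])                    # toy counts, T = 3 topics
C_WT = rng.integers(0, 5, size=(6, 3)).astype(float)  # W_t = 6 word types
p = topic_posterior(C_DT_d, C_WT, w=2, alpha=0.01)
new_topic = rng.choice(3, p=p)                        # Gibbs draw for this token
```

In a full sampler this draw would be followed by incrementing the count matrices for the newly sampled topic.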
{
"text": "The conditional posterior distribution over topic/sense pairs for a local context word token w (i) can be computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(t (i) = j, s (i) = k|d, t \u2212i , s \u2212i , \u2022) \u221d C DT dj + \u03b1 T k =1 C DT dk + T \u03b1 Pr(t=j|d,t \u2212i ,s,\u2022) C W T ij + \u03b1 Wt k =1 C W T k j + W t \u03b1 Pr(w (i) |t=j,t \u2212i ,s,\u2022) C DS dk + \u03b1 S k =1 C DS dk + S\u03b1 Pr(s=k|d,s \u2212i ,\u2022) C W S ik + \u03b1 Ws k =1 C W S k k + W s \u03b1 Pr(w (i) |s=k,s \u2212i ,\u2022) C ST kj + \u03b1 S k =1 C ST k j + S\u03b1 Pr(s=k|t=j,t \u2212i ,s \u2212i ,\u2022) C ST kj + \u03b1 T k =1 C ST kk + T \u03b1 Pr(t=j|s=k,t \u2212i ,s \u2212i ,\u2022) C ST kj + \u03b1 S k =1 T j =1 C ST k j + ST \u03b1 Pr(s=k,t=j|t \u2212i ,s \u2212i ,\u2022)",
"eq_num": "(5)"
}
],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "where C DS dk contains the number of times sense k is assigned to some local word token in instance d, excluding the current word token; C W S ik contains the number of time word w (i) is assigned to sense k, excluding the current time; C ST kj contains the number of times sense k and topic j are assigned to some local word tokens. W s is the number of distinct local context word types across the collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
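The joint topic/sense step in Eq. (5) multiplies seven count ratios, including the three redundant sense-topic factors of the deficient parameterization. A minimal NumPy sketch with hypothetical counts and shapes (C_ST is S x T; none of these names come from the authors' code):

```python
import numpy as np

def sense_topic_posterior(C_DT_d, C_WT, C_DS_d, C_WS, C_ST, w, alpha):
    """Joint posterior over (topic, sense) pairs for one local context
    token w (Eq. 5). All counts exclude the current token.
    Returns a normalized T x S matrix."""
    T, S = C_DT_d.shape[0], C_DS_d.shape[0]
    W_t, W_s = C_WT.shape[0], C_WS.shape[0]
    t_given_d = (C_DT_d + alpha) / (C_DT_d.sum() + T * alpha)         # Pr(t|d)
    w_given_t = (C_WT[w] + alpha) / (C_WT.sum(axis=0) + W_t * alpha)  # Pr(w|t)
    s_given_d = (C_DS_d + alpha) / (C_DS_d.sum() + S * alpha)         # Pr(s|d)
    w_given_s = (C_WS[w] + alpha) / (C_WS.sum(axis=0) + W_s * alpha)  # Pr(w|s)
    # three redundant parameterizations of the sense-topic link (deficient model)
    s_given_t = (C_ST + alpha) / (C_ST.sum(axis=0) + S * alpha)                 # S x T
    t_given_s = (C_ST + alpha) / (C_ST.sum(axis=1, keepdims=True) + T * alpha)  # S x T
    s_and_t = (C_ST + alpha) / (C_ST.sum() + S * T * alpha)                     # S x T
    link = (s_given_t * t_given_s * s_and_t).T                                  # T x S
    joint = (t_given_d * w_given_t)[:, None] * (s_given_d * w_given_s)[None, :] * link
    return joint / joint.sum()

rng = np.random.default_rng(1)
joint = sense_topic_posterior(
    C_DT_d=np.array([2.0, 1.0]), C_DS_d=np.array([1.0, 2.0, 0.0]),
    C_WT=rng.integers(0, 4, (5, 2)).astype(float),
    C_WS=rng.integers(0, 4, (5, 3)).astype(float),
    C_ST=rng.integers(0, 4, (3, 2)).astype(float), w=0, alpha=0.01)
flat = rng.choice(joint.size, p=joint.ravel())   # sample one (topic, sense) pair
t_new, s_new = divmod(flat, joint.shape[1])
```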
{
"text": "Decoding After the sampling process, we obtain a fixed-point estimate of the sense distribution (\u03b8 s ) for each instance d using the counts from our samples. Where we use \u03b8 k s to denote the probability of sense k for the instance, this amounts to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "\u03b8 k s = C DS dk S k =1 C DS dk (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "This distribution is considered the final sense assignment distribution for the target word in instance d for the WSI task; the full distribution is fed to the evaluation metrics defined in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "To inspect what the model learned, we similarly obtain the sense-word distribution (\u03c8 s ) from the counts as follows, where \u03c8 i s k is the probability of word type i given sense k:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
{
"text": "\u03c8 i s k = C W S ik Ws i =1 C W S i k (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.2"
},
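Eqs. (6) and (7) are simple count normalizations over the collected samples; a sketch with hypothetical counts:

```python
import numpy as np

# Hypothetical sense-assignment counts collected after sampling:
C_DS_d = np.array([8.0, 1.0, 3.0])   # sense counts for one instance d (S = 3)
C_WS = np.array([[4.0, 0.0, 1.0],    # W_s = 3 local word types x S = 3 senses
                 [3.0, 1.0, 0.0],
                 [1.0, 0.0, 2.0]])

theta_s = C_DS_d / C_DS_d.sum()   # Eq. (6): sense distribution for instance d
psi_s = C_WS / C_WS.sum(axis=0)   # Eq. (7): each column is one sense's word dist.
```

theta_s is the per-instance sense distribution fed to the evaluation metrics; each column of psi_s can be sorted to read off a sense's top words.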
{
"text": "In this section, we evaluate our sense-topic model and compare it to several strong baselines and stateof-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Evaluation Metrics To evaluate WSI systems, Jurgens and Klapaftis (2013) propose two metrics: fuzzy B-cubed and fuzzy normalized mutual information (NMI). They are each computed separately for each target word, then averaged across target words. Fuzzy B-cubed prefers labeling all instances with the same sense, while fuzzy NMI prefers the opposite extreme of labeling all instances with distinct senses. Hence, we report both fuzzy B-cubed (%) and fuzzy NMI (%) in our evaluation. For ease of comparison, we also report the geometric mean of the 2 metrics, which we denote by AVG. 5 SemEval-2013 Task 13 also provided a trial dataset (TRIAL) that consists of eight target ambiguous words, each with 50 instances . We use it for preliminary experiments of our model and for tuning certain hyperparameters, and evaluate final performance on the SemEval-2013 dataset (TEST) with 50 target words. Hyperparameter Tuning We use TRIAL to analyze performance of our sense-topic model under different settings for the numbers of senses (S) and topics (T ); see Table 1 . We always set T = 2S for simplicity. We find that small S values work best, which is unsurprising considering the relatively small number of instances and small size of each instance. When evaluating on TEST, we use S = 3 (which gives the best AVG results on TRIAL). Later, when we add larger context or more instances (see \u00a76), tuning on TRIAL chooses a larger S value. During inference, the Gibbs sampler was run for 4,000 iterations for each target word, setting the first 500 iterations as the burn-in period. In order to get a representative set of samples, every 13th sample (after burn-in) is saved to prevent correlations among samples. Due to the randomized nature of the inference procedure, all reported results are average scores over 5 runs. 
The hyperparameters (\u03b1) for all Dirichlet priors in our model are set to the (untuned) value of 0.01, following prior work on topic modeling (Griffiths and Steyvers, 2004; Heinrich, 2005) .",
"cite_spans": [
{
"start": 582,
"end": 583,
"text": "5",
"ref_id": null
},
{
"start": 1959,
"end": 1989,
"text": "(Griffiths and Steyvers, 2004;",
"ref_id": "BIBREF21"
},
{
"start": 1990,
"end": 2005,
"text": "Heinrich, 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1053,
"end": 1060,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
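The AVG score is simply the geometric mean of the two metrics (an arithmetic mean is avoided because their effective ranges differ). For example, applying it to the best-setting scores the paper reports in its conclusion (59.1% fuzzy B-cubed, 9.39% fuzzy NMI):

```python
from math import sqrt

def avg_score(fuzzy_bcubed, fuzzy_nmi):
    """Geometric mean of fuzzy B-cubed and fuzzy NMI (both given in %)."""
    return sqrt(fuzzy_bcubed * fuzzy_nmi)

best_setting_avg = avg_score(59.1, 9.39)  # paper's best-setting scores
```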
{
"text": "Baselines We include two na\u00efve baselines corresponding to the two extremes (biases) preferred by fuzzy B-cubed and NMI, respectively: 1 sense (label each instance with the same single sense) and all distinct (label each instance with its own sense).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "We also consider two baselines based on LDA. We run LDA for each target word in TEST, using the set of instances as the set of documents. We treat the learned topics as induced senses. When setting the number of topics (senses), we use the gold-standard number of senses for each target word, making this baseline unreasonably strong. We run LDA both with full context (FULL) and local context (LOCAL), using the same window size as above (10 words before and after the target word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "We also present results for the two best systems in the SemEval-2013 task (according to fuzzy Bcubed and fuzzy NMI, respectively): unimelb and AI-KU. As described in Section 2, unimelb uses hierarchical Dirichlet processes (HDPs). It extracts 50,000 extra instances for each target word as training data from the ukWac corpus\u2212a web corpus of approximately 2 billion tokens. 6 Among all systems in the task, it performs best according to fuzzy Bcubed. AI-KU is based on a lexical substitution method; a language model is built to identify lexical substitutes for target words from the dataset and the ukWac corpus. It performed best among all systems according to fuzzy NMI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "In Table 2 , we present results for these systems and compare them to our basic (i.e., without any data enrichment) sense-topic model with S = 3 (row 9). According to both fuzzy B-cubed and fuzzy NMI, our model outperforms the other WSI systems (LDA, AI-KU, and unimelb). Hence, we are able to achieve state-of-the-art results on the SemEval-2013 task even when only using the single sentence of context given in each instance (while AI-KU and unimelb use large training sets from ukWac). We found similar performance improvements when only tested on instances labeled with a single sense.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "To measure the impact of the bidirectional dependency between the topic and sense variables in our model, we also evaluate the performance of our sense-topic model when dropping one of the directions. In Table 3 , we compare their performance with our full sense-topic model on TEST. Both unidirectional models perform worse than the full model, and dropping t \u2192 s hurts more. This result verifies our intuition that topics would help narrow down the set of likely senses, and suggests that bidirectional modeling between topic and sense is desirable for WSI.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Bidirectionalilty Analysis",
"sec_num": null
},
{
"text": "In subsequent sections, we investigate several ways of exploiting additional data to build betterperforming sense-topic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectionalilty Analysis",
"sec_num": null
},
{
"text": "The primary signal used by our model is word cooccurrence information across instances. If we en- rich the instances, we can have more robust cooccurrence statistics. The SemEval-2013 dataset may be too small to induce meaningful senses, since there are only about 100 instances for each target word, and each instance only contains one sentence. This is why most shared task systems added instances from external corpora. In this section, we consider three unsupervised ways of enriching data and measure their impact on performance. In \u00a76.1 we augment the context of each instance in our original dataset while keeping the number of instances fixed. In \u00a76.2 we collect more instances of each target word from ukWac, similar to the AI-KU and unimelb systems. In \u00a76.3, we change the distribution of words in each instance based on their similarity to the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Data Enrichment",
"sec_num": "6"
},
{
"text": "Throughout, we make use of word embeddings (see \u00a72). We trained 100-dimensional skip-gram vectors (Mikolov et al., 2013) on English Wikipedia (tokenized/lowercased, resulting in 1.8B tokens of text) using window size 10, hierarchical softmax, and no downsampling. 7 7 We used a minimum count cutoff of 20 during training,",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 266,
"end": 267,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Data Enrichment",
"sec_num": "6"
},
{
"text": "The first way we explore of enriching data is to add a broader context for each instance while keeping the number of instances unchanged. This will introduce more word tokens into the set of global context words, while keeping the set of local context words mostly unchanged, as the window size we use is typically smaller than the length of the original instance. With more global context words, the model has more evidence to learn coherent topics, which could also improve the induced senses via the connection between sense and topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "The ideal way of enriching context for an instance is to add its actual context from the corpus from which it was extracted. To do this for the SemEval-2013 task, we find each instance in the OANC and retrieve three sentences before the instance and three sentences after. While not provided for the SemEval task, it is reasonable to assume this larger context in many real-world applications, such as information retrieval and machine translation of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "However, in other settings, the corpus may only have a single sentence containing the target word (e.g., search queries or machine translation of sentences). To address this, we find a semanticallysimilar sentence from the English ukWac corpus and append it to the instance as additional context. For each instance in the original dataset, we extract its then only retained vectors for the most frequent 100,000 word types, averaging the rest to get a vector for unknown words. most similar sentence that contains the same target word and add it to increase its set of global context words. To compute similarity, we first represent instances and ukWac sentences by summing the word embeddings across their word tokens, then compute cosine similarity. The ukWac sentence (s * ) with the highest cosine similarity to each original instance (d) is appended to that instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "s * = arg max s\u2208ukWac sim(d, s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
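The retrieval step above (sum the embeddings of each sentence's tokens, then take the argmax of cosine similarity) can be sketched as follows; the toy vocabulary, random embeddings, and two-sentence candidate pool stand in for the real Wikipedia vectors and ukWac sentences:

```python
import numpy as np

DIM = 4  # toy dimensionality; the paper uses 100-dimensional skip-gram vectors
rng = np.random.default_rng(0)
vocab = ["the", "image", "mental", "picture", "president", "coup"]
emb = {w: rng.standard_normal(DIM) for w in vocab}

def sentence_vector(tokens, emb):
    """Sum of word embeddings over a token list (OOV tokens skipped)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.sum(vecs, axis=0) if vecs else np.zeros(DIM)

def most_similar_sentence(instance_tokens, candidates, emb):
    """s* = argmax over candidate sentences of cosine similarity
    to the instance vector, as in the equation above."""
    d = sentence_vector(instance_tokens, emb)
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else -1.0
    return max(candidates, key=lambda s: cos(d, sentence_vector(s, emb)))

instance = ["the", "mental", "image"]
candidates = [["the", "president", "coup"], ["mental", "picture", "image"]]
best = most_similar_sentence(instance, candidates, emb)
```

At ukWac scale this exhaustive argmax would be restricted to sentences containing the target word, as the paper does.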
{
"text": "Results Since the vocabulary has increased, we expect we may need larger values for S and T . On TRIAL, we find best performance for S = 10, so we run on TEST with this value. Performance is shown in Table 2 (rows 10 and 11). These two methods have higher AVG scores than all others. Both their fuzzy B-cubed and NMI improvements over the baselines and previous WSI systems are statistically significant, as measured by a paired bootstrap test (p < 0.01; Efron and Tibshirani, 1994) .",
"cite_spans": [
{
"start": 455,
"end": 482,
"text": "Efron and Tibshirani, 1994)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 200,
"end": 207,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "It is unsurprising that we find best performance with actual context. Interestingly, however, we can achieve almost the same gains when automatically finding relevant context from a different corpus. Thus, even in real-world settings where we only have a single sentence of context, we can induce substantially better senses by automatically broadening the global context in an unsupervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "As a comparative experiment, we also evaluate the performance of LDA when adding actual context (Table 2, row 7). Compared with LDA with full context (FULL) in row 6, performance is slightly improved, perhaps due to the fact that longer contexts induce more accurate topics. However, those topics are not necessarily related to senses, which is why LDA with only local context actually performs best among all three LDA models. Thus we see that merely adding context does not necessarily help topic models for WSI. Importantly, since our model includes both sense and topic, we are able to leverage the additional context to learn better topics while also improving the quality of the induced senses, leading to our strongest results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "Examples We present examples to illustrate our sense-topic model's advantage over LDA and the further improvement when adding actual context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "Consider instances (1) and (2) below, with target word occurrences in bold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "(1) Nigeria then sent troops to challenge the coup, evidently to restore the president and repair Nigeria's corrupt image abroad. (image%1:07:01::/4) 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "(2) When asked about the Bible's literal account of creation, as opposed to the attractive concept of divine creation, every major Republican presidential candidate-even Bauer-has squirmed, ducked, and tried to steer the discussion back to \"faith,\" \"morals,\" and the general idea that humans \"were created in the image of God.\" (image%1:06:00::/2 image%1:09:02::/4) Both instances share the common word stem president. LDA uses this to put these two instances into the same topic (i.e., sense). In our sense-topic model, president is a local context word in instance (1) but a global context word in instance (2). So the effect of sharing words is decreased, and these two instances are assigned to different senses by our model. According to the gold standard, the two instances are annotated with different senses, so our sense-topic model provides the correct prediction. Next, consider instances (3), (4), and (5):(3) I have recently deliberately begun to use variations of \"kick ass\" and \"bites X in the ass\" because they are colorful, evocative phrases; because, thanks to South Park, ass references are newly familiar and hilarious and because they don't evoke particularly vivid mental image of asses any longer. (im-age%1:09:00::/4) (4) Also, playing video games that require rapid mental rotation of visual image enhances the spatial test scores of boys and girls alike. (image%1:06:00::/4) (5) Practicing and solidifying modes of representation, Piaget emphasized, make it possible for the child to free thought from the here and now; create larger images of reality that take into account past, present, and future; and transform those image mentally in the service of logical thinking. (im-age%1:09:00::/4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "In the gold standard, instances (3) and (4) have different senses while (3) and (5) have the same sense. However, sharing the local context word \"mental\" triggers both LDA and our sense-topic model to assign them to the same sense label with high probability. When augmenting the instances by their real contexts, we have a better understanding about the topics. Instance (3) is about phrase variations, instance (4) is about enhancing boys' spatial skills, while instance (5) discusses the effect of makebelieve play for children's development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "When LDA is run with the actual context, it leaves (4) and (5) in the same topic (i.e., sense), while assigning (3) into another topic with high probability. This could be because (4) and (5) both relate to child development, and therefore LDA considers them as sharing the same topic. However, topic is not the same as sense, especially when larger contexts are available. Our sense-topic model built on the actual context makes correct predictions, leaving (3) and (5) into the same sense cluster while labeling (4) with a different sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Context",
"sec_num": "6.1"
},
{
"text": "We also consider a way to augment our dataset with additional instances from an external corpus. We have no gold standard senses for these instances, so we will not evaluate our model on them; they are merely used to provide richer co-occurrence statistics about the target word so that we can perform better on the instances on which we evaluate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Instances",
"sec_num": "6.2"
},
{
"text": "If we added randomly-chosen instances (containing the target word), we would be concerned that the learned topics and senses may not reflect the distributions of the original instance set. So we only add instances that are semantically similar to instances in our original set (Moore and Lewis, 2010; Chambers and Jurafsky, 2011) . Also, to avoid changing the original sense distribution by adding too many instances, we only add a single instance for each original instance. As in \u00a76.1, for each instance in the original dataset, we find the most similar sentence in ukWac for each instance using word embeddings and add it into the dataset. Therefore, the number of instances is doubled, and we use the enriched dataset for our sense-topic model.",
"cite_spans": [
{
"start": 277,
"end": 300,
"text": "(Moore and Lewis, 2010;",
"ref_id": "BIBREF37"
},
{
"start": 301,
"end": 329,
"text": "Chambers and Jurafsky, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Instances",
"sec_num": "6.2"
},
{
"text": "Results Similarly to \u00a76.1, on TRIAL, we find best performance for S = 10, so we run on TEST with this value. As shown in Table 2 (row 12), this improves fuzzy B-cubed by 5.4%, but fuzzy NMI is lower, making the AVG worse than the original model. A possible reason for this is that the sense distribution in the added instances disturbs that in the original set of instances, even though we picked the most semantically similar ones to add.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Adding Instances",
"sec_num": "6.2"
},
{
"text": "Another approach is inspired by the observation that each local context token is treated equally in terms of its contribution to the sense. However, our intuition is that certain tokens are more indicative than others. Consider the target word window. Since glass evokes a particular sense of window, we would like to weight it more highly than, say, day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
{
"text": "To measure word relatedness, we use cosine similarity of word embeddings. We (softly) replicate each local context word according to its exponentiated cosine similarity to the target word. 9 The result is that the local context in each instance has been modified to contain fewer occurrences of unrelated words and more occurrences of related words. If each cosine similarity is 0, we obtain our original sense-topic model. During inference, the posterior sense distribution for instance d is now given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
{
"text": "Pr(s = k|d, \u2022) = w\u2208d exp(sim(w, w * ))1 sw=k + \u03b1 w \u2208d exp(sim(w , w * )) + S\u03b1 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
{
"text": "where d is the set of local context tokens in d, sim(w, w * ) is the cosine similarity between w and target word w * , and 1 sw=k is an indicator returning 1 when w is assigned to sense k and 0 otherwise. The posterior distribution of sampling a token of word w i from sense k becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C W S ik exp(sim(w i , w * )) + \u03b1 Ws i =1 C W S i k exp(sim(w i , w * )) + W s \u03b1",
"eq_num": "(9)"
}
],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
{
"text": "where C W S ik counts the number of times w i is assigned to sense k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting by Word Similarity",
"sec_num": "6.3"
},
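A sketch of how the soft replication in Eq. (8) reweights the per-instance sense posterior; the token similarities and sense assignments below are toy values, not from the paper:

```python
import numpy as np

def weighted_sense_posterior(senses, sims, S, alpha, k):
    """Eq. (8): posterior probability of sense k for an instance whose
    local tokens are soft-replicated by exp(cosine similarity to the
    target word). `senses` holds the current sense assignment of each
    local token; `sims` the cosine similarities (in [-1, 1])."""
    w = np.exp(np.asarray(sims))                    # one weight per local token
    num = w[np.asarray(senses) == k].sum() + alpha  # weighted count for sense k
    den = w.sum() + S * alpha
    return num / den

# Toy instance with 4 local tokens:
sims = [0.7, 0.1, -0.2, 0.5]   # cos(w, target word)
senses = [0, 1, 0, 2]          # current sense assignments
p = [weighted_sense_posterior(senses, sims, S=3, alpha=0.01, k=k)
     for k in range(3)]
```

Because exponentiation keeps every weight positive, the weighted counts behave like ordinary Gibbs counts, and the S values of p still sum to 1.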
{
"text": "We again use TRIAL to tune S (and still use T = 2S). We find best TRIAL performance at S = 3; this is unsurprising since this approach does not change the vocabulary. In Table 2 , we present results on TEST with S = 3 (row 13). We also report an additional baseline: \"word embedding product\" (row 8), where we represent each instance by multiplying (element-wise) the word vectors of all local context words, and then feed the instance vectors into the fuzzy c-means clustering algorithm (Pal and Bezdek, 1995) , c = 3. Compared to this baseline, our approach improves 4.36% on average; compared with results for the original sense-topic model (row 9), this approach improves 0.69% on average. In Table 4 we show the top-5 terms for each sense induced for image, both for the original sense-topic model and when additionally weighting by similarity. We find that the original model provides less distinguishable senses, as it is difficult to derive separate senses from these top terms. In contrast, senses learned from the model with weighted similarities are more distinct. Sense 1 relates to mental representation; sense 2 is about visual representation produced on a surface; and sense 3 is about the general impression that something presents to the public.",
"cite_spans": [
{
"start": 488,
"end": 510,
"text": "(Pal and Bezdek, 1995)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 697,
"end": 704,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "We presented a novel sense-topic model for the problem of word sense induction. We considered sense and topic as distinct latent variables, defining a model that generates global context words using topic variables and local context words using both topic and sense variables. Sense and topic are related using a bidirectional dependency with a robust parameterization based on deficient modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "We explored ways of enriching data using word embeddings from neural language models and external corpora. We found enriching context to be most effective, even when the original context of the instance is not available. Evaluating on the SemEval-2013 WSI dataset, we demonstrate that our model yields significant improvements over current stateof-the-art systems, giving 59.1% fuzzy B-cubed and 9.39% fuzzy NMI in our best setting. Moreover, we find that modeling both sense and topic is critical to enable us to effectively exploit broader context, showing that LDA does not improve when each instance is enriched by actual context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In future work, we plan to further explore the space of sense-topic models, including non-deficient models. One possibility is to use \"switching variables\" (Paul and Girju, 2009) to choose whether to generate each word from a topic or sense, with a stronger preference to generate from senses closer to the target word. Another possibility is to use locallynormalized log-linear distributions and include features pairing words with particular senses and topics, rather than redundant generative steps.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Paul and Girju, 2009)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 3, pp. 59-71, 2015. Action Editor: Hwee Tou Ng. Submission batch: 10/2014; Revision batch 12/2014; Revision batch 1/2015; Published 1/2015. c 2015 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The target word token may occur multiple times in an instance, but only one occurrence is chosen as the target word occurrence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"Word Sense Induction for Graded and Non-Graded Senses,\" http://www.cs.york.ac.uk/ semeval-2013/task13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use Pr() for generic probability distributions without further qualifiers and P \u03b8 () for distributions parameterized by \u03b8.4 For clarity, we drop the (i) superscripts in these and the following equations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not use an arithmetic mean because the effective range of the two metrics is substantially different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://wacky.sslmit.unibo.it/doku.php? id=corpora",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is the gold standard sense label, where im-age%1:07:01:: indexes the wordnet senses, and 4 is the score assigned by the annotators.The possible range of a score is[1,5].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Cosine similarities range from -1 to 1, so we use exponentiation to ensure we always use positive counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the editor and the anonymous reviewers for their helpful comments. This research was partially supported by NIH LM010817. The opinions expressed in this work are those of the authors and do not necessarily reflect the views of the funding agency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The plate diagram for the complete sense-topic model is shown in Figure 2 . 2: Plate notation for the proposed sense-topic model with all variables (except \u03b1, the fixed Dirichlet hyperparameter used as prior for all multinomial distributions). Each instance has topic mixing proportions \u03b8 t and sense mixing proportions \u03b8 s . The instance set shares sense/topic parameter \u03b8 st , topic-sense distribution \u03b8 s|t , sense-topic distribution \u03b8 t|s , topic-word distribution \u03c8 t , and sense-word distribution \u03c8 s .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2007 Task 02: Evaluating word sense induction and discrimination systems",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of SemEval",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Agirre and A. Soroa. 2007. SemEval-2007 Task 02: Evaluating word sense induction and discrimination systems. In Proc. of SemEval, pages 7-12.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Utilizing semantic composition in distributional semantic models for word sense discrimination and word sense disambiguation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Akkaya",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ICSC",
"volume": "",
"issue": "",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Akkaya, J. Wiebe, and R. Mihalcea. 2012. Utilizing semantic composition in distributional semantic mod- els for word sense discrimination and word sense dis- ambiguation. In Proc. of ICSC, pages 45-51.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bansal, K. Gimpel, and K. Livescu. 2014. Tailoring continuous word representations for dependency pars- ing. In Proc. of ACL, pages 809-815.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "AI-KU: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation",
"authors": [
{
"first": "O",
"middle": [],
"last": "Baskaya",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of SemEval",
"volume": "",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Baskaya, E. Sert, V. Cirik, and D. Yuret. 2013. AI- KU: Using substitute vectors and co-occurrence mod- eling for word sense induction and disambiguation. In Proc. of SemEval, pages 300-306.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137-1155.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. La- tent Dirichlet allocation. J. Mach. Learn. Res., 3:993- 1022.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PUTOP: Turning predominant senses into a topic model for word sense disambiguation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of SemEval",
"volume": "",
"issue": "",
"pages": "277--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Boyd-Graber and D. M. Blei. 2007. PUTOP: Turning predominant senses into a topic model for word sense disambiguation. In Proc. of SemEval, pages 277-281.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A topic model for word sense disambiguation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1024--1033",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Boyd-Graber, D. M. Blei, and X. Zhu. 2007. A topic model for word sense disambiguation. In Proc. of EMNLP-CoNLL, pages 1024-1033.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bayesian word sense induction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Brody and M. Lapata. 2009. Bayesian word sense induction. In Proc. of EACL, pages 103-111.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computa- tional Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving word sense disambiguation using topic features",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Cai",
"suffix": ""
},
{
"first": "W",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1015--1023",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. F. Cai, W. S. Lee, and Y. W. Teh. 2007. Improving word sense disambiguation using topic features. In Proc. of EMNLP-CoNLL, pages 1015-1023.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word sense disambiguation vs. statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "387--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Carpuat and D. Wu. 2005. Word sense disambigua- tion vs. statistical machine translation. In Proc. of ACL, pages 387-394.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving statistical machine translation using word sense disambiguation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Carpuat and D. Wu. 2007. Improving statistical ma- chine translation using word sense disambiguation. In Proc. of EMNLP-CoNLL, pages 61-72.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Template-based information extraction without the templates",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "976--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Chambers and D. Jurafsky. 2011. Template-based information extraction without the templates. In Proc. of ACL, pages 976-986.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural lan- guage processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Two Step CCA: A new spectral method for estimating vector models of words",
"authors": [
{
"first": "P",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rodu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2012,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "1551--1558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Dhillon, J. Rodu, D. Foster, and L. Ungar. 2012. Two Step CCA: A new spectral method for estimating vec- tor models of words. In ICML, pages 1551-1558.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discovering corpusspecific word senses",
"authors": [
{
"first": "B",
"middle": [],
"last": "Dorow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "79--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Dorow and D. Widdows. 2003. Discovering corpus- specific word senses. In Proc. of EACL, pages 79-82.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "B",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "57",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Efron and R. J. Tibshirani. 1994. An introduction to the bootstrap, volume 57. CRC press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Graded word sense assignment",
"authors": [
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "440--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Erk and D. McCarthy. 2009. Graded word sense as- signment. In Proc. of EMNLP, pages 440-449.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Investigations on word senses and word usages",
"authors": [
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Gaylord",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Erk, D. McCarthy, and N. Gaylord. 2009. Investi- gations on word senses and word usages. In Proc. of ACL, pages 10-18.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images",
"authors": [
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "6",
"issue": "6",
"pages": "721--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Geman and D. Geman. 1984. Stochastic relax- ation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6(6):721-741.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Finding scientific topics",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the National Academy of Sciences of the United States of America",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. L. Griffiths and M. Steyvers. 2004. Finding scien- tific topics. Proc. of the National Academy of Sciences of the United States of America, 101(Suppl 1):5228- 5235.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dependency networks for inference, collaborative filtering, and data visualization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Chickering",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rounthwaite",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kadie",
"suffix": ""
}
],
"year": 2001,
"venue": "J. Mach. Learn. Res",
"volume": "1",
"issue": "",
"pages": "49--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Heckerman, D. M. Chickering, C. Meek, R. Roun- thwaite, and C. Kadie. 2001. Dependency networks for inference, collaborative filtering, and data visual- ization. J. Mach. Learn. Res., 1:49-75.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Parameter estimation for text analysis",
"authors": [
{
"first": "G",
"middle": [],
"last": "Heinrich",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Heinrich. 2005. Parameter estimation for text analy- sis. Technical report.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An empirical investigation of word representations for parsing the web",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hisamoto",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2013,
"venue": "ANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hisamoto, K. Duh, and Y. Matsumoto. 2013. An em- pirical investigation of word representations for pars- ing the web. In ANLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The American National Corpus first release",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Suderman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "1681--1684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Ide and K. Suderman. 2004. The American National Corpus first release. In Proc. of LREC, pages 1681- 1684.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SemEval-2013 Task 13: Word sense induction for graded and non-graded senses",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of SemEval",
"volume": "",
"issue": "",
"pages": "290--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jurgens and I. Klapaftis. 2013. SemEval-2013 Task 13: Word sense induction for graded and non-graded senses. In Proc. of SemEval, pages 290-299.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Embracing ambiguity: A comparison of annotation methodologies for crowdsourcing word sense labels",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "556--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jurgens. 2013. Embracing ambiguity: A comparison of annotation methodologies for crowdsourcing word sense labels. In Proc. of NAACL, pages 556-562.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2002. A generative constituent-context model for improved grammar in- duction. In Proc. of ACL, pages 128-135.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Word sense induction for novel sense detection",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Lau",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "591--601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. H. Lau, P. Cook, D. McCarthy, D. Newman, and T. Baldwin. 2012. Word sense induction for novel sense detection. In Proc. of EACL, pages 591-601.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "unimelb: Topic modelling-based word sense induction",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Lau",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of SemEval",
"volume": "",
"issue": "",
"pages": "307--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. H. Lau, P. Cook, and T. Baldwin. 2013. unimelb: Topic modelling-based word sense induction. In Proc. of SemEval, pages 307-311.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Topic models for word sense disambiguation and token-based idiom detection",
"authors": [
{
"first": "L",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "1138--1147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Li, B. Roth, and C. Sporleder. 2010. Topic models for word sense disambiguation and token-based idiom detection. In Proc. of ACL, pages 1138-1147.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Sphere embedding: An application to part-of-speech induction",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Maron",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bienenstock",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "James",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in NIPS 23",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Maron, E. Bienenstock, and M. James. 2010. Sphere embedding: An application to part-of-speech induc- tion. In Advances in NIPS 23.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Syntactic re-alignment models for machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "360--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. May and K. Knight. 2007. Syntactic re-alignment models for machine translation. In Proc. of EMNLP- CoNLL, pages 360-368.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. In Proc. of ICLR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "WordNet: An on-line lexical database",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "K",
"middle": [
"J"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Three new graphical models for statistical language modelling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Mnih and G. Hinton. 2007. Three new graphical models for statistical language modelling. In Proc. of ICML, pages 641-648.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Intelligent selection of language model training data",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "220--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. C. Moore and W. Lewis. 2010. Intelligent selection of language model training data. In Proc. of ACL, pages 220-224.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "On cluster validity for the fuzzy c-means model",
"authors": [
{
"first": "N",
"middle": [
"R"
],
"last": "Pal",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Bezdek",
"suffix": ""
}
],
"year": 1995,
"venue": "Trans. Fuz Sys",
"volume": "3",
"issue": "",
"pages": "370--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. R. Pal and J. C. Bezdek. 1995. On cluster validity for the fuzzy c-means model. Trans. Fuz Sys., 3:370-379.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Discovering word senses from text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of KDD",
"volume": "",
"issue": "",
"pages": "613--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Pantel and D. Lin. 2002. Discovering word senses from text. In Proc. of KDD, pages 613-619.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Word sense annotation of polysemous words by multiple annotators",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Salleb-Aoussi",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Bhardwaj",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Passonneau, A. Salleb-Aoussi, V. Bhardwaj, and N. Ide. 2010. Word sense annotation of polysemous words by multiple annotators. In Proc. of LREC.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Cross-cultural analysis of blogs and forums with mixed-collection topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "1408--1417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Paul and R. Girju. 2009. Cross-cultural analysis of blogs and forums with mixed-collection topic models. In Proc. of EMNLP, pages 1408-1417.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Word sense discrimination by clustering contexts in vector and similarity spaces",
"authors": [
{
"first": "A",
"middle": [],
"last": "Purandare",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Purandare and T. Pedersen. 2004. Word sense dis- crimination by clustering contexts in vector and simi- larity spaces. In Proc. of CoNLL, pages 41-48.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "The word-space model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in highdimensional vector spaces",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Sahlgren. 2006. The word-space model: Us- ing distributional analysis to represent syntagmatic and paradigmatic relations between words in high- dimensional vector spaces. Ph.D. dissertation, Stock- holm University.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Comput. Linguist",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Sch\u00fctze. 1998. Automatic word sense discrimination. Comput. Linguist., 24(1):97-123.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the Amer- ican Statistical Association, 101:1566-1581.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A Bayesian LDAbased model for semi-supervised part-of-speech tagging",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in NIPS 20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Toutanova and M. Johnson. 2007. A Bayesian LDA- based model for semi-supervised part-of-speech tag- ging. In Advances in NIPS 20.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Word representations: A simple and general method for semisupervised learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Turian, L. Ratinov, and Y. Bengio. 2010. Word rep- resentations: A simple and general method for semi- supervised learning. In Proc. of ACL, pages 384-394.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Hyperlex: lexical cartography for information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "V\u00e9ronis",
"suffix": ""
}
],
"year": 2004,
"venue": "Computer Speech & Language",
"volume": "18",
"issue": "3",
"pages": "223--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. V\u00e9ronis. 2004. Hyperlex: lexical cartography for in- formation retrieval. Computer Speech & Language, 18(3):223-252.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Word-sense disambiguation for machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Vickrey",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Biewald",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Teyssier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "771--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Vickrey, L. Biewald, M. Teyssier, and D. Koller. 2005. Word-sense disambiguation for machine translation. In Proc. of HLT-EMNLP, pages 771-778.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Using WordNet to disambiguate word senses for text retrieval",
"authors": [
{
"first": "M",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Voorhees. 1993. Using WordNet to disambiguate word senses for text retrieval. In Proc. of SIGIR, pages 171-180.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Nonparametric Bayesian word sense induction",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of TextGraphs-6: Graph-based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yao and B. Van Durme. 2011. Nonparamet- ric Bayesian word sense induction. In Proc. of TextGraphs-6: Graph-based Methods for Natural Lan- guage Processing, pages 10-14.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "k \u223c Mult(\u03b8 s ) 11: Choose a topic j \u223c Mult(\u03b8 t|s k ) 12: Choose a sense k \u223c Mult(\u03b8 s|tj ) 13: Choose topic/sense j , k \u223c Mult(\u03b8 st ) 14: until j = j = j and k = k = k 15: repeat 16: Choose a word w \u223c Mult(\u03c8 tj ) 17: Choose a word w \u223c Mult(\u03c8 s k ) 18:"
},
"TABREF2": {
"num": null,
"html": null,
"text": "Performance on TEST for baselines and our sense-topic model. Best score in each column is bold.",
"content": "<table><tr><td>Model</td><td colspan=\"3\">B-cubed(%) NMI(%) AVG</td></tr><tr><td>Drop s \u2192 t Drop t \u2192 s Full</td><td>52.1 51.1 53.5</td><td>6.84 6.78 6.96</td><td>18.88 18.61 19.30</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "Performance on TEST for the sense-topic model with ablation of links between sense and topic variables.",
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "Sense Top-5 terms per sense Sense-Topic Model 1 include, depict, party, paint, visual 2 zero, manage, company, culture, figure 3 create, clinton, people, american, popular +weight by similarity ( \u00a76.3) 1 depict, create, culture, mental, include 2 picture, visual, pictorial, matrix, movie 3 public, means, view, american, story",
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "Top 5 terms for each sense induced for the noun image by the sense-topic model and when weighting local context words by similarity. S = 3 for both.",
"content": "<table/>",
"type_str": "table"
}
}
}
}