{
"paper_id": "P16-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:55:21.567641Z"
},
"title": "Query Expansion with Locally-Trained Word Embeddings",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": "",
"affiliation": {},
"email": "fdiaz@microsoft.com"
},
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": "",
"affiliation": {},
"email": "bmitra@microsoft.com"
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Continuous space word embeddings have received a great deal of attention in the natural language processing and machine learning communities for their ability to model term similarity and other relationships. We study the use of term relatedness in the context of query expansion for ad hoc information retrieval. We demonstrate that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus and query specific embeddings for retrieval tasks. These results suggest that other tasks benefiting from global embeddings may also benefit from local embeddings.",
"pdf_parse": {
"paper_id": "P16-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Continuous space word embeddings have received a great deal of attention in the natural language processing and machine learning communities for their ability to model term similarity and other relationships. We study the use of term relatedness in the context of query expansion for ad hoc information retrieval. We demonstrate that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus and query specific embeddings for retrieval tasks. These results suggest that other tasks benefiting from global embeddings may also benefit from local embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Continuous space embeddings such as word2vec (Mikolov et al., 2013b) or GloVe (Pennington et al., 2014a) project terms in a vocabulary to a dense, lower dimensional space. Recent results in the natural language processing community demonstrate the effectiveness of these methods for analogy and word similarity tasks. In general, these approaches provide global representations of words; each word has a fixed representation, regardless of any discourse context. While a global representation provides some advantages, language use can vary dramatically by topic. For example, ambiguous terms can easily be disambiguated given local information in immediately surrounding words (Harris, 1954; Yarowsky, 1993) . The window-based training of word2vec style algorithms exploits this distributional property.",
"cite_spans": [
{
"start": 45,
"end": 68,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF28"
},
{
"start": 78,
"end": 104,
"text": "(Pennington et al., 2014a)",
"ref_id": "BIBREF32"
},
{
"start": 678,
"end": 692,
"text": "(Harris, 1954;",
"ref_id": "BIBREF16"
},
{
"start": 693,
"end": 708,
"text": "Yarowsky, 1993)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A global word embedding, even when trained using local windows, risks capturing only coarse representations of those topics dominant in the corpus. While a particular embedding may be appropriate for a specific word within a sentence-length context globally, it may be entirely inappropriate within a specific topic. Gale et al. refer to this as the 'one sense per discourse' property (Gale et al., 1992) . Previous work by Yarowsky demonstrates that this property can be successfully combined with information from nearby terms for word sense disambiguation (Yarowsky, 1995) . Our work extends this approach to word2vec-style training in the context of word similarity. For many tasks that require topic-specific linguistic analysis, we argue that topic-specific representations should outperform global representations. Indeed, it is difficult to imagine a natural language processing task that would not benefit from an understanding of the local topical structure. Our work focuses on query expansion, an information retrieval task where we can study different lexical similarity methods with an extrinsic evaluation metric (i.e. retrieval metrics). Recent work has demonstrated that similarity based on global word embeddings can be used to outperform classic pseudo-relevance feedback techniques (Sordoni et al., 2014; al Masri et al., 2016) .",
"cite_spans": [
{
"start": 317,
"end": 328,
"text": "Gale et al.",
"ref_id": null
},
{
"start": 385,
"end": 404,
"text": "(Gale et al., 1992)",
"ref_id": "BIBREF14"
},
{
"start": 559,
"end": 575,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF49"
},
{
"start": 1301,
"end": 1323,
"text": "(Sordoni et al., 2014;",
"ref_id": "BIBREF39"
},
{
"start": 1324,
"end": 1346,
"text": "al Masri et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose that embeddings be learned on topically-constrained corpora, instead of large topically-unconstrained corpora. In a retrieval scenario, this amounts to retraining an embedding on documents related to the topic of the query. We present local embeddings which capture the nuances of topic-specific language better than global embeddings. There is substantial evidence that global methods underperform local methods for information retrieval tasks such as query expansion (Xu and Croft, 1996) , latent semantic analysis (Hull, 1994; Sch\u00fctze et al., 1995; Singhal et al., 1997) , cluster-based retrieval (Tombros and van Rijsbergen, 2001; Tombros et al., 2002; Willett, 1985) , and term clustering (Attar and Fraenkel, 1977) . We demonstrate that the same holds true when using word embeddings for text retrieval.",
"cite_spans": [
{
"start": 481,
"end": 501,
"text": "(Xu and Croft, 1996)",
"ref_id": "BIBREF47"
},
{
"start": 529,
"end": 541,
"text": "(Hull, 1994;",
"ref_id": "BIBREF19"
},
{
"start": 542,
"end": 563,
"text": "Sch\u00fctze et al., 1995;",
"ref_id": "BIBREF36"
},
{
"start": 564,
"end": 585,
"text": "Singhal et al., 1997)",
"ref_id": "BIBREF38"
},
{
"start": 612,
"end": 646,
"text": "(Tombros and van Rijsbergen, 2001;",
"ref_id": "BIBREF40"
},
{
"start": 647,
"end": 668,
"text": "Tombros et al., 2002;",
"ref_id": "BIBREF41"
},
{
"start": 669,
"end": 683,
"text": "Willett, 1985)",
"ref_id": "BIBREF46"
},
{
"start": 706,
"end": 732,
"text": "(Attar and Fraenkel, 1977)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the purpose of motivating our approach, we will restrict ourselves to word2vec although other methods behave similarly (Levy and Goldberg, 2014) . These algorithms involve discriminatively training a neural network to predict a word given a small set of context words. More formally, given a target word w and observed context c, the instance loss is defined as,",
"cite_spans": [
{
"start": 123,
"end": 148,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "\u2113(w, c) = log \u03c3(\u03c6(w) \u2022 \u03c8(c)) + \u03b7 \u2022 E w\u0304\u223c\u03b8 C [log \u03c3(\u2212\u03c6(w) \u2022 \u03c8(w\u0304))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "where \u03c6 : V \u2192 \u211d^k projects a term into a k-dimensional embedding space, \u03c8 : V^m \u2192 \u211d^k projects a set of m terms into a k-dimensional embedding space, and w\u0304 is a randomly sampled 'negative' context. The parameter \u03b7 controls the sampling of random negative terms. These matrices are estimated over a set of contexts sampled from a large corpus and minimize the expected loss,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L c = E w,c\u223cp c [\u2113(w, c)]",
"eq_num": "(1)"
}
],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "where p c is the distribution of word-context pairs in the training corpus and can be estimated from corpus statistics. While using corpus statistics may make sense absent any other information, oftentimes we know that our analysis will be topically constrained. For example, we might be analyzing the 'sports' documents in a collection. The language in this domain is more specialized and the distribution over word-context pairs is unlikely to be similar to p c (w, c). In fact, prior work in information retrieval suggests that documents on subtopics in a collection have very different unigram distributions compared to the whole corpus (Cronen-Townsend et al., 2002) . Let p t (w, c) be the probability",
"cite_spans": [
{
"start": 641,
"end": 671,
"text": "(Cronen-Townsend et al., 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Figure 1: Importance weights for terms occurring in documents related to 'argentina pegging dollar' relative to frequency in gigaword.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "of observing a word-context pair conditioned on the topic t. The expected loss under this distribution is (Shimodaira, 2000) ,",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "(Shimodaira, 2000)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L t = E w,c\u223cp c [ (p t (w, c) / p c (w, c)) \u2113(w, c) ]",
"eq_num": "(2)"
}
],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "In general, if our corpus consists of sufficiently diverse data (e.g. Wikipedia), the support of p t (w, c) is much smaller than and contained in that of p c (w, c). The loss, \u2113, of a context that occurs more frequently in the topic will be amplified by the importance weight \u03c9 = p t (w, c) / p c (w, c). Because topics require specialized language, this is likely to occur; at the same time, these contexts are likely to be underemphasized in training a model according to Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "In order to quantify this, we took a topic from a TREC ad hoc retrieval collection (see Section 5 for details) and computed the importance weight for each term occurring in the set of on-topic documents. The histogram of weights \u03c9 is presented in Figure 1 . While larger probabilities are expected since the size of a topic-constrained vocabulary is smaller, there are a non-trivial number of terms with much larger importance weights. If the loss, \u2113(w), of a word2vec embedding is worse for these words with low p c (w), then we expect these errors to be exacerbated for the topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Of course, these highly weighted terms may have a low value for p t (w) but a very high value relative to the corpus. We can adjust the weights by considering the pointwise Kullback-Leibler divergence for each word w,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D w (p t \u2016 p c ) = p t (w) log (p t (w) / p c (w))",
"eq_num": "(3)"
}
],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Words which have a much higher value of p t (w) than p c (w) and have a high absolute value of p t (w) will have high pointwise KL divergence. Figure 2 shows the divergences for the top 100 most frequent terms in p t (w). The higher ranked terms (i.e. good query expansion candidates) tend to have much higher probabilities than found in p c (w). If the loss on those words is large, this may result in poor embeddings for the most important words for the topic. A dramatic change in distribution between the corpus and the topic has implications for performance precisely because of the objective used by word2vec (i.e. Equation 1). The training emphasizes word-context pairs occurring with high frequency in the corpus. We will demonstrate that, even with heuristic downsampling of frequent terms in word2vec, these techniques result in inferior performance for specific topics.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Thus far, we have sketched out why using the corpus distribution for a specific topic may result in undesirable outcomes. However, it is even unclear that p t (w|c) = p c (w|c). In fact, we suspect that p t (w|c) \u2260 p c (w|c) because of the 'one sense per discourse' claim (Gale et al., 1992) . We can qualitatively observe the difference in p c (w|c) and p t (w|c) by training two word2vec models: the first on the large, generic Gigaword corpus and the second on a topically-constrained subset of the Gigaword. [Figure 3: Terms similar to 'cut' for a word2vec model trained on a general news corpus (global: cutting, squeeze, reduce, slash, reduction, spend, lower, halve, soften, freeze) and another trained only on documents related to 'gasoline tax' (local: tax, deficit, vote, budget, reduction, house, bill, plan, spend, billion).] We present the most similar terms to 'cut' using both a global embedding and a topic-specific embedding in Figure 3 . In this case, the topic is 'gasoline tax'. As we can see, the 'tax cut' sense of 'cut' is emphasized in the topic-specific embedding.",
"cite_spans": [
{
"start": 272,
"end": 291,
"text": "(Gale et al., 1992)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 525,
"end": 533,
"text": "Figure 3",
"ref_id": null
},
{
"start": 919,
"end": 927,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "The previous section described several reasons why a global embedding may result in overgeneral word embeddings. In order to perform topic-specific training, we need a set of topicspecific documents. In information retrieval scenarios users rarely provide the system with examples of topic-specific documents, instead providing a small set of keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "Fortunately, we can use information retrieval techniques to generate a query-specific set of topical documents. Specifically, we adopt a language modeling approach to do so (Croft and Lafferty, 2003) . In this retrieval model, each document is represented as a maximum likelihood language model estimated from document term frequencies. Query language models are estimated similarly, using term frequency in the query. A document score then, is the Kullback-Leibler divergence between the query and document language models,",
"cite_spans": [
{
"start": 173,
"end": 199,
"text": "(Croft and Lafferty, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(p q \u2016 p d ) = \u2211 w\u2208V p q (w) log (p q (w) / p d (w))",
"eq_num": "(4)"
}
],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "Documents whose language models are more similar to the query language model will have a lower KL divergence score. For consistency with prior work, we will refer to this as the query likelihood score of a document. The scores in Equation 4 can be passed through a softmax function to derive a multinomial over the entire corpus (Lavrenko and Croft, 2001) ,",
"cite_spans": [
{
"start": 329,
"end": 355,
"text": "(Lavrenko and Croft, 2001)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(d) = exp(\u2212D(p q \u2016 p d )) / \u2211 d' exp(\u2212D(p q \u2016 p d' ))",
"eq_num": "(5)"
}
],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "Recall in Section 2 that training a word2vec model weights word-context pairs according to the corpus frequency. Our query-based multinomial, p(d), provides a weighting function capturing the documents relevant to this topic. Although an estimation of the topic-specific documents from a query will be imprecise (i.e. some nonrelevant documents will be scored highly), the language use tends to be consistent with that found in the known relevant documents. We can train a local word embedding using an arbitrary optimization method by sampling documents from p(d) instead of uniformly from the corpus. In this work, we use word2vec, although any method that operates on a sample of documents can be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Word Embeddings",
"sec_num": "3"
},
{
"text": "When using language models for retrieval, query expansion involves estimating an alternative to p q . Specifically, when each expansion term is associated with a weight, we normalize these weights to derive the expansion language model, p q + . This language model is then interpolated with the original query model,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion with Word Embeddings",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p 1 q (w) = \u03bbp q (w) + (1 \u2212 \u03bb)p q + (w)",
"eq_num": "(6)"
}
],
"section": "Query Expansion with Word Embeddings",
"sec_num": "4"
},
{
"text": "This interpolated language model can then be used with Equation 4 to rank documents (Abdul-Jaleel et al., 2004) . We will refer to this as the expanded query score of a document.",
"cite_spans": [
{
"start": 84,
"end": 111,
"text": "(Abdul-Jaleel et al., 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion with Word Embeddings",
"sec_num": "4"
},
{
"text": "Now we turn to using word embeddings for query expansion. Let U be an |V| \u00d7 k term embedding matrix. If q is a |V| \u00d7 1 column term vector for a query, then the expansion term weights are UU T q. We then take the top k terms, normalize their weights, and compute p q + (w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion with Word Embeddings",
"sec_num": "4"
},
{
"text": "We consider the following alternatives for U. The first approach is to use a global model trained by sampling documents uniformly. The second approach, which we propose in this paper, is to use a local model trained by sampling documents from p(d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion with Word Embeddings",
"sec_num": "4"
},
{
"text": "To evaluate the different retrieval strategies described in Section 3, we use the following datasets. Two newswire datasets, trec12 and robust, consist of the newswire documents and associated queries from TREC ad hoc retrieval evaluations. The trec12 corpus consists of Tipster disks 1 and 2; and the robust corpus consists of Tipster disks 4 and 5. Our third dataset, web, consists of the ClueWeb 2009 Category B Web corpus. For the Web corpus, we only retain documents with a Waterloo spam rank above 70. 1 We present corpus statistics in Table 1 .",
"cite_spans": [
{
"start": 508,
"end": 509,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We consider several publicly available global embeddings. We use four GloVe embeddings of different dimensionality trained on the union of Wikipedia and Gigaword documents. 2 We use one publicly available word2vec embedding trained on Google News documents. 3 We also trained a global embedding for trec12 and robust using the entire corpus. Instead of training a global embedding on the large web collection, we use a GloVe embedding trained on Common Crawl data. 4 We train local embeddings with word2vec using one of three retrieval sources. First, we consider documents retrieved from the target corpus of the query (i.e. trec12, robust, or web). We also consider training a local embed- [Table 1: Corpora used for retrieval and local embedding training. trec12: 469,949 docs, 438,338 words, 150 queries; robust: 528,155 docs, 665,128 words, 250 queries; web: 50,220,423 docs, 90,411,624 words, 200 queries; news: 9,875,524 docs, 2,645,367 words; wiki: 3,225,743 docs, 4,726,862 words.]",
"cite_spans": [
{
"start": 173,
"end": 174,
"text": "2",
"ref_id": null
},
{
"start": 258,
"end": 259,
"text": "3",
"ref_id": null
},
{
"start": 465,
"end": 466,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 692,
"end": 874,
"text": "docs words queries trec12 469,949 438,338 150 robust 528,155 665,128 250 web 50,220,423 90,411,624 200 news 9,875,524 2,645,367 wiki 3,225,743 4,726,862 -Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "ding by performing a retrieval on large auxiliary corpora. We use the Gigaword corpus as a large auxiliary news corpus. We hypothesize that retrieving from a larger news corpus will provide substantially more local training data than a target retrieval. We also use a Wikipedia snapshot from December 2014. We hypothesize that retrieving from a large, high fidelity corpus will provide cleaner language than that found in lower fidelity target domains such as the web. Table 1 shows the relative magnitude of these auxiliary corpora compared to the target corpora. All corpora in Table 1 were stopped using the SMART stopword list 5 and stemmed using the Krovetz algorithm (Krovetz, 1993) . We used the Indri implementation for indexing and retrieval. 6",
"cite_spans": [
{
"start": 673,
"end": 688,
"text": "(Krovetz, 1993)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 1",
"ref_id": null
},
{
"start": 580,
"end": 587,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We consider several standard retrieval evaluation metrics, including NDCG@10 and interpolated precision at standard recall points (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002; van Rijsbergen, 1979) . NDCG@10 provides insight into performance specifically at higher ranks. An interpolated precision recall graph describes system performance throughout the entire ranked list.",
"cite_spans": [
{
"start": 130,
"end": 161,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 162,
"end": 183,
"text": "van Rijsbergen, 1979)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.2"
},
{
"text": "All retrieval experiments were conducted by performing 10-fold cross-validation across queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "Specifically, we cross-validate the number of expansion terms, k \u2208 {5, 10, 25, 50, 100, 250, 500}, and interpolation weight, \u03bb \u2208 [0, 1]. For local word2vec training, we cross-validate the learning rate \u03b1 \u2208 {10^\u22121, 10^\u22122, 10^\u22123}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "All word2vec training used the publicly available word2vec cbow implementation. 7 When training the local models, we sampled 1000 documents from p(d) with replacement. To compensate for the much smaller corpus size, we ran word2vec training for 80 iterations. Local word2vec models use a fixed embedding dimension of 400 although other choices did not significantly affect our results. Unless otherwise noted, default parameter settings were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "In our experiments, expanded queries rescore the top 1000 documents from an initial query likelihood retrieval. Previous results have demonstrated that this approach results in performance nearly identical with an expanded retrieval at a much lower cost (Diaz, 2015) . Because publicly available embeddings may have tokenization inconsistent with our target corpora, we restricted the vocabulary of candidate expansion terms to those occurring in the initial retrieval. If a candidate term was not found in the vocabulary of the embedding matrix, we searched for the candidate in a stemmed version of the embedding vocabulary. In the event that the candidate term was still not found after this process, we removed it from consideration.",
"cite_spans": [
{
"start": 254,
"end": 266,
"text": "(Diaz, 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "We present results for retrieval experiments in Table 2 . We find that embedding-based query expansion outperforms our query likelihood baseline across all conditions. When using the global embedding, the news corpora benefit from the various embeddings in different situations. Interestingly, for trec12, using an embedding trained on the target corpus significantly outperforms all other global embeddings, despite using substantially less data to estimate the model. While this performance may be due to the embedding having a tokenization consistent with the target corpus, it may also come from the fact that the corpus is more representative of the target documents than other embeddings which rely on online news or are mixed with non-news content. To some extent this supports our desire to move training closer to the target distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Across all conditions, local embeddings significantly outperform global embeddings for query expansion. [Figure 4: Interpolated precision-recall curves for query likelihood, the best global embedding, and the best local embedding from Table 2.] For our two news collections, estimating the local model using a retrieval from the larger Gigaword corpus led to substantial improvements. This effect is almost certainly due to the Gigaword corpus being similar in writing style to the target corpus but, at the same time, providing significantly more relevant content (Diaz and Metzler, 2006) . As a result, the local embedding is trained using a larger variety of topical material than if it were to use a retrieval from the smaller target corpus. An embedding trained with a retrieval from Wikipedia tended to perform worse, most likely because the language is dissimilar from news content. Our web collection, on the other hand, benefitted more from embeddings trained using retrievals from the general Wikipedia corpus. The Gigaword corpus was less useful here because news-style language is almost certainly not representative of general web documents. Figure 4 presents interpolated precision-recall curves comparing the baseline, the best global query expansion method, and the best local query expansion method. Interestingly, although global methods achieve strong performance for NDCG@10, these improvements over the baseline are not reflected in our precision-recall curves. Local methods, on the other hand, almost always strictly dominate both the baseline and global expansion across all recall levels.",
"cite_spans": [
{
"start": 566,
"end": 590,
"text": "(Diaz and Metzler, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 4",
"ref_id": null
},
{
"start": 176,
"end": 184,
"text": "Table 2.",
"ref_id": "TABREF0"
},
{
"start": 1155,
"end": 1163,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results support the hypothesis that local embeddings provide better similarity measures than global embeddings for query expansion. In order to understand why, we first compare the performance differences between local and global embeddings. Figure 2 suggests that we should adopt a local embedding when the local unigram language model deviates from the corpus language model. To test this, we computed the KL divergence between the local unigram distribution, \u2211 d p(w|d)p(d), and the corpus unigram language model (Cronen-Townsend et al., 2002) . We hypothesize that, when this value is high, the topic language is different from the corpus language and the global embedding will be inferior to the local embedding. [Table 3: Kendall's \u03c4 and Spearman's \u03c1 between improvement in NDCG@10 and local KL divergence with the corpus language model. The improvement is measured for the best local embedding over the best global embedding. trec12: \u03c4 = 0.0585, \u03c1 = 0.0798; robust: \u03c4 = 0.0545, \u03c1 = 0.0792; web: \u03c4 = 0.0204, \u03c1 = 0.0283.] We tested the rank correlation between this KL divergence and the relative performance of the local embedding with respect to the global embedding. These correlations are presented in Table 3 . Unfortunately, we find that the correlation is low, although it is positive across collections. We can also qualitatively analyze the differences in the behavior of the embeddings. If we have access to the set of documents labeled relevant to a query, then we can compute the frequency of terms in this set and consider those terms with high frequency (after stopping and stemming) to be good query expansion candidates. We can then visualize where these terms lie in the global and local embeddings. In Figure 5 , we present a two-dimensional projection (van der Maaten and Hinton, 2008) of terms for the query 'ocean remote sensing', with those good candidates highlighted. Our projection includes the top 50 candidates by frequency and a sample of terms occurring in the query likelihood retrieval. We notice that, in the global embedding, the good candidates are spread out amongst poorer candidates. By contrast, the local embedding clusters the candidates in general but also situates them closely around the query. As a result, we suspect that the similar terms extracted from the local embedding are more likely to include these good candidates.",
"cite_spans": [
{
"start": 518,
"end": 548,
"text": "(Cronen-Townsend et al., 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 662,
"end": 669,
"text": "Table 3",
"ref_id": null
},
{
"start": 1183,
"end": 1190,
"text": "Table 3",
"ref_id": null
},
{
"start": 1697,
"end": 1705,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The success of local embeddings on this task should alarm natural language processing researchers using global embeddings as a representational tool. For one, the approach of learning from vast amounts of data is only effective if the data is appropriate for the task at hand. And, when provided, much smaller high-quality data can provide much better performance. Beyond this, our results suggest that the approach of estimating global representations, while computationally convenient, may overlook insights possible at query time, or evaluation time in general. A similar local embedding approach can be adopted for any natural language processing task where topical locality is expected and can be estimated. Although we used a query to re-weight the corpus in our experiments, we could just as easily use alternative contextual information (e.g. a sentence, paragraph, or document) in other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
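The re-weighting step described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the sampler draws training sentences in proportion to an initial retrieval score p(d|q), and the actual embedding trainer (e.g. word2vec) is assumed to be external and is not shown.

```python
import random

def sample_training_sentences(docs, scores, k, seed=0):
    """Draw k training sentences, choosing source documents in
    proportion to an initial retrieval score p(d|q)."""
    rng = random.Random(seed)
    total = sum(scores)
    weights = [s / total for s in scores]
    sentences = []
    for _ in range(k):
        doc = rng.choices(docs, weights=weights, k=1)[0]
        sentences.append(rng.choice(doc))
    return sentences

# Toy corpus: one on-topic and one off-topic document, each a list of
# sentences; the on-topic document receives most of the mass.
docs = [["ocean remote sensing uses satellites"],
        ["stock markets fell sharply today"]]
training = sample_training_sentences(docs, scores=[0.9, 0.1], k=100)
# 'training' would then be fed to an off-the-shelf embedding trainer.
```

Replacing the query-based scores with scores derived from a sentence, paragraph, or document yields the alternative contexts mentioned above.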
{
"text": "Despite these strong results, we believe that there are still some open questions in this work. First, although local embeddings provide effectiveness gains, they can be quite inefficient compared to global embeddings. We believe that there is opportunity to improve the efficiency by considering offline computation of local embeddings at a coarser level than queries but more specialized than the corpus. If the retrieval algorithm is able to select the appropriate embedding at query time, we can avoid training the local embedding. Second, although our supporting experiments (Table 3, Figure 5 ) add some insight into our intuition, the results are not strong enough to provide a solid explanation. Further theoretical and empirical analysis is necessary.",
"cite_spans": [],
"ref_spans": [
{
"start": 590,
"end": 598,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Topical adaptation of models The shortcomings of learning a single global vector representation, especially for polysemic words, have been pointed out before (Reisinger and Mooney, 2010b) . The problem can be addressed by training a global model with multiple vector embeddings per word (Reisinger and Mooney, 2010a; Huang et al., 2012) or topicspecific embeddings . The number of senses for each word may be fixed (Neelakantan et al., 2015) , or determined using class labels (Trask et al., 2015) . However, to the best of our knowledge, this is the first time that training topic-specific word embeddings has been explored.",
"cite_spans": [
{
"start": 158,
"end": 187,
"text": "(Reisinger and Mooney, 2010b)",
"ref_id": "BIBREF35"
},
{
"start": 287,
"end": 316,
"text": "(Reisinger and Mooney, 2010a;",
"ref_id": "BIBREF34"
},
{
"start": 317,
"end": 336,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 415,
"end": 441,
"text": "(Neelakantan et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 477,
"end": 497,
"text": "(Trask et al., 2015)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Several methods exist in the language modeling community for topic-dependent adaptation of language models (Bellegarda, 2004) . These can lead to performance improvements in tasks such as machine translation (Zhao et al., 2004) and speech recognition (Nanjo and Kawahara, 2004) . Topic-specific data may be gathered in advance, by identifying corpus of topic-specific documents. It may also be gathered during the discourse, using multiple hypotheses from N-best lists as a source of topicspecific language. Then a topic-specific language model is trained (or the global model is adapted) online using the topic-specific training data. A topic-dependent model may be combined with the global model using linear interpolation (Iyer and Ostendorf, 1999) or other more sophisticated approaches (Fed-erico, 1996; Kuhn and De Mori, 1990) . Similarly to the adaptation work, we use topicspecific documents to train a topic-specific model. In our case the documents come from a first round of retrieval for the user's current query, and the word embedding model is trained based on sentences from the topicspecific document set. Unlike the past work, we do not focus on interpolating the local and global models, although this is a promising area for future work. In the current study we focus on a direct comparison between the local-only and global-only approach, for improving retrieval performance.",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Bellegarda, 2004)",
"ref_id": "BIBREF5"
},
{
"start": 208,
"end": 227,
"text": "(Zhao et al., 2004)",
"ref_id": "BIBREF50"
},
{
"start": 251,
"end": 277,
"text": "(Nanjo and Kawahara, 2004)",
"ref_id": "BIBREF30"
},
{
"start": 725,
"end": 751,
"text": "(Iyer and Ostendorf, 1999)",
"ref_id": "BIBREF20"
},
{
"start": 791,
"end": 808,
"text": "(Fed-erico, 1996;",
"ref_id": null
},
{
"start": 809,
"end": 832,
"text": "Kuhn and De Mori, 1990)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
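The linear-interpolation adaptation cited above combines a topic-specific model with a global one as p(w) = lam * p_topic(w) + (1 - lam) * p_global(w). A minimal sketch; the two toy distributions are illustrative, not from any cited experiment.

```python
def interpolate(p_topic, p_global, lam=0.5):
    """Linearly interpolate two unigram language models:
    p(w) = lam * p_topic(w) + (1 - lam) * p_global(w)."""
    vocab = set(p_topic) | set(p_global)
    return {w: lam * p_topic.get(w, 0.0) + (1 - lam) * p_global.get(w, 0.0)
            for w in vocab}

p_topic = {"ocean": 0.6, "sensing": 0.4}   # toy topic-specific model
p_global = {"ocean": 0.1, "the": 0.9}      # toy global model
p_mix = interpolate(p_topic, p_global, lam=0.3)
```

Since both inputs are proper distributions, the mixture sums to one for any lam in [0, 1]; tuning lam trades topical sharpness against coverage.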
{
"text": "Retrieval has a long history of learning representations of words that are low-dimensional dense vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings for IR Information",
"sec_num": null
},
{
"text": "These approaches can be broadly classified into two families based on whether they are learnt based on a termdocument matrix or term co-occurence data. Using the term-document matrix for embedding leads to several well-studied approaches such as LSA (Deerwester et al., 1990) , PLSA (Hofmann, 1999) , and LDA (Blei et al., 2003; Wei and Croft, 2006) . The performance of these models varies depending on the task, for example they are known to perform poorly for retrieval tasks unless combined with lexical features (Atreya and Elkan, 2011a) . Term-cooccurence based embeddings, such as word2vec (Mikolov et al., 2013b; Mikolov et al., 2013a) and (Pennington et al., 2014b) , have recently been remarkably popular for many natural language processing and logical reasoning tasks. However, there are relatively less known successful applications of these models in IR. Ganguly et. al. (Ganguly et al., 2015) used the word similarity in the word2vec embedding space as a way to estimate term transformation probabilities in a language modelling setting for retrieval. More recently, Nalisnick et. al. (Nalisnick et al., 2016) proposed to model document about-ness by computing the similarity between all pairs of query and document terms using dual embedding spaces. Both these approaches estimate the semantic relatedness between two terms as the cosine distance between them in the embedding space(s). We adopt a similar notion of term relatedness but focus on demon-strating improved retrieval performance using locally trained embeddings.",
"cite_spans": [
{
"start": 246,
"end": 275,
"text": "LSA (Deerwester et al., 1990)",
"ref_id": null
},
{
"start": 283,
"end": 298,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF17"
},
{
"start": 309,
"end": 328,
"text": "(Blei et al., 2003;",
"ref_id": "BIBREF6"
},
{
"start": 329,
"end": 349,
"text": "Wei and Croft, 2006)",
"ref_id": "BIBREF45"
},
{
"start": 517,
"end": 542,
"text": "(Atreya and Elkan, 2011a)",
"ref_id": "BIBREF2"
},
{
"start": 597,
"end": 620,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF28"
},
{
"start": 621,
"end": 643,
"text": "Mikolov et al., 2013a)",
"ref_id": "BIBREF27"
},
{
"start": 648,
"end": 674,
"text": "(Pennington et al., 2014b)",
"ref_id": "BIBREF33"
},
{
"start": 869,
"end": 907,
"text": "Ganguly et. al. (Ganguly et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1082,
"end": 1124,
"text": "Nalisnick et. al. (Nalisnick et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings for IR Information",
"sec_num": null
},
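Both cited approaches score term relatedness by cosine similarity in an embedding space. A minimal sketch with toy 3-dimensional vectors; real embeddings have a few hundred dimensions learned from data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def expansion_candidates(query_vec, vocab_vecs, k=2):
    """Rank vocabulary terms by cosine similarity to a query vector."""
    ranked = sorted(vocab_vecs,
                    key=lambda w: cosine(query_vec, vocab_vecs[w]),
                    reverse=True)
    return ranked[:k]

# Toy embeddings: two topical terms near the query, one off-topic term.
vocab = {"satellite": [0.9, 0.1, 0.0],
         "radar": [0.8, 0.2, 0.1],
         "stock": [0.0, 0.1, 0.9]}
query_vec = [1.0, 0.0, 0.0]
top_terms = expansion_candidates(query_vec, vocab, k=2)
```

Swapping in vectors from a locally trained embedding, rather than a global one, is the only change the paper's approach requires at this scoring step.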
{
"text": "Local latent semantic analysis Despite the mathematical appeal of latent semantic analysis, several experiments suggest that its empirical performance may be no better than that of ranking using standard term vectors (Deerwester et al., 1990; Dumais, 1995; Atreya and Elkan, 2011b) . In order to address the coarseness of corpus-level latent semantic analysis, Hull proposed restricting analysis to the documents relevant to a query (Hull, 1994) . This approach significantly improved over corpus-level analysis for routing tasks, a result that has been reproduced in consequent research (Sch\u00fctze et al., 1995; Singhal et al., 1997) . Our work can be seen as an extension of these results to more recent techniques such as word2vec.",
"cite_spans": [
{
"start": 217,
"end": 242,
"text": "(Deerwester et al., 1990;",
"ref_id": "BIBREF9"
},
{
"start": 243,
"end": 256,
"text": "Dumais, 1995;",
"ref_id": "BIBREF12"
},
{
"start": 257,
"end": 281,
"text": "Atreya and Elkan, 2011b)",
"ref_id": "BIBREF3"
},
{
"start": 433,
"end": 445,
"text": "(Hull, 1994)",
"ref_id": "BIBREF19"
},
{
"start": 588,
"end": 610,
"text": "(Sch\u00fctze et al., 1995;",
"ref_id": "BIBREF36"
},
{
"start": 611,
"end": 632,
"text": "Singhal et al., 1997)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings for IR Information",
"sec_num": null
},
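Hull's local variant restricts the latent semantic analysis to the retrieved document set. A hypothetical sketch, assuming a rank-k SVD over a toy term-document matrix built from only the top-retrieved documents for one query.

```python
import numpy as np

def local_lsa(term_doc, k):
    """Rank-k term vectors (left singular vectors scaled by singular
    values) from a term-document matrix restricted to top-retrieved
    documents."""
    u, s, _ = np.linalg.svd(term_doc, full_matrices=False)
    return u[:, :k] * s[:k]

# Rows are terms, columns are top-retrieved documents; values are toy
# term counts. Terms 0 and 1 co-occur; term 2 appears alone.
local_matrix = np.array([[3.0, 1.0, 0.0],
                         [1.0, 3.0, 0.0],
                         [0.0, 0.0, 1.0]])
term_vectors = local_lsa(local_matrix, k=2)
```

Because the co-occurring terms dominate the top singular directions, their rank-2 vectors are close while the isolated term is pushed toward the origin, mirroring the clustering effect the paper attributes to local training.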
{
"text": "We have demonstrated a simple and effective method for performing query expansion with word embeddings. Importantly, our results highlight the value of locally-training word embeddings in a query-specific manner. The strength of these results suggests that other research adopting global embedding vectors should consider local embeddings as a potentially superior representation. Instead of using a \"Sriracha sauce of deep learning,\" as embedding techniques like word2vec have been called, we contend that the situation sometimes requires, say, that we make a b\u00e9chamel or a mole verde or a sambal-or otherwise learn to cook.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://plg.uwaterloo.ca/~gvcormac/ clueweb09spam/ 2 http://nlp.stanford.edu/data/glove.6B.zip 3 https://code.google.com/archive/p/ word2vec/ 4 http://nlp.stanford.edu/data/glove.840B. 300d.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://jmlr.csail.mit.edu/papers/volume5/ lewis04a/a11-smart-stop-list/english.stop 6 http://www.lemurproject.org/indri/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Umass at trec 2004: Novelty and hard",
"authors": [
{
"first": "Nasreen",
"middle": [],
"last": "Abdul-Jaleel",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Larkey",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"D"
],
"last": "Smucker",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Strohman",
"suffix": ""
}
],
"year": 2004,
"venue": "Online Proceedings of 2004 Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Donald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. Umass at trec 2004: Novelty and hard. In Online Proceedings of 2004 Text REtrieval Con- ference.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A comparison of deep learning based query expansion with pseudo-relevance feedback and mutual information",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Mohannad Al Masri",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Berrut",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chevallet",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 38th European Conference on IR Research (ECIR 2016)",
"volume": "",
"issue": "",
"pages": "709--715",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohannad al Masri, Catherine Berrut, and Jean- Pierre Chevallet. 2016. A comparison of deep learning based query expansion with pseudo-relevance feedback and mutual informa- tion. In Nicola Ferro, Fabio Crestani, Marie- Francine Moens, Josiane Mothe, Fabrizio Sil- vestri, Maria Giorgio Di Nunzio, Claudia Hauff, and Gianmaria Silvello, editors, Proceedings of the 38th European Conference on IR Research (ECIR 2016), pages 709-715, Cham. Springer International Publishing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent semantic indexing (lsi) fails for trec collections",
"authors": [
{
"first": "Avinash",
"middle": [],
"last": "Atreya",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "12",
"issue": "2",
"pages": "5--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avinash Atreya and Charles Elkan. 2011a. La- tent semantic indexing (lsi) fails for trec collec- tions. ACM SIGKDD Explorations Newsletter, 12(2):5-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent semantic indexing (lsi) fails for trec collections",
"authors": [
{
"first": "Avinash",
"middle": [],
"last": "Atreya",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2011,
"venue": "SIGKDD Explor. Newsl",
"volume": "12",
"issue": "2",
"pages": "5--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avinash Atreya and Charles Elkan. 2011b. Latent semantic indexing (lsi) fails for trec collections. SIGKDD Explor. Newsl., 12(2):5-10, March.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Local feedback in full-text retrieval systems",
"authors": [
{
"first": "R",
"middle": [],
"last": "Attar",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Fraenkel",
"suffix": ""
}
],
"year": 1977,
"venue": "J. ACM",
"volume": "24",
"issue": "3",
"pages": "397--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Attar and A. S. Fraenkel. 1977. Local feed- back in full-text retrieval systems. J. ACM, 24(3):397-417, July.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical language model adaptation: review and perspectives",
"authors": [
{
"first": "Jerome",
"middle": [
"R"
],
"last": "Bellegarda",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "42",
"issue": "",
"pages": "93--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome R Bellegarda. 2004. Statistical lan- guage model adaptation: review and perspec- tives. Speech communication, 42(1):93-108.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jor- dan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language Modeling for Information Retrieval",
"authors": [
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Bruce Croft and John Lafferty. 2003. Language Modeling for Information Retrieval. Kluwer Academic Publishing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting query performance",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Cronen-Townsend",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2002,
"venue": "SIGIR '02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "299--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Cronen-Townsend, Yun Zhou, and W. Bruce Croft. 2002. Predicting query performance. In SIGIR '02: Proceedings of the 25th annual in- ternational ACM SIGIR conference on Research and development in information retrieval, pages 299-306, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [
"C"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society of Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of In- formation Science, 41(6):391-407.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving the estimation of relevance models using large external corpora",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
}
],
"year": 2006,
"venue": "SIGIR '06: Proceedings of the 29th annual international ACM SI-GIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "154--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Diaz and Donald Metzler. 2006. Im- proving the estimation of relevance models using large external corpora. In SIGIR '06: Proceed- ings of the 29th annual international ACM SI- GIR conference on Research and development in information retrieval, pages 154-161, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Condensed list relevance models",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 International Conference on The Theory of Information Retrieval, ICTIR '15",
"volume": "",
"issue": "",
"pages": "313--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Diaz. 2015. Condensed list relevance models. In Proceedings of the 2015 International Conference on The Theory of Information Re- trieval, ICTIR '15, pages 313-316, New York, NY, USA, May. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Latent semantic indexing (LSI): TREC-3 report",
"authors": [
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1995,
"venue": "Overview of the Third Text REtrieval Conference (TREC-3)",
"volume": "",
"issue": "",
"pages": "219--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan T. Dumais. 1995. Latent semantic in- dexing (LSI): TREC-3 report. In Overview of the Third Text REtrieval Conference (TREC-3), pages 219-230.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bayesian estimation methods for n-gram language model adaptation",
"authors": [
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 1996,
"venue": "Spoken Language, 1996. ICSLP 96. Proceedings",
"volume": "",
"issue": "",
"pages": "240--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcello Federico. 1996. Bayesian estimation methods for n-gram language model adaptation. In Spoken Language, 1996. ICSLP 96. Proceed- ings., Fourth International Conference on, vol- ume 1, pages 240-243. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "One sense per discourse",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Workshop on Speech and Natural Language, HLT '91",
"volume": "",
"issue": "",
"pages": "233--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Proceedings of the Workshop on Speech and Natural Language, HLT '91, pages 233-237, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word embedding based generalized language model for information retrieval",
"authors": [
{
"first": "Debasis",
"middle": [],
"last": "Ganguly",
"suffix": ""
},
{
"first": "Dwaipayan",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Gareth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SI-GIR '15",
"volume": "",
"issue": "",
"pages": "795--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debasis Ganguly, Dwaipayan Roy, Mandar Mitra, and Gareth J.F. Jones. 2015. Word embedding based generalized language model for informa- tion retrieval. In Proceedings of the 38th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SI- GIR '15, pages 795-798, New York, NY, USA. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distributional structure. WORD",
"authors": [
{
"first": "Zellig",
"middle": [
"S"
],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1954. Distributional structure. WORD, 10(2-3):146-162.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probabilistic latent semantic indexing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "SIGIR '99: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent se- mantic indexing. In SIGIR '99: Proceedings of the 22nd annual international ACM SIGIR con- ference on Research and development in infor- mation retrieval, pages 50-57, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "H",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and mul- tiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Com- putational Linguistics: Long Papers-Volume 1, pages 873-882. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving text retrieval for the routing problem using latent semantic indexing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hull",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '94",
"volume": "",
"issue": "",
"pages": "282--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hull. 1994. Improving text retrieval for the routing problem using latent semantic indexing. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR '94, pages 282-291, New York, NY, USA. Springer- Verlag New York, Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling long distance dependence in language: topic mixtures versus dynamic cache models. Speech and Audio Processing",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Iyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE Transactions on",
"volume": "7",
"issue": "1",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.M. Iyer and M. Ostendorf. 1999. Modeling long distance dependence in language: topic mixtures versus dynamic cache models. Speech and Audio Processing, IEEE Transactions on, 7(1):30-39, Jan.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cumulated gain-based evaluation of ir techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "TOIS",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cu- mulated gain-based evaluation of ir techniques. TOIS, 20(4):422-446.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Viewing morphology as an inference process",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Krovetz",
"suffix": ""
}
],
"year": 1993,
"venue": "SIGIR '93: Proceedings of the 16th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "191--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Krovetz. 1993. Viewing morphology as an inference process. In SIGIR '93: Proceedings of the 16th annual international ACM SIGIR con- ference on Research and development in infor- mation retrieval, pages 191-202, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A cachebased natural language model for speech recognition. Pattern Analysis and Machine Intelligence",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Renato",
"middle": [
"De"
],
"last": "Mori",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Transactions on",
"volume": "12",
"issue": "6",
"pages": "570--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Kuhn and Renato De Mori. 1990. A cache- based natural language model for speech recog- nition. Pattern Analysis and Machine Intelli- gence, IEEE Transactions on, 12(6):570-583.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Relevance based language models",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko and W. Bruce Croft. 2001. Rele- vance based language models. In Proceedings of the 24th annual international ACM SIGIR con- ference on Research and development in infor- mation retrieval, pages 120-127. ACM Press.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factoriza- tion. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Sys- tems 27, pages 2177-2185. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Tat-Seng Chua, and Maosong Sun",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15",
"volume": "",
"issue": "",
"pages": "2418--2424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embed- dings. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 2418-2424. AAAI Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Wein- berger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving document ranking with dual word embeddings",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Nalisnick",
"suffix": ""
},
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. WWW. International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. 2016. Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Confer- ences Steering Committee.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Language model and speaking rate adaptation for spontaneous presentation speech recognition. Speech and Audio Processing",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Nanjo",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Transactions on",
"volume": "12",
"issue": "4",
"pages": "391--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroaki Nanjo and Tatsuya Kawahara. 2004. Language model and speaking rate adaptation for spontaneous presentation speech recognition. Speech and Audio Processing, IEEE Transac- tions on, 12(4):391-400.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Efficient non-parametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Jeevan",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.06654"
]
},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embed- dings per word in vector space. arXiv preprint arXiv:1504.06654.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014a. Glove: Global vec- tors for word representation. In Empirical Meth- ods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. EMNLP",
"volume": "12",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014b. Glove: Global vec- tors for word representation. Proc. EMNLP, 12:1532-1543.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A mixture model with sharing for lexical semantics",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1173--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond Mooney. 2010a. A mixture model with sharing for lexical se- mantics. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1173-1182. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Multi-prototype vector-space models of word meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond J Mooney. 2010b. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 109-117. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A comparison of classifiers and document representations for the routing problem",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Hull",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '95",
"volume": "",
"issue": "",
"pages": "229--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze, David A. Hull, and Jan O. Peder- sen. 1995. A comparison of classifiers and doc- ument representations for the routing problem. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR '95, pages 229-237, New York, NY, USA. ACM.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improving predictive inference under covariate shift by weighting the log-likelihood function",
"authors": [
{
"first": "Hidetoshi",
"middle": [],
"last": "Shimodaira",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Statistical Planning and Inference",
"volume": "90",
"issue": "2",
"pages": "227--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hidetoshi Shimodaira. 2000. Improving predic- tive inference under covariate shift by weighting the log-likelihood function. Journal of Statisti- cal Planning and Inference, 90(2):227 -244.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning routing queries in a query zone",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1997,
"venue": "SIGIR Forum",
"volume": "31",
"issue": "SI",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Singhal, Mandar Mitra, and Chris Buckley. 1997. Learning routing queries in a query zone. SIGIR Forum, 31(SI):25-32, July.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Learning concept embeddings for query expansion by quantum entropy minimization",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI'14",
"volume": "",
"issue": "",
"pages": "1586--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Sordoni, Yoshua Bengio, and Jian-Yun Nie. 2014. Learning concept embeddings for query expansion by quantum entropy minimiza- tion. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI'14, pages 1586-1592. AAAI Press.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Query-sensitive similarity measures for the calculation of interdocument relationships",
"authors": [
{
"first": "Anastasios",
"middle": [],
"last": "Tombros",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 2001,
"venue": "CIKM '01: Proceedings of the tenth international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastasios Tombros and C. J. van Rijsbergen. 2001. Query-sensitive similarity measures for the calculation of interdocument relationships. In CIKM '01: Proceedings of the tenth interna- tional conference on Information and knowledge management, pages 17-24, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The effectiveness of query-specific hierarchic clustering in information retrieval",
"authors": [
{
"first": "Anastasios",
"middle": [],
"last": "Tombros",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Villa",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 2002,
"venue": "Inf. Process. Manage",
"volume": "38",
"issue": "4",
"pages": "559--582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastasios Tombros, Robert Villa, and C. J. Van Rijsbergen. 2002. The effectiveness of query-specific hierarchic clustering in informa- tion retrieval. Inf. Process. Manage., 38(4):559- 582, July.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "sense2vec-a fast and accurate method for word sense disambiguation in neural word embeddings",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Trask",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Michalak",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06388"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec-a fast and accurate method for word sense disambiguation in neural word embed- dings. arXiv preprint arXiv:1511.06388.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Visualizing high-dimensional data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "LDA-based document models for ad-hoc retrieval",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2006,
"venue": "SI-GIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "178--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wei and W. Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In SI- GIR '06: Proceedings of the 29th annual inter- national ACM SIGIR conference on Research and development in information retrieval, pages 178-185, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Query-specific automatic document classification",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Willett",
"suffix": ""
}
],
"year": 1985,
"venue": "International Forum on Information and Documentation",
"volume": "10",
"issue": "",
"pages": "28--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Willett. 1985. Query-specific automatic doc- ument classification. In International Forum on Information and Documentation, volume 10, pages 28-32.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Query expansion using local and global document analysis",
"authors": [
{
"first": "Jinxi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '96",
"volume": "",
"issue": "",
"pages": "4--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinxi Xu and W. Bruce Croft. 1996. Query expan- sion using local and global document analysis. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR '96, pages 4-11, New York, NY, USA. ACM.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "One sense per collocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Human Language Technology, HLT '93",
"volume": "",
"issue": "",
"pages": "266--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1993. One sense per colloca- tion. In Proceedings of the Workshop on Human Language Technology, HLT '93, pages 266-271, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL '95",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting on As- sociation for Computational Linguistics, ACL '95, pages 189-196, Stroudsburg, PA, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Language model adaptation for statistical machine translation with structured query models",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics, COL-ING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statisti- cal machine translation with structured query models. In Proceedings of the 20th International Conference on Computational Linguistics, COL- ING '04, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Pointwise Kullback-Leibler divergence for terms occurring in documents related to 'argentina pegging dollar' relative to frequency in gigaword.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Global versus local embedding of highly relevant terms. Each point represents a candidate expansion term. Red points have high frequency in the relevant set of documents. White points have low or no frequency in the relevant set of documents. The blue point represents the query. Contours indicate distance from the query.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"text": "Retrieval results comparing query expansion based on various global and local embeddings. Bolded numbers indicate the best expansion in that class of embeddings. Wilcoxon signed rank test between bolded numbers indicates statistically significant improvements (p < 0.05) for all collections.",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td>global</td><td/><td/><td/><td>local</td></tr><tr><td/><td/><td/><td colspan=\"2\">wiki+giga</td><td/><td colspan=\"3\">gnews target target</td><td>giga</td><td>wiki</td></tr><tr><td/><td>QL</td><td>50</td><td>100</td><td>200</td><td>300</td><td>300</td><td>400</td><td>400</td><td>400</td><td>400</td></tr><tr><td colspan=\"10\">trec12 0.514 0.518 0.518 0.530 0.531 0.530 0.545 0.535 0.563 *</td><td>0.523</td></tr><tr><td colspan=\"10\">robust 0.467 0.470 0.463 0.469 0.468 0.472 0.465 0.475 0.517 *</td><td>0.476</td></tr><tr><td>web</td><td colspan=\"8\">0.216 0.227 0.229 0.230 0.232 0.218 0.216 0.234</td><td colspan=\"2\">0.236 0.258 *</td></tr></table>",
"num": null
}
}
}
}