{
"paper_id": "N09-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:11.281447Z"
},
"title": "Hierarchical Dirichlet Trees for Information Retrieval",
"authors": [
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Simon Fraser University",
"location": {}
},
"email": "ghaffar1@cs.sfu.ca"
},
{
"first": "Yee",
"middle": [],
"last": "Whye Teh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Neuroscience University College London",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a principled probabilisitc framework which uses trees over the vocabulary to capture similarities among terms in an information retrieval setting. This allows the retrieval of documents based not just on occurrences of specific query terms, but also on similarities between terms (an effect similar to query expansion). Additionally our principled generative model exhibits an effect similar to inverse document frequency. We give encouraging experimental evidence of the superiority of the hierarchical Dirichlet tree compared to standard baselines.",
"pdf_parse": {
"paper_id": "N09-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a principled probabilisitc framework which uses trees over the vocabulary to capture similarities among terms in an information retrieval setting. This allows the retrieval of documents based not just on occurrences of specific query terms, but also on similarities between terms (an effect similar to query expansion). Additionally our principled generative model exhibits an effect similar to inverse document frequency. We give encouraging experimental evidence of the superiority of the hierarchical Dirichlet tree compared to standard baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information retrieval (IR) is the task of retrieving, given a query, the documents relevant to the user from a large quantity of documents (Salton and McGill, 1983) . IR has become very important in recent years, with the proliferation of large quantities of documents on the world wide web. Many IR systems are based on some relevance score function R(j, q) which returns the relevance of document j to query q. Examples of such relevance score functions include term frequency-inverse document frequency (tf-idf) and Okapi BM25 (Robertson et al., 1992) .",
"cite_spans": [
{
"start": 139,
"end": 164,
"text": "(Salton and McGill, 1983)",
"ref_id": "BIBREF13"
},
{
"start": 530,
"end": 554,
"text": "(Robertson et al., 1992)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides the effect that documents containing more query terms should be more relevant (term frequency), the main effect that many relevance scores try to capture is that of inverse document frequency: the importance of a term is inversely related to the number of documents that it appears in, i.e. the popularity of the term. This is because popular terms, e.g. common and stop words, are often uninformative, while rare terms are often very informative. Another important effect is that related or co-occurring terms are often useful in determining the relevance of documents. Because most relevance scores do not capture this effect, IR systems resort to techniques like query expansion which includes synonyms and other morphological forms of the original query terms in order to improve retrieval results; e.g. (Riezler et al., 2007; Metzler and Croft, 2007) .",
"cite_spans": [
{
"start": 816,
"end": 838,
"text": "(Riezler et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 839,
"end": 863,
"text": "Metzler and Croft, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we explore a probabilistic model for IR that simultaneously handles both effects in a principled manner. It builds upon the work of (Cowans, 2004) who proposed a hierarchical Dirichlet document model. In this model, each document is modeled using a multinomial distribution (making the bag-of-words assumption) whose parameters are given Dirichlet priors. The common mean of the Dirichlet priors is itself assumed random and given a Dirichlet hyperprior. (Cowans, 2004) showed that the shared mean parameter induces sharing of information across documents in the corpus, and leads to an inverse document frequency effect. We generalize the model of (Cowans, 2004) by replacing the Dirichlet distributions with Dirichlet tree distributions (Minka, 2003) , thus we call our model the hierarchical Dirichlet tree. Related terms are placed close by in the vocabulary tree, allowing the model to take this knowledge into account when determining document relevance. This makes it unnecessary to use ad-hoc query expansion methods, as related words such as synonyms will be taken into account by the retrieval rule. The structure of the tree is learned from data in an unsupervised fashion, us-ing a variety of agglomerative clustering techniques.",
"cite_spans": [
{
"start": 469,
"end": 483,
"text": "(Cowans, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 663,
"end": 677,
"text": "(Cowans, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 753,
"end": 766,
"text": "(Minka, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We review the hierarchical Dirichlet document (HDD) model in section 2, and present our proposed hierarchical Dirichlet tree (HDT) document model in section 3. We describe three algorithms for constructing the vocabulary tree in section 4, and give encouraging experimental evidence of the superiority of the hierarchical Dirichlet tree compared to standard baselines in section 5. We conclude the paper in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The probabilistic approach to IR assumes that each document in a collection can be modeled probabilistically. Given a query q, it is further assumed that relevant documents j are those with highest generative probability p(q|j) for the query. Thus given q the relevance score is R(j, q) = p(q|j) and the documents with highest relevance are returned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "Assume that each document is a bag of words, with document j modeled as a multinomial distribution over the words in j. Let V be the terms in the vocabulary, n jw be the number of occurrences of term w \u2208 V in document j, and \u03b8 flat jw be the probability of w occurring in document j (the superscript \"flat\" denotes a flat Dirichlet as opposed to our proposed Dirichlet tree). (Cowans, 2004) assumes the following hierarchical Bayesian model for the document collection:",
"cite_spans": [
{
"start": 376,
"end": 390,
"text": "(Cowans, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "\u03b8 flat 0 = (\u03b8 flat 0w ) w\u2208V \u223c Dirichlet(\u03b3u) (1) \u03b8 flat j = (\u03b8 flat jw ) w\u2208V \u223c Dirichlet(\u03b1\u03b8 flat 0 ) n j = (n jw ) w\u2208V \u223c Multinomial(\u03b8 flat j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "In the above, bold face a = (a w ) w\u2208V means that a is a vector with |V | entries indexed by w \u2208 V , and u is a uniform distribution over V . The generative process is as follows (Figure 1(a) ). First a vector \u03b8 flat 0 is drawn from a symmetric Dirichlet distribution with concentration parameter \u03b3. Then we draw the parameters \u03b8 flat j for each document j from a common Dirichlet distribution with mean \u03b8 flat 0 and concentration parameter \u03b1. Finally, the term frequencies of the document are drawn from a multinomial distribution with parameters \u03b8 flat j . The insight of (Cowans, 2004) is that because the common mean parameter \u03b8 flat 0 is random, it induces dependencies across the document models in the collection, and this in turn is the mechanism for information sharing among documents. (Cowans, 2004) proposed a good estimate of \u03b8 flat 0 :",
"cite_spans": [
{
"start": 574,
"end": 588,
"text": "(Cowans, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 796,
"end": 810,
"text": "(Cowans, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 179,
"end": 191,
"text": "(Figure 1(a)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
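The generative process just described can be forward-sampled in a few lines. A minimal sketch, not from the paper: the vocabulary, document sizes, and hyperparameter values below are illustrative assumptions, and `sample_dirichlet` draws a Dirichlet via normalized gamma variates.

```python
import random

def sample_dirichlet(params, rng):
    # A Dirichlet draw is a vector of gamma(a_i, 1) draws, normalized.
    g = [rng.gammavariate(a, 1.0) for a in params]
    s = sum(g)
    return [x / s for x in g]

def generate_corpus(vocab, n_docs, doc_len, gamma=1.0, alpha=10.0, seed=0):
    """Forward-sample the flat HDD generative process (Eq. 1).

    Hypothetical hyperparameter values; returns one term-count dict per document.
    """
    rng = random.Random(seed)
    V = len(vocab)
    # Corpus-level mean: symmetric Dirichlet with concentration gamma.
    theta0 = sample_dirichlet([gamma / V] * V, rng)
    docs = []
    for _ in range(n_docs):
        # Document model: Dirichlet centered on theta0 with concentration alpha.
        theta_j = sample_dirichlet([alpha * t for t in theta0], rng)
        counts = {}
        for _ in range(doc_len):
            w = rng.choices(vocab, weights=theta_j)[0]
            counts[w] = counts.get(w, 0) + 1
        docs.append(counts)
    return docs
```

Because all documents share the random mean θ^flat_0, terms that are globally popular under θ^flat_0 appear in many sampled documents, which is exactly the coupling that later yields the inverse document frequency effect.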
{
"text": "u n jw n j J \u03b8 flat j \u03b8 flat 0 \u03b3 \u03b1 \u03b8 k 0 \u03b8 k j J u k \u03b3 k \u03b1 k (a) (b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "\u03b8 flat 0w = \u03b3/|V | + n 0w \u03b3 + w\u2208V n 0w (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "where n 0w is simply the number of documents containing term w, i.e. the document frequency. Integrating out the document parameters \u03b8 flat j , we see that the probability of query q being generated from document j is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "p(q|j) = x\u2208q \u03b1\u03b8 flat 0x + n jx \u03b1 + w\u2208V n jw (3) = Const \u2022 x\u2208q Const + n jx \u03b3/|V |+n 0x \u03b1 + w\u2208V n jw",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
{
"text": "Where Const are terms not depending on j. We see that n jx is term frequency, its denominator \u03b3/|V | + n 0x is an inverse document frequency factor, and \u03b1 + w\u2208V n jw normalizes for document length. The inverse document frequency factor is directly related to the shared mean parameter, in that popular terms x will have high \u03b8 flat 0x value, causing all documents to assign higher probability to x, and down weighting the term frequency. This effect will be inherited by our model in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Document Model",
"sec_num": "2"
},
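Equation (3) can be computed directly from term counts. A minimal sketch assuming documents are given as term-frequency `Counter`s; the function and parameter names are our own, not the paper's.

```python
import math
from collections import Counter

def flat_relevance(query, doc_counts, all_doc_counts, gamma=1.0, alpha=10.0):
    """Log relevance score log p(q|j) for the flat HDD model (Eq. 3).

    doc_counts: Counter of term frequencies n_jw for document j.
    all_doc_counts: list of Counters for the whole collection, used to
    estimate theta0 via document frequencies (Eq. 2).
    """
    vocab = set()
    for c in all_doc_counts:
        vocab.update(c)
    V = len(vocab)
    # n_0w = number of documents containing w (document frequency).
    n0 = Counter()
    for c in all_doc_counts:
        for w in c:
            n0[w] += 1
    total_n0 = sum(n0.values())
    doc_len = sum(doc_counts.values())
    log_p = 0.0
    for x in query:
        theta0x = (gamma / V + n0[x]) / (gamma + total_n0)  # Eq. 2
        log_p += math.log(alpha * theta0x + doc_counts[x])
        log_p -= math.log(alpha + doc_len)
    return log_p
```

Ranking documents by this score reproduces the term-frequency, inverse-document-frequency, and length-normalization effects discussed above.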
{
"text": "Apart from the constraint that the parameters should sum to one, the Dirichlet priors in the HDD model do not impose any dependency among the parameters of the resulting multinomial. In other words, the document models cannot capture the notion that related terms tend to co-occur together. For example, this model cannot incorporate the knowledge that if the word 'computer' is seen in a document, it is likely to observe the word 'software' in the same document. We relax the independence assumption of the Dirichlet distribution by using Dirichlet tree distributions (Minka, 2003) , which can capture some dependencies among the resulting parameters. This allows relationships among terms to be modeled, and we will see that it improves retrieval performance.",
"cite_spans": [
{
"start": 570,
"end": 583,
"text": "(Minka, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Dirichlet Trees",
"sec_num": "3"
},
{
"text": "Let us assume that we have a tree over the vocabulary whose leaves correspond to vocabulary terms. Each internal node k of the tree has a multinomial distribution over its children C(k). Words are drawn by starting at the root of the tree, recursively picking a child l \u2208 C(k) whenever we are in an internal node k, until we reach a leaf of the tree which corresponds to a vocabulary term (see Figure 2 (b)). The Dirichlet tree distribution is the product of Dirichlet distributions placed over the child probabilities of each internal node, and serves as a (dependent) prior over the parameters of multinomial distributions over the vocabulary (the leaves).",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 402,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Our model generalizes the HDD model by replacing the Dirichlet distributions in (1) by Dirichlet tree distributions. At each internal node k, define a hierarchical Dirichlet prior over the choice of the children:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 0k = (\u03b8 0l ) l\u2208C(k) \u223c Dirichlet(\u03b3 k u k )",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "\u03b8 jk = (\u03b8 jl ) l\u2208C(k) \u223c Dirichlet(\u03b1 k \u03b8 0k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "where u k is a uniform distribution over the children of node k, and each internal node has its own hyperparameters \u03b3 k and \u03b1 k . \u03b8 jl is the probability of choosing child l if we are at internal node k. If the tree is degenerate with just one internal node (the root) and all leaves are direct children of the root we recover the \"flat\" HDD model in the previous section. We call our model the hierarchical Dirichlet tree (HDT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Given a term, the path from the root to the corresponding leaf is unique. Thus given the term frequencies n j of document j as defined in (1), the number of times n jl child l \u2208 C(k) was picked at node k is known and fixed. The probability of all words in document j, given the parameters, is then a product of multinomials probabilities over internal nodes k:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(n j |{\u03b8 jk }) = k n jk ! Q l\u2208C(k) n jl ! l\u2208C(k) \u03b8 n jl jl",
"eq_num": "(5)"
}
],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "The probability of the documents, integrating out the \u03b8 jk 's, is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "p({n j }|{\u03b8 0k }) = (6) j k n jk ! Q l\u2208C(k) n jl ! \u0393(\u03b1 k ) \u0393(\u03b1 k +n jk ) l\u2208C(k) \u0393(\u03b1 k \u03b8 0l +n jl ) \u0393(\u03b1 k \u03b8 0l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "The probability of a query q under document j, i.e. the relevance score, follows from 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(q|j) = x\u2208q (kl) \u03b1 k \u03b8 0l +n jl \u03b1 k +n jk",
"eq_num": "(7)"
}
],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "where the second product is over pairs (kl) where k is a parent of l on the path from the root to x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
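Equation (7) walks the unique root-to-leaf path for each query term, multiplying one smoothed child-selection probability per edge. A minimal sketch with a hand-built tree; the dictionary-based tree representation and names are our own assumptions.

```python
import math

def hdt_relevance(query, parent, theta0, alpha, n_child, n_node):
    """Log relevance log p(q|j) under the hierarchical Dirichlet tree (Eq. 7).

    parent[l]  -> parent node of l (None for the root)
    theta0[l]  -> global probability of choosing child l at its parent
    alpha[k]   -> concentration at internal node k
    n_child[l] -> n_jl, times child l was picked in document j
    n_node[k]  -> n_jk, total picks at internal node k in document j
    """
    log_p = 0.0
    for x in query:
        l = x
        # Accumulate one factor per (parent k, child l) edge on the path to x.
        while parent[l] is not None:
            k = parent[l]
            log_p += math.log(alpha[k] * theta0[l] + n_child[l])
            log_p -= math.log(alpha[k] + n_node[k])
            l = k
    return log_p
```

For a flat tree (all leaves children of the root) this reduces exactly to the flat HDD score of Equation (3).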
{
"text": "The hierarchical Dirichlet tree model we proposed has a large number of parameters and hyperparameters (even after integrating out the \u03b8 jk 's), since the vocabulary trees we will consider later typically have large numbers of internal nodes. This over flexibility might lead to overfitting or to parameter regimes that do not aid in the actual task of IR. To avoid both issues, we constrain the hierarchical Dirichlet tree to be centered over the flat hierarchical Dirichlet document model, and allow it to learn only the \u03b1 k hyperparameters, integrating out the \u03b8 jk parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "We set {\u03b8 0k }, the hyperparameters of the global tree, so that it induces the same distribution over vocabulary terms as \u03b8 flat 0 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 0l = \u03b8 flat 0l \u03b8 0k = l\u2208C(k) \u03b8 0l",
"eq_num": "(8)"
}
],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "The hyperparameters of the local trees \u03b1 k 's are estimated using maximum a posteriori learning with likelihood given by (6), and a gamma prior with informative parameters. The density function of a Gamma(a, b) distribution is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "g(x; a, b) = x a\u22121 b a e \u2212bx \u0393(a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "where the mode happens at x = a\u22121 b . We set the mode of the prior such that the hierarchical Dirichlet tree reduces to the hierarchical Dirichlet document model at these values:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "\u03b1 flat l = \u03b1\u03b8 flat 0l \u03b1 flat k = l\u2208C(k) \u03b1 flat l (9) \u03b1 k \u223c Gamma(b\u03b1 flat k + 1, b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "and b > 0 is an inverse scale hyperparameter to be tuned, with large values giving a sharp peak around \u03b1 flat k . We tried a few values 1 of b and have found that the results we report in the next section are not sensitive to b. This prior is constructed such that if there is insufficient information in (6) the MAP value will simply default back to the hierarchical Dirichlet document model. We used LBFGS 2 which is a gradient based optimization method to find the MAP values, where the gradient of the likelihood part of the objective function (6) is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "\u2202 log p({n j }|{\u03b8 0j }) \u2202\u03b1 k = j \u03a8(\u03b1 k ) \u2212 \u03a8(\u03b1 k + n jk ) + l\u2208C(k) \u03b8 0l \u03a8(\u03b1 k \u03b8 0l + n jl ) \u2212 \u03a8(\u03b1 k \u03b8 0l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
{
"text": "where \u03a8(x) := \u2202 log \u0393(x)/\u2202x is the digamma function. Because each \u03b1 k can be optimized separately, the optimization is very fast (approximately 15-30 minutes in the experiments to follow on a Linux machine with 1.8 GH CPU speed).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "3.2"
},
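The per-node likelihood terms of Equation (6) and the gradient above can be written with the log-gamma and digamma functions. A sketch in pure Python (a standard series approximation for Ψ, to avoid assuming SciPy is available); the data layout, a list of `(n_jk, {l: n_jl})` pairs per document, is our own.

```python
import math

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series (x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def loglik_k(alpha_k, theta0, docs):
    """Terms of Eq. (6) involving one internal node k (constants dropped).

    docs: list of (n_jk, {l: n_jl}) pairs, one per document.
    """
    s = 0.0
    for n_jk, n_jl in docs:
        s += math.lgamma(alpha_k) - math.lgamma(alpha_k + n_jk)
        for l, c in n_jl.items():
            s += math.lgamma(alpha_k * theta0[l] + c) - math.lgamma(alpha_k * theta0[l])
    return s

def grad_alpha_k(alpha_k, theta0, docs):
    """Gradient of loglik_k with respect to alpha_k (the formula in the text)."""
    g = 0.0
    for n_jk, n_jl in docs:
        g += digamma(alpha_k) - digamma(alpha_k + n_jk)
        for l, c in n_jl.items():
            g += theta0[l] * (digamma(alpha_k * theta0[l] + c) - digamma(alpha_k * theta0[l]))
    return g
```

Because the likelihood decomposes over internal nodes, each α_k can be handed to the optimizer independently, which is why the overall fit is fast.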
{
"text": "The structure of the vocabulary tree plays an important role in the quality of the HDT document model, Merge the two clusters with highest similarity, resulting in one less cluster 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Tree Structure Learning",
"sec_num": "4"
},
{
"text": "If there still are unincluded words, pick one and place it in a singleton cluster, resulting in one more cluster 5: until all words have been included and there is only one cluster left since it encapsulates the similarities among words captured by the model. In this paper we explored using trees learned in an unsupervised fashion from the training corpus. The three methods are all agglomerative clustering algorithms (Duda et al., 2000) with different similarity functions. Initially each vocabulary word is placed in its own cluster; each iteration of the algorithm finds the pair of clusters with highest similarity and merges them, continuing until only one cluster is left. The sequence of merges determines a binary tree with vocabulary words as its leaves.",
"cite_spans": [
{
"start": 421,
"end": 440,
"text": "(Duda et al., 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Tree Structure Learning",
"sec_num": "4"
},
{
"text": "Using a heap data structure, this basic agglomerative clustering algorithm requires O(n 2 log(n) + sn 2 ) computations where n is the size of the vocabulary and s is the amount of computation needed to compute the similarity between two clusters. Typically the vocabulary size n is large; to speed up the algorithm, we use a greedy version described in Algorithm 1 which restricts the number of cluster candidates to at most m \u226a n. This greedy version is faster with complexity O(nm(log m + s)). In the experiments we used m = 500.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Tree Structure Learning",
"sec_num": "4"
},
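The greedy variant can be sketched directly: keep at most m active candidate clusters, merge the most similar pair, and top the pool back up from the unincluded words. A simplified illustration of Algorithm 1 (no heap, O(m^2) pair scans per step); the nested-tuple tree output and the function names are our own.

```python
def greedy_agglomerative(words, similarity, m=500):
    """Greedy agglomerative clustering keeping at most m active candidates.

    similarity(c1, c2) scores two clusters (frozensets of words).
    Returns the merge tree as nested tuples with words at the leaves.
    """
    words = list(words)
    active = [(frozenset([w]), w) for w in words[:m]]  # (cluster, subtree) pairs
    pending = words[m:]
    while len(active) > 1 or pending:
        if len(active) > 1:
            # Merge the most similar pair among the active candidates.
            best = None
            for i in range(len(active)):
                for j in range(i + 1, len(active)):
                    s = similarity(active[i][0], active[j][0])
                    if best is None or s > best[0]:
                        best = (s, i, j)
            _, i, j = best
            (ci, ti), (cj, tj) = active[i], active[j]
            active = [a for idx, a in enumerate(active) if idx not in (i, j)]
            active.append((ci | cj, (ti, tj)))
        if pending and len(active) < m:
            # Refill the candidate pool with an unincluded word.
            active.append((frozenset([pending.pop()]), pending and None or active[-1][1]))
            active[-1] = (active[-1][0], next(iter(active[-1][0])))
    return active[0][1]
```

The sequence of merges determines the binary vocabulary tree used by HDT.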
{
"text": "Distributional clustering (Dcluster) (Pereira et al., 1993) measures similarity among words in terms of the similarity among their local contexts. Each word is represented by the frequencies of various words in a window around each occurrence of the word. The similarity between two words is computed to be a symmetrized KL divergence between the distributions over neighboring words associated with the two words. For a cluster of words the neighboring words are the union of those associated with each word in the cluster. Dcluster has been used extensively in text classification (Baker and McCallum, 1998) .",
"cite_spans": [
{
"start": 37,
"end": 59,
"text": "(Pereira et al., 1993)",
"ref_id": "BIBREF9"
},
{
"start": 583,
"end": 609,
"text": "(Baker and McCallum, 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Tree Structure Learning",
"sec_num": "4"
},
{
"text": "Probabilistic hierarchical clustering (Pcluster) (Friedman, 2003) . Dcluster associates each word with its local context, as a result it captures both semantic and syntactic relationships among words. Pcluster captures more relevant semantic relationships by instead associating each word with the documents in which it appears. Specifically, each word is associated with a binary vector indexed by documents in the corpus, where a 1 means the word appears in the corresponding document. Pcluster models a cluster of words probabilistically, with the binary vectors being iid draws from a product of Bernoulli distributions. The similarity of two clusters c 1 and c 2 of words is P (c 1 \u222a c 2 )/P (c 1 )P (c 2 ), i.e. two clusters of words are similar if their union can be effectively modeled using one cluster, relative to modeling each separately. Conjugate beta priors are placed over the parameters of the Bernoulli distributions and integrated out so that the similarity scores are comparable. Brown's algorithm (Bcluster) (Brown et al., 1990) was originally proposed to build class-based language models. In the 2-gram case, words are clustered such that the class of the previous word is most predictive of the class of the current word. Thus the similarity between two clusters of words is defined to be the resulting mutual information between adjacent classes corrresponding to a sequence of words.",
"cite_spans": [
{
"start": 49,
"end": 65,
"text": "(Friedman, 2003)",
"ref_id": "BIBREF4"
},
{
"start": 1029,
"end": 1049,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Tree Structure Learning",
"sec_num": "4"
},
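The Pcluster similarity P(c1 ∪ c2)/(P(c1)P(c2)) has a closed form once the conjugate Beta priors are integrated out: each document contributes a Beta-Bernoulli marginal over how many of the cluster's words it contains. A sketch in log space; the Beta(1, 1) hyperparameters are our assumption, not specified in the text.

```python
import math

def log_marginal(vectors, a=1.0, b=1.0):
    """Log P(cluster): binary word-document vectors treated as i.i.d. draws from
    a product of Bernoullis, one per document, with Beta(a, b) priors integrated out."""
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    n = len(vectors)          # number of words in the cluster
    lp = 0.0
    for d in range(len(vectors[0])):
        k = sum(v[d] for v in vectors)  # cluster words present in document d
        lp += log_beta(a + k, b + n - k) - log_beta(a, b)
    return lp

def pcluster_similarity(c1, c2):
    """log [ P(c1 U c2) / (P(c1) P(c2)) ]: positive when one cluster models both well."""
    return log_marginal(c1 + c2) - log_marginal(c1) - log_marginal(c2)
```

Words with near-identical document-occurrence patterns score high together, which is why Pcluster groups terms that co-occur in the same documents.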
{
"text": "Trees constructed using the agglomerative hierarchical clustering algorithms described in this section suffer from a few drawbacks. Firstly, because they are binary trees they have large numbers of internal nodes. Secondly, many internal nodes are simply not informative in that the two clusters of words below a node are indistinguishable. Thirdly, Pcluster and Dcluster tend to produce long chain-like branches which significantly slows down the computation of the relevance score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Simplify Trees",
"sec_num": "4.1"
},
{
"text": "To address these issues, we considered operations to simplify trees by contracting internal edges of the tree while preserving as much of the word relationship information as possible. Let L be the set of tree leaves and \u03c4 (a) be the distance from node or edge a to the leaves: In the experiments we considered either contracting edges 3 close to the leaves \u03c4 (a) = 1 (thus removing many of the long branches described above), or edges further up the tree \u03c4 (a) \u2265 2 (preserving the informative subtrees closer to the leaves while removing many internal nodes). See Figure 2 . (Miller et al., 2004) cut the BCluster tree at a certain depth k to simplify the tree, meaning every leaf descending from a particular internal node at level k is made an immediate child of that node. They use the tree to get extra features for a discriminative model to tackle the problem of sparsity-the features obtained from the new tree do not suffer from sparsity since each node has several words as its leaves. This technique did not work well for our application so we will not report results using it in our experiments.",
"cite_spans": [
{
"start": 576,
"end": 597,
"text": "(Miller et al., 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 565,
"end": 573,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Operations to Simplify Trees",
"sec_num": "4.1"
},
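Contracting an edge (per the paper's footnote: remove the edge and its child node, attaching the grandchildren to the parent) is straightforward on a nested tree. A sketch for the τ(a) = 1 case under our own nested-list representation; because it recurses bottom-up, it also collapses the chain-like branches mentioned above rather than contracting strictly one level.

```python
def contract_low_edges(tree):
    """Contract edges just above the leaves (tau = 1): any internal child whose
    children are all leaves is removed and its children attach to the parent.

    Trees are nested lists; leaves are strings.
    """
    if isinstance(tree, str):
        return tree
    new_children = []
    for child in tree:
        child = contract_low_edges(child)
        if isinstance(child, list) and all(isinstance(g, str) for g in child):
            new_children.extend(child)  # promote the grandchildren to this node
        else:
            new_children.append(child)
    return new_children
```

The result is a shallower, bushier tree, which shortens the root-to-leaf products in Equation (7) and speeds up relevance scoring.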
{
"text": "In this section we present experimental results on two IR datasets: Cranfield and Medline 4 . The Cranfield dataset consists of 1,400 documents and 225 queries; its vocabulary size after stemming and removing stop words is 4,227. The Medline dataset contains 1,033 documents and 30 queries with the vocabulary size of 8,800 after stemming and removing stop words. We compare HDT with the flat HDD model and Okapi BM25 (Robertson et al., 1992) . Since one of our motivations has been to 3 Contracting an edge means removing the edge and the adjacent child node and connecting the grandchildren to the parent.",
"cite_spans": [
{
"start": 418,
"end": 442,
"text": "(Robertson et al., 1992)",
"ref_id": "BIBREF12"
},
{
"start": 486,
"end": 487,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "4 Both datasets can be downloaded from http://www.dcs.gla.ac.uk/idom/ir resources/test collections. Table 1 : Average precision and Top-10 precision scores of HDT with different trees versus flat model and BM25. The statistics for each tree shows its average/maximum depth of its leaf nodes as well as the number of its total internal nodes. The bold numbers highlight the best results in the corresponding columns.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "get away from query expansion, we also compare against Okapi BM25 with query expansion. The new terms to expand each query are chosen based on Robertson-Sparck Jones weights (Robertson and Sparck Jones, 1976 ) from the pseudo relevant documents. The comparison criteria are (i) top-10 precision, and (ii) average precision.",
"cite_spans": [
{
"start": 174,
"end": 207,
"text": "(Robertson and Sparck Jones, 1976",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "All the hierarchical clustering algorithms mentioned in section 4 are used to generate trees, each of which is further post-processed by tree simplification operators described in section 4.1. We consider (i) contracting nodes at higher levels of the hierarchy (\u03c4 \u2265 2), and (ii) contracting nodes right above the leaves (\u03c4 = 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HDT vs Baselines",
"sec_num": "5.1"
},
{
"text": "The statistics of the trees before and after postprocessing are shown in Table 1 . Roughly, the Dcluster and BCluster trees do not have long chains with leaves hanging directly off them, which is why their average depths are reduced significantly by the \u03c4 \u2265 2 simplification, but not by the \u03c4 = 1 simplification. The converse is true for Pcluster: the trees have many chains with leaves hanging directly off them, which is why average depth is not reduced as much as the previous trees based on the \u03c4 \u2265 2 simplification. However the average depth is still reduced significantly compared to the original trees. Table 1 presents the performance of HDT with different trees against the baselines in terms of the top-10 and average precision (we have bold faced the performance values which are the maximum of each column). HDT with every tree outperforms significantly the flat model in both datasets. More specifically, HDT with (original) BCluster and PCluster trees significantly outperforms the three baselines in terms of both performance measure for the Cranfield. Similar trends are observed on the Medline except here the baseline Okapi BM25 with query expansion is pretty strong 5 , which is still outperformed by HDT with BCluster tree. To further highlight the differences among the methods, we have shown the precision at particular recall points on Medline dataset in Figure 4 for HDT with PCluster tree vs the baselines. As the recall increases, the precision of the PCluster tree significantly outperforms the flat model and BM25. We attribute this to the ability of PCluster tree to give high scores to documents which have words relevant to a query word (an effect similar to query expansion).",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": null
},
{
"start": 610,
"end": 617,
"text": "Table 1",
"ref_id": null
},
{
"start": 1378,
"end": 1386,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "HDT vs Baselines",
"sec_num": "5.1"
},
{
"text": "It is interesting to contrast the learned \u03b1 k 's for each of the clustering methods. These \u03b1 k 's impose cor- relations on the probabilities of the children under k in an interesting fashion. In particular, if we compare \u03b1 k to \u03b8 0k \u03b1 parent(k) , then a larger value of \u03b1 k implies that the probabilities of picking one of the children of k (from among all nodes) are positively correlated, while a smaller value of \u03b1 k implies negative correlation. Roughly speaking, this is because drawn values of \u03b8 jl for l \u2208 C(k) are more likely to be closer to uniform (relative to the flat Dirichlet) thus if we had picked one child of k we will likely pick another child of k. Figure 3 shows scatter plots of \u03b1 k values versus \u03b8 0k \u03b1 parent(k) for the internal nodes of the trees. Firstly, smaller values for both tend to be associated with lower levels of the trees, while large values are with higher levels of the trees. Thus we see that PCluster tend to have subtrees of vocabulary terms that are positively correlated with each other-i.e. they tend to co-occur in the same docu-ments. The converse is true of DCluster and BCluster because they tend to put words with the same meaning together, thus to express a particular concept it is enough to select one of the words and not to choose the rest. Figure 5 show some fragments of the actual trees including the words they placed together and \u03b1 k parameters learned by HDT model for their internal nodes. Moreover, visual inspection of the trees shows that DCluster can easily misplace words in the tree, which explains its lower performance compared to the other tree construction methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1295,
"end": 1303,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Secondly, we observed that for higher nodes of the tree (corresponding generally to larger values of \u03b1 k and \u03b8 0k \u03b1 parent(k) ), the PCluster \u03b1 k 's are smaller; thus higher levels of the tree exhibit negative correlation. This is reasonable: if the subtrees capture positively correlated words, then higher up the tree the different subtrees correspond to clusters of words that do not co-occur together, i.e. are negatively correlated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "We presented a hierarchical Dirichlet tree model for information retrieval which can inject word relationships (semantic or syntactic) as domain knowledge into a probabilistic model for information retrieval. By using trees to capture word relationships, the model is highly efficient while making use of both prior information about words and their occurrence statistics in the corpus. Furthermore, we investigated the effect of different tree construction algorithms on the model's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "On the Cranfield dataset, HDT achieves 26.85% average precision and 32.40% top-10 precision, outperforming all baselines including BM25, which gets 25.66% and 31.24% on these two measures. On the Medline dataset, HDT is competitive with BM25 with Query Expansion and outperforms all other baselines. These encouraging results show the benefits of HDT as a principled probabilistic model for information retrieval. (Figure 5: Small parts of the trees learned by the clustering algorithms for the Cranfield dataset, where the learned \u03b1 k for each internal node is written next to it.)",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "An interesting avenue of research is to construct the vocabulary tree based on WordNet, as a way to inject independent prior knowledge into the model. However, WordNet has a low-coverage problem: some words in the data do not exist in it. One solution to this problem is to combine the trees generated by the clustering algorithms described in this paper with WordNet, which we leave as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Of the form 10^i for i \u2208 {\u22122, \u22121, 0, 1}. We used a C++ re-implementation of Jorge Nocedal's LBFGS library (Nocedal, 1980) from the ALGLIB website: http://www.alglib.net.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we tuned the parameters of the baselines (BM25 with and without query expansion) with respect to their performance on the actual retrieval task, which in a sense makes them appear better than they should.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Distributional clustering of words for text classification",
"authors": [
{
"first": "L",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 1998,
"venue": "SIGIR '98: Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Douglas Baker and Andrew Kachites McCallum. 1998. Distributional clustering of words for text classification. In SIGIR '98: Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 96-103.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, V. J. Della Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1990. Class-based n-gram models of natural language. Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Information retrieval using hierarchical dirichlet processes",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Cowans",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. J. Cowans. 2004. Information retrieval using hierarchical dirichlet processes. In Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval (SIGIR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pcluster: Probabilistic agglomerative clustering of gene expression profiles",
"authors": [
{
"first": "N",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Friedman. 2003. Pcluster: Probabilistic agglomerative clustering of gene expression profiles. Available from http://citeseer.ist.psu.edu/668029.html.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent concept expansion using markov random fields",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Metzler and W. Bruce Croft. 2007. Latent concept expansion using markov random fields. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Name tagging with word clusters and discriminative training",
"authors": [
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Guinness",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zamanian",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics -Human Language Technologies conference (NAACL HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Miller, J. Guinness, and A. Zamanian. 2004. Name tagging with word clusters and discriminative training. In Proceedings of North American Chapter of the Association for Computational Linguistics - Human Language Technologies conference (NAACL HLT).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The dirichlet-tree distribution",
"authors": [
{
"first": "T",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Minka. 2003. The dirichlet-tree distribution. Available from http://research.microsoft.com/ minka/papers/dirichlet/minka-dirtree.pdf.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Updating quasi-newton matrices with limited storage. Mathematics of Computation",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nocedal. 1980. Updating quasi-newton matrices with limited storage. Mathematics of Computation, 35.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributional clustering of english words",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1993,
"venue": "31st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of english words. In 31st Annual Meeting of the Association for Computational Linguistics, pages 183-190.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical machine translation for query expansion in answer retrieval",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Vasserman",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "Vibhu",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler, Alexander Vasserman, Ioannis Tsochantaridis, Vibhu Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Relevance weighting of search terms",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of the American Society for Information Science",
"volume": "27",
"issue": "3",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Robertson and K. Sparck Jones. 1976. Relevance weighting of search terms. Journal of the American Society for Information Science, 27(3):129-146.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Okapi at trec",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hancock-Beaulieu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gull",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 1992,
"venue": "Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Robertson, S. Walker, M. Hancock-Beaulieu, A. Gull, and M. Lau. 1992. Okapi at trec. In Text REtrieval Conference, pages 21-30.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An Introduction to Modern Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton and M.J. McGill. 1983. An Introduction to Modern Information Retrieval. McGraw-Hill, New York.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "(a) The graphical model representation of the hierarchical Dirichlet document model. (b) The global tree and local trees in hierarchical Dirichlet tree document model. Triangles stand for trees with the same structure, but different parameters at each node. The generation of words in each document is not shown.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "\u03c4 (a) := min l\u2208L #{edges between a and l} (Eq. 10). Here \u03c4 (root) = 2, while \u03c4 (v) = 1 for shaded vertices v. Contracting a and b results in both children of b being direct children of a while b is removed.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "The plots showing the contribution of internal nodes in trees constructed by the three clustering algorithms for the Cranfield dataset. In each plot, a point represents an internal node, showing a positive exponent in the node's contribution (i.e. positive correlation among its children) if the point is below the x = y line. From the left plot to the right, the fraction of nodes below the line is 0.9044, 0.7977, and 0.3344, out of a total of 4,226 internal nodes.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "The precision of all methods at particular recall points for the Medline dataset.",
"type_str": "figure",
"uris": null
}
}
}
}