{
"paper_id": "W07-0202",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:38:49.072299Z"
},
"title": "Multi-level Association Graphs -A New Graph-Based Model for Information Retrieval",
"authors": [
{
"first": "Hans",
"middle": [
"Friedrich"
],
"last": "Witschel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NLP department University of Leipzig",
"location": {
"postBox": "P.O. Box 100920",
"postCode": "04009",
"settlement": "Leipzig"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper introduces multi-level association graphs (MLAGs), a new graph-based framework for information retrieval (IR). The goal of that framework is twofold: First, it is meant to be a meta model of IR, i.e. it subsumes various IR models under one common representation. Second, it allows to model different forms of search, such as feedback, associative retrieval and browsing at the same time. It is shown how the new integrated model gives insights and stimulates new ideas for IR algorithms. One of these new ideas is presented and evaluated, yielding promising experimental results.",
"pdf_parse": {
"paper_id": "W07-0202",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper introduces multi-level association graphs (MLAGs), a new graph-based framework for information retrieval (IR). The goal of that framework is twofold: First, it is meant to be a meta model of IR, i.e. it subsumes various IR models under one common representation. Second, it allows to model different forms of search, such as feedback, associative retrieval and browsing at the same time. It is shown how the new integrated model gives insights and stimulates new ideas for IR algorithms. One of these new ideas is presented and evaluated, yielding promising experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Developing formal models for information retrieval has a long history. A model of information retrieval \"predicts and explains what a user will find relevant given the user query\" (Hiemstra, 2001) . Most IR models are firmly grounded in mathematics and thus provide a formalisation of ideas that facilitates discussion and makes sure that the ideas can be implemented. More specifically, most IR models provide a so-called retrieval function f (q, d) , which returns -for given representations of a document d and of a user information need q -a so-called retrieval status value by which documents can be ranked according to their presumed relevance w.r.t. to the query q.",
"cite_spans": [
{
"start": 180,
"end": 196,
"text": "(Hiemstra, 2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to understand the commonalities and differences among IR models, this paper introduces the notion of meta modeling. Since the word \"meta model\" is perhaps not standard terminology in IR, it should be explained what is meant by it: a meta model is a model or framework that subsumes other IR models, such that they are derived by specifying certain parameters of the meta model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In terms of IR theory, such a framework conveys what is common to all IR models by subsuming them. At the same time, the differences between models are highlighted in a conceptually simple way by the different values of parameters that have to be set in order to arrive at this subsumption. It will be shown that a graph-based representation of IR data is very well suited to this problem. IR models concentrate on the matching process, i.e. on measuring the degree of overlap between a query q and a document representation d. On the other hand, there are the problems of finding suitable representations for documents (indexing) and for users' information needs (query formulation). Since users are often not able to adequately state their information need, some interactive and associative procedures have been developed by IR researchers that help to overcome this problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Associative retrieval, i.e. retrieving information which is associated to objects known or suspected to be relevant to the user -e.g. query terms or documents that have been retrieved already.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Feedback, another method for boosting recall, either relies on relevance information given by the user (relevance feedback) or assumes top-ranked documents to be relevant (pseudo feedback) and learns better query formulations from this information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Browsing, i.e. exploring a document collection interactively by following links between objects such as documents, terms or concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Again, it will be shown that -using a graph-based representation -these forms of search can be subsumed easily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the literal sense of the definition above, there is a rather limited number of meta models for IR, the most important of which will be described here very shortly. Most research about how to subsume various IR models in a common framework has been done in the context of Bayesian networks and probabilistic inference (Turtle and Croft, 1990) . In this approach, models are subsumed by specifying certain probability distributions. In (Wong and Yao, 1995) , the authors elaborately show how all major IR models known at that time can be subsumed using probabilistic inference. Language modeling, which was not known then was later added to the list by (Metzler and Croft, 2004) .",
"cite_spans": [
{
"start": 320,
"end": 344,
"text": "(Turtle and Croft, 1990)",
"ref_id": "BIBREF21"
},
{
"start": 437,
"end": 457,
"text": "(Wong and Yao, 1995)",
"ref_id": "BIBREF25"
},
{
"start": 667,
"end": 679,
"text": "Croft, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meta modeling",
"sec_num": "2.1"
},
{
"text": "Another graph-based meta modeling approach uses the paradigm of spreading activation (SA) as a simple unifying framework. Given semantic knowledge in the form of a (directed) graph, the idea of spreading activation is that a measure of relevance -w.r.t. a current focus of attention -is spread over the graph's edges in the form of activation energy, yielding for each vertex in the graph a degree of relatedness with that focus (cf. (Anderson and Pirolli, 1984) ). It is easy to see how this relates to IR: using a graph that contains vertices for both terms and documents and appropriate links between the two, we can interpret a query as a focus of attention and spread that over the network in order to rank documents by their degree of relatedness to that focus.",
"cite_spans": [
{
"start": 434,
"end": 462,
"text": "(Anderson and Pirolli, 1984)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meta modeling",
"sec_num": "2.1"
},
{
"text": "A very general introduction of spreading activation as a meta model is given in the early work by (Preece, 1981) All later models are hence special cases of Preece's work, including the multi-level association graphs introduced in section 3. Preece's model subsumes the Boolean retrieval model, coordination level matching and vector space processing.",
"cite_spans": [
{
"start": 98,
"end": 112,
"text": "(Preece, 1981)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meta modeling",
"sec_num": "2.1"
},
{
"text": "Finally, an interesting meta model is described by (van Rijsbergen, 2004) who uses a Hilbert space as an information space and connects the geometry of that space to probability and logics. In particular, he manages to give the familiar dot product between query and document vector a probabilistic interpretation.",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(van Rijsbergen, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meta modeling",
"sec_num": "2.1"
},
{
"text": "The spreading activation paradigm is also often used for associative retrieval. The idea is to reach vertices in the graph that are not necessarily directly linked to query nodes, but are reachable from query nodes via a large number of short paths along highly weighted edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based models for associative retrieval and browsing",
"sec_num": "2.2"
},
{
"text": "Besides (Preece, 1981) , much more work on SA was done, a good survey of which can be found in (Crestani, 1997) . A renewed interest in SA was later triggered with the advent of the WWW where hyperlinks form a directed graph. In particular, variants of the PageRank (Brin and Page, 1998) algorithm that bias a random searcher towards some starting nodes (e.g. an initial result set of documents) bear close resemblance to SA (Richardson and Domingos, 2002; White and Smyth, 2003) .",
"cite_spans": [
{
"start": 8,
"end": 22,
"text": "(Preece, 1981)",
"ref_id": "BIBREF13"
},
{
"start": 95,
"end": 111,
"text": "(Crestani, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 266,
"end": 287,
"text": "(Brin and Page, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 425,
"end": 456,
"text": "(Richardson and Domingos, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 457,
"end": 479,
"text": "White and Smyth, 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based models for associative retrieval and browsing",
"sec_num": "2.2"
},
{
"text": "Turning to browsing, we can distinguish three types of browsing w.r.t. to the vertices of the graph: index term browsing, which supports the user in formulating his query by picking related terms (Doyle, 1961; Beaulieu, 1997) , document browsing which serves to expand result sets by allowing access to similar documents or by supporting web browsing (Smucker and Allan, 2006; Olston and Chi, 2003) and combined approaches where both index terms and documents are used simultaneously for browsing.",
"cite_spans": [
{
"start": 196,
"end": 209,
"text": "(Doyle, 1961;",
"ref_id": "BIBREF7"
},
{
"start": 210,
"end": 225,
"text": "Beaulieu, 1997)",
"ref_id": "BIBREF2"
},
{
"start": 351,
"end": 376,
"text": "(Smucker and Allan, 2006;",
"ref_id": "BIBREF20"
},
{
"start": 377,
"end": 398,
"text": "Olston and Chi, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based models for associative retrieval and browsing",
"sec_num": "2.2"
},
{
"text": "In this last category, many different possibilities arise for designing interfaces. A common guiding principle of many graph-based browsing approahces is that of interactive spreading activation (Oddy, 1977; Croft and Thompson, 1987) . Another approach, which is very closely related to MLAGs, is a multi-level hypertext (MLHT), as proposed in (Agosti and Crestani, 1993 ) -a data structure consisting of three levels, for documents, index terms and concepts. Each level contains objects and links among them. There are also connections between objects of two adjacent levels. An MLHT is meant to be used for interactive query formulation, browsing and search, although (Agosti and Crestani, 1993) give no precise specification of the processing procedures.",
"cite_spans": [
{
"start": 195,
"end": 207,
"text": "(Oddy, 1977;",
"ref_id": "BIBREF11"
},
{
"start": 208,
"end": 233,
"text": "Croft and Thompson, 1987)",
"ref_id": "BIBREF6"
},
{
"start": 344,
"end": 370,
"text": "(Agosti and Crestani, 1993",
"ref_id": "BIBREF0"
},
{
"start": 670,
"end": 697,
"text": "(Agosti and Crestani, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based models for associative retrieval and browsing",
"sec_num": "2.2"
},
{
"text": "Compared to Preece's work, the MLAG framework makes two sorts of modifications in order to reach the goals formulated in the introduction: in order to subsume more IR models, the flexibility and power of Preece's model is increased by adding real-valued edge weights. On the other hand, a clearer distinction is made between local and global information through the explicit introduction of \"level graphs\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of this work",
"sec_num": "2.3"
},
{
"text": "With the introduction of levels, the MLAG data structure becomes very closely related to the MLHT paradigm of (Agosti and Crestani, 1993) , MLAGs, however, generalise MLHTs by allowing arbitrary types of levels, not only the three types proposed in (Agosti and Crestani, 1993) . Additionally, links in MLAGs are weighted and the spreading activation processing defined in the next section makes extensive use of these weights.",
"cite_spans": [
{
"start": 110,
"end": 137,
"text": "(Agosti and Crestani, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 249,
"end": 276,
"text": "(Agosti and Crestani, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of this work",
"sec_num": "2.3"
},
{
"text": "All in all, the new model combines the data structure of multi-level hypertexts (Agosti and Crestani, 1993) with the processing paradigm of spreading activation as proposed by Preece (Preece, 1981) , refining both with an adequate edge weighting. The framework is an attempt to be as general as necessary for subsuming all models and allowing for different forms of search, while at the same time being as specific as possible about the things that are really common to all IR models.",
"cite_spans": [
{
"start": 80,
"end": 107,
"text": "(Agosti and Crestani, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 183,
"end": 197,
"text": "(Preece, 1981)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of this work",
"sec_num": "2.3"
},
{
"text": "3 The MLAG model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of this work",
"sec_num": "2.3"
},
{
"text": "Formally, the basis of a multi-level association graph (MLAG) is a union of n level graphs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data structure",
"sec_num": "3.1"
},
{
"text": "L 1 , ..., L n . Each of these n directed graphs L i = G(V L i , EL i , W L i ) consists of a set of vertices V L i , a set EL i \u2286 V L i \u00d7 V L i of edges and a func- tion W L i : EL i \u2192 R returning edge weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data structure",
"sec_num": "3.1"
},
{
"text": "In order to connect the levels, there are n \u2212 1 connecting bipartite graphs (or inverted lists) I 1,2 , ..., I n\u22121,n where each inverted list I j,j+1 consists of vertices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data structure",
"sec_num": "3.1"
},
{
"text": "V I j,j+1 = V L j \u222a V L j+1 , edges EI j,j+1 \u2286 (V L j \u00d7 V L j+1 ) \u222a (V L j+1 \u00d7 V L j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data structure",
"sec_num": "3.1"
},
{
"text": "and weights W I j,j+1 : EI j,j+1 \u2192 R. Figure 1 depicts a simple example multi-level association graph with two levels L d and L t for documents and terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data structure",
"sec_num": "3.1"
},
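The MLAG data structure described above (level graphs plus connecting inverted lists) can be sketched in a few lines; all class and variable names here are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of the MLAG data structure: a level graph stores
# weighted edges among objects of one type; an inverted list stores
# weighted edges between objects of two adjacent levels.

class LevelGraph:
    def __init__(self):
        # W_L: maps an edge (u, v) to a real-valued weight
        self.weights = {}

    def add_edge(self, u, v, w):
        self.weights[(u, v)] = w

class InvertedList:
    def __init__(self):
        # W_I: maps an edge between two adjacent levels to a weight
        self.weights = {}

    def add_edge(self, u, v, w):
        self.weights[(u, v)] = w

class MLAG:
    def __init__(self, levels, inverted_lists):
        # levels: [L_1, ..., L_n]; inverted_lists: [I_{1,2}, ..., I_{n-1,n}]
        assert len(inverted_lists) == len(levels) - 1
        self.levels = levels
        self.inverted_lists = inverted_lists

# A two-level MLAG as in Figure 1: a term level and a document level.
terms, docs = LevelGraph(), LevelGraph()
i_td = InvertedList()
i_td.add_edge("t1", "d1", 0.8)   # term t1 occurs in document d1
mlag = MLAG([terms, docs], [i_td])
```

The sketch only fixes the shape of the data; how the level-graph edges are computed (similarities, hyperlinks, co-occurrences) is left open, exactly as in the text.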
{
"text": "Inverted list Assuming that the vertices on a given level correspond to objects of the same type and vertices in different levels to objects of different types, this data structure has the following general interpretation: Each level represents associations between objects of a given type, e.g. term-term or documentdocument similarities. The inverted lists, on the other hand, represent associations between different types of objects, e.g. occurrences of terms in documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document level",
"sec_num": null
},
{
"text": "t 1 t 2 t 3 t 4 t 5 t 6 t 7 t 8 t 9 \u00a8\u00a8@ @ 2 2 2 \u00a3 \u00a3 7 7 d d i i i & & $ $ $ $ $ d5 d1 d3 d9 d8 d6 d4 d7 d2 2 2 2 d d & & $ $ $ $ $ \u00a3 \u00a3 \u00a1 \u00a1 \u00a1 \u00a1 \u00a1 \u00a1 f f f f f f f f f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document level",
"sec_num": null
},
{
"text": "The simplest version of a multi-level association graph consists of just two levels -a term level L t and a document level L d . This is the variant depicted in figure 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard retrieval",
"sec_num": "3.2.1"
},
{
"text": "The graph I td that connects L t and L d is an inverted list in the traditional sense of the word, i.e. a term is connected to all documents that it occurs in and the weight W I(t, d) of an edge (t, d) connecting term t and document d conveys the degree to which t is representative of d's content, or to which d is about t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard retrieval",
"sec_num": "3.2.1"
},
{
"text": "The level graphs L t and L d can be computed in various ways. As for documents, a straight-forward way would be to calculate document similarities, e.g. based on the number of terms shared by two documents. However, other forms of edges are possible, such as hyperlinks or citations -if available. Term associations, on the other hand, can be computed using co-occurrence information. An alternative would be to use relations from manually created thesauri or ontologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard retrieval",
"sec_num": "3.2.1"
},
{
"text": "In order to (partly) take document structure into account, it can be useful to introduce a level for document parts (e.g. headlines and/or passages) in between the term and the document level. This can be combined with text summarisation methods (cf. e.g. (Brandow et al., 1995) ) in order to give higher weights to more important passages in the inverted list connecting passages to documents.",
"cite_spans": [
{
"start": 256,
"end": 278,
"text": "(Brandow et al., 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "More levels",
"sec_num": "3.2.2"
},
{
"text": "In distributed or peer-to-peer environments, databases or peers may be modeled in a separate layer above the document level, inverted lists indicating where documents are held. Additionally, a peer's neighbours in an overlay network may be modeled by directed edges in the peer level graph. More extensions are possible and the flexibility of the MLAG framework allows for the insertion of arbitrary layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More levels",
"sec_num": "3.2.2"
},
{
"text": "The operating mode of an MLAG is based on the spreading activation principle. However, the spread of activation between nodes of two different levels is not iterated. Rather, it is carefully controlled, yet allowing non-linear modifications at some points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "In order to model spreading activation in an MLAG, we introduce an activation function A i : V L i \u2192 R which returns the so-called activation energies of vertices on a given level L i . The default value of the activation function is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "A i (v) = 0 for all vertices v \u2208 V L i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "In the following, it is assumed that the MLAG processing is invoked by activating a set of vertices A on a given level L i of the MLAG by modifying the activation function of that level so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "A i (v) = w v for each v \u2208 A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "A common example of such activation is a query being issued by a user. The initial activation is the result of the query formulation process, which selects some vertices v \u2208 A and weights them according to their presumed importance w v . This weight is then the initial activation energy of the vertex. Once we have an initial set of activated vertices, the following general procedure is executed until some stopping criterion is met:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "1. Collect activation values on current level L i , i.e. determine A i (u) for all u \u2208 V L i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "2. (Optionally) apply a transformation to the activation energies of L i -nodes, i.e. alter A i (u) by using a -possibly non-linear -transformation function or procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "3. Spread activation to the next level L i+1 along the links connecting the two levels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "A i+1 (v) = (u,v)\u2208I i,i+1 A i (u) \u2022 W I(u, v) (1) 4. Set A i (u) = 0 for all u \u2208 V L i , i.e. \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "forget\" about the old activation energies 5. (Optionally) apply a transformation to the activation energies of L i+1 -nodes (see step 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "6. Go to 1, increment i (or decrement, depending on its value and the configuration of the MLAG)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
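Steps 1, 3 and 4 of the procedure above (one linear spread of activation along the inverted list, as in equation (1)) can be sketched as follows; the function name `spread` and the dictionary representation are assumptions for illustration.

```python
# One spreading-activation step between adjacent levels, equation (1):
# A_{i+1}(v) = sum over edges (u, v) of A_i(u) * W_I(u, v).

def spread(activation, inverted_list):
    """activation: {u: A_i(u)}; inverted_list: {(u, v): W_I(u, v)}."""
    nxt = {}
    for (u, v), w in inverted_list.items():
        if u in activation:
            nxt[v] = nxt.get(v, 0.0) + activation[u] * w
    return nxt  # A_{i+1}; the old activations are simply "forgotten"

# A query activating terms t1 and t2 with unit weights:
inv = {("t1", "d1"): 0.5, ("t2", "d1"): 0.5, ("t2", "d2"): 1.0}
scores = spread({"t1": 1.0, "t2": 1.0}, inv)
# d1 collects 1*0.5 + 1*0.5 = 1.0; d2 collects 1*1.0 = 1.0
```

The optional transformations of steps 2 and 5 would simply rewrite the returned dictionary before the next spread.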
{
"text": "If we take a vector space view of this processing mode and if we identify level L i with terms and level L i+1 with documents, we can interpret the activation energies A i (u) as a query vector and the edge weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "W I(u, v) of edges arriving at vertex v \u2208 V L i+1 as a document vector for document v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "This shows that the basic retrieval function realised by steps 1, 3 and 4 of this process is a simple dot product. We will later see that retrieval functions of most IR models can actually be written in that form, provided that the initial activation of query terms and the edge weights of I i,i+1 are chosen correctly (section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "For some models, however, we additionally need the possibility to perform nonlinear transformations on result sets in order to subsume them. Steps 2 and 5 of the algorithm allow for arbitrary modifications of the activation values based on whatever evidence may be available on the current level or globallybut not in the inverted list. This will later also allow to include feedback and associative retrieval techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing paradigm",
"sec_num": "3.3"
},
{
"text": "In this section, examples will be shown that demonstrate how existing IR models of ranked retrieval 1 can be subsumed using the simple MLAG of figure 1 and the processing paradigm from the last section. This is done by specifying the following parameters of that paradigm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MLAG as a meta model",
"sec_num": "4"
},
{
"text": "1. How nodes are activated in the very first step 2. How edges of the inverted list are weighted 3. Which transformation is used in 2 and 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MLAG as a meta model",
"sec_num": "4"
},
{
"text": "For each model, the corresponding retrieval function will be given and the parameter specification will be discussed shortly. The specification of the above parameters will be given in the form of triplets activation init , edge weights, transf orm .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MLAG as a meta model",
"sec_num": "4"
},
{
"text": "In the case of the vector space model (Salton et al., 1975) , the retrieval function to be mimicked is as follows:",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector space model",
"sec_num": "4.1"
},
{
"text": "f (q, d) = t\u2208q\u2229d w tq w td (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector space model",
"sec_num": "4.1"
},
{
"text": "where w tq and w td are a term's weight in the query q and the current document d, respectively. This can be achieved by specifying the parameter triplet w tq , w td , none . This simple representation reflects the closeness of the MLAG paradigm to the vector space model that has been hinted at above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector space model",
"sec_num": "4.1"
},
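The vector space subsumption above can be made concrete with a small sketch: initial activations are the query term weights, inverted-list weights are the document term weights, and no transformation is applied, so one spread computes the dot product of equation (2). The weights below are made-up toy values.

```python
# VSM as the triplet <w_tq, w_td, none>: one spread over the inverted
# list computes f(q, d) = sum over t in q and d of w_tq * w_td.

def vsm_score(query_weights, doc_weights):
    """query_weights: {t: w_tq}; doc_weights: {(t, d): w_td}."""
    scores = {}
    for (t, d), w_td in doc_weights.items():
        if t in query_weights:
            scores[d] = scores.get(d, 0.0) + query_weights[t] * w_td
    return scores

scores = vsm_score({"graph": 2.0, "model": 1.0},
                   {("graph", "d1"): 0.5, ("model", "d1"): 1.0,
                    ("model", "d2"): 2.0})
# d1: 2*0.5 + 1*1.0 = 2.0; d2: 1*2.0 = 2.0
```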
{
"text": "For the probabilistic relevance model (Robertson and Sparck-Jones, 1976) , the MLAG has to realise the following retrieval function",
"cite_spans": [
{
"start": 38,
"end": 72,
"text": "(Robertson and Sparck-Jones, 1976)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (q, d) = i d i log p i (1 \u2212 r i ) r i (1 \u2212 p i )",
"eq_num": "(3)"
}
],
"section": "Probabilistic model",
"sec_num": "4.2"
},
{
"text": "where d i \u2208 {0, 1} indicates whether term i is contained in document d, p i is the probability that a relevant document will contain term i and r i is the probability that an irrelevant document will contain it. This retrieval function is realised by the parameter triplet log",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "4.2"
},
{
"text": "p i (1\u2212r i ) r i (1\u2212p i ) , d i , none",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "4.2"
},
{
"text": ". Now there is still the question of how the estimates of p i and r i are derived. This task involves the use of relevance information which can be gained via feedback, described in section 6.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "4.2"
},
{
"text": "The general language modeling retrieval function (cf. e.g. )) is -admittedlynot in the linear form of equation 1. But using logarithms, products can be turned into sums without changing the ranking -the logarithm being a monotonic function (note that this is what also happened in the case of the probabilistic relevance models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language models",
"sec_num": "4.3"
},
{
"text": "In particular, we will use the approach of comparing query and document language models by Kullback-Leibler divergence (KLD) ) which results in the equation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language models",
"sec_num": "4.3"
},
{
"text": "KLD(M q ||M d ) = t\u2208q P (t|M q ) log P (t|M q ) P (t|M d ) \u221d \u2212 t\u2208q P (t|M q ) log P (t|M d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language models",
"sec_num": "4.3"
},
{
"text": "where P (t|M q ) and P (t|M d ) refer to the probability that term t will be generated by the unigram language model of query q or document d, respectively. Note that we have simplified the equation by dropping a term t P (t|M q ) log P (t|M q ), which depends only on the query, not on the documents to be ranked. Now, the triplet P (t|M q ), \u2212 log P (t|M d ), t can be used to realise this retrieval function where t stands for a procedure that adds \u2212P (t|M q ) log P (t|M d ) to the document node's activation level for terms t not occurring in d and sorts documents by increasing activation values afterwards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language models",
"sec_num": "4.3"
},
{
"text": "As can be seen from the last equation above, the language model retrieval function sums over all terms in the query. Each term -regardless of whether it appears in the document d or not -contributes something that may be interpreted as a \"penalty\" for the document. The magnitude of this penalty depends on the smoothing method used (cf. )). A popular smoothing method uses so-called Dirichlet priors to estimate document language models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "P (t|M d ) = tf + \u00b5p(t|C) \u00b5 + |d| (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
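Equation (4) and the penalty argument around it can be checked numerically; the counts and the value of \u00b5 below are illustrative assumptions, not figures from the paper.

```python
import math

# Dirichlet-prior smoothing of a document language model, equation (4):
# P(t|M_d) = (tf + mu * p(t|C)) / (mu + |d|).

def p_dirichlet(tf, doc_len, p_coll, mu=2000.0):
    return (tf + mu * p_coll) / (mu + doc_len)

# A rare term (small p(t|C)) missing from a document (tf = 0) gets a
# much smaller probability than a missing common term, hence a much
# larger penalty -log P(t|M_d):
p_rare = p_dirichlet(tf=0, doc_len=500, p_coll=1e-6)
p_common = p_dirichlet(tf=0, doc_len=500, p_coll=1e-2)
penalty_rare = -math.log(p_rare)
penalty_common = -math.log(p_common)
```

This is exactly the "absence penalising" behaviour the text contrasts with the "presence rewarding" behaviour of the other models.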
{
"text": "where tf is t's frequency in d, p(t|C) is the term's relative frequency in the whole collection and \u00b5 is a free parameter. This indicates that if a rare term is missing from a document, the penalty will be large, P (t|M d ) being very small because tf = 0 and p(t|C) small. Conceptually, it is unproblematic to model the retrieval function by making I td a complete bipartite graph, i.e. specifying a (non-zero) value for P (t|M d ), even if t does not occur in d. In a practical implementation, this is not feasible, which is why we add the contribution of terms not contained in a document, i.e. \u2212P (t|M q ) log P (t|M d ), for terms that do not occur in d. 2 This transformation indicates an important difference between language modeling and all other IR models: language models penalise documents for the absence of rare (i.e. informative) terms whereas the other models reward them for the presence of these terms.",
"cite_spans": [
{
"start": 660,
"end": 661,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "These considerations suggest a combination of both approaches: starting with an arbitrary \"presence rewarding\" model -e.g. the vector space model -we may integrate the \"absence penalising\" philosophy by subtracting from a document's score, for each missing term, the contribution that one occurrence of that term would have earned (cf. (Witschel, 2006) ).",
"cite_spans": [
{
"start": 336,
"end": 352,
"text": "(Witschel, 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "For the vector space model, this yields the following retrieval function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "f (q, d) = t\u2208q\u2229d w tq w td \u2212 \u03b1 |q| t\u2208q\\d w td (tf = 1)w tq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "where \u03b1 is a free parameter regulating the relative influence of penalties, comparable to the \u00b5 parameter of language models above. Table 2 shows retrieval results for combining two weighting schemes, BM25 (Robertson et al., 1992) and Lnu.ltn (Singhal et al., 1996) , with penalties. Both of them belong to the family of tf.idf weighting schemes and can hence be regarded as representing the vector space model, although BM25 was developed out of the probabilistic model. Combining them with the idea of \"absence penalties\" works as indicated above, i.e. weights are accumulated for each document using the tf.idf -like retrieval functions. Then, from each score, the contributions that one occurrence of each missing term would have earned is subtracted. More precisely, what is subtracted consists of the usual tf.idf weight for the missing term, where tf = 1 is substituted in the tf part of the formula.",
"cite_spans": [
{
"start": 206,
"end": 230,
"text": "(Robertson et al., 1992)",
"ref_id": "BIBREF16"
},
{
"start": 243,
"end": 265,
"text": "(Singhal et al., 1996)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combining IR models",
"sec_num": "5"
},
{
"text": "Experiments were run with queries from TREC-7 and TREC-8. In order to study the effect of query length, very short queries (using only the title field of TREC queries), medium ones (using title and description fields) and long ones (using all fields) were used. Table 2 shows that both weighting schemes can be significantly improved by using penalties, especially for short queries, reaching and sometimes surpassing the performance of retrieval with language models. This holds even when the parameter \u03b1 is not tuned and confirms that interesting insights are gained from a common representation of IR models in a graph-based environment. 3 Table 2 : Mean average precision of BM25 and Lnu.ltn and their corresponding penalty schemes (+ P) for TREC-7 and TREC-8. Asterisks indicate statistically significant deviations (using a paired Wilcoxon test on a 95% confidence level) from each baseline, whereas the best run for each query length is marked with bold font. Performance of language models (LM) is given for reference, where the value of the smoothing parameter \u00b5 was set to the average document length.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 269,
"text": "Table 2",
"ref_id": null
},
{
"start": 643,
"end": 650,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.1"
},
{
"text": "In order to complete the goals stated in the introduction of this paper, this section will briefly explain how feedback, associative retrieval and browsing can be modeled within the MLAG framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different forms of search with MLAGs",
"sec_num": "6"
},
{
"text": "Using the simple term-document MLAG of figure 1, feedback can be implemented by the following procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "1. Perform steps 1 -4 of the basic processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "2. Apply a transformation to the activation values of L d -nodes, e.g. let the user pick relevant documents and set their activation to some positive constant \u03b2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "3. Perform step 3 of the basic processing with L_i = L_d and L_{i+1} = L_t, i.e. let activation flow back to term level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "4. Forget about activation levels of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "5. Apply transformation on the term level L t , e.g. apply thresholding to obtain a fixed number of expansion terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "6. Spread activation back to the document level to obtain the final retrieval status values of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "In order to instantiate a particular feedback algorithm, there are three parameters to be specified:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "\u2022 The transformation to be applied in step 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "\u2022 The weighting of document-term edges (if different from term-document edges) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "\u2022 The transformation applied in step 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "Unfortunately, due to space constraints, it is out of the scope of this paper to show how different specifications lead to well-known feedback algorithms such as Rocchio (Rocchio, 1971) or the probabilistic model above.",
"cite_spans": [
{
"start": 170,
"end": 185,
"text": "(Rocchio, 1971)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6.1"
},
{
"text": "Associative retrieval in MLAGs exploits the information encoded in level graphs: expanding queries with related terms can be realised by using the term level graph L t of a simple MLAG (cf. figure 1) in step 2 of the basic processing, whereas the expansion of document result sets takes place in step 5 on the document level L d . In order to exploit the relations encoded in the level graphs, one may again use spreading activation, but also simpler mechanisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Associative retrieval",
"sec_num": "6.2"
},
{
"text": "Since relations are used directly, dimensionality reduction techniques such as LSI cannot and need not be modeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Associative retrieval",
"sec_num": "6.2"
},
{
"text": "Since the MLAG framework is graph-based, it is easy to grasp and to be visualised, which makes it a suitable data structure for browsing. The level graphs can be used as a flat graphical representation of the data, which can be exploited directly for browsing. Depending on their information need, users can choose to browse either on the term level L t or on the document level L d and they can switch between both types of levels at any time using the inverted list I td . This applies, of course, also to passage or any other type of levels if they exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Browsing",
"sec_num": "6.3"
},
{
"text": "In this paper, a new graph-based framework for information retrieval has been introduced that allows to subsume a wide range of IR models and algorithms. It has been shown how this common representation can be an inspiration and lead to new insights and algorithms that outperform the original ones. Future work will aim at finding similar forms of synergies for the different forms of search, e.g. new combinations of feedback and associative retrieval algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "This excludes the Boolean model, which can, however, also be subsumed as shown in section 5.5 of(Preece, 1981)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In order to do this, we only need to know |d| and the relative frequency of t in the collection p(t|C), i.e. information that is available outside the inverted list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that these figures were obtained without any refinements such as query expansion and are hence substantially",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A methodology for the automatic construction of a hypertext for information retrieval",
"authors": [
{
"first": "M",
"middle": [],
"last": "Agosti",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Crestani",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of SAC 1993",
"volume": "",
"issue": "",
"pages": "745--753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Agosti and F. Crestani. 1993. A methodology for the au- tomatic construction of a hypertext for information retrieval. In Proceedings of SAC 1993, pages 745-753.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spread of activation",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "P",
"middle": [
"L"
],
"last": "Pirolli",
"suffix": ""
}
],
"year": 1984,
"venue": "Journal of Experimental Psychology: Learning, Memory and Cognition",
"volume": "10",
"issue": "",
"pages": "791--799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Anderson and P. L. Pirolli. 1984. Spread of activa- tion. Journal of Experimental Psychology: Learning, Mem- ory and Cognition, 10:791-799.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experiments of interfaces to support query expansion",
"authors": [
{
"first": "M",
"middle": [],
"last": "Beaulieu",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Documentation",
"volume": "1",
"issue": "53",
"pages": "8--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Beaulieu. 1997. Experiments of interfaces to support query expansion. Journal of Documentation, 1(53):8-19.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic condensation of electronic publications by sentence selection. Information Processing and Management",
"authors": [
{
"first": "R",
"middle": [],
"last": "Brandow",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mitze",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Rau",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "31",
"issue": "",
"pages": "675--685",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Brandow, K. Mitze, and L. F. Rau. 1995. Automatic con- densation of electronic publications by sentence selection. Information Processing and Management, 31(5):675-685.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The anatomy of a large-scale hypertextual Web search engine",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Page",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of WWW7",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Brin and L. Page. 1998. The anatomy of a large-scale hyper- textual Web search engine. In Proceedings of WWW7, pages 107-117.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Application of spreading activation techniques in information retrieval",
"authors": [
{
"first": "F",
"middle": [],
"last": "Crestani",
"suffix": ""
}
],
"year": 1997,
"venue": "Artificial Intelligence Review",
"volume": "11",
"issue": "6",
"pages": "453--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Crestani. 1997. Application of spreading activation tech- niques in information retrieval. Artificial Intelligence Re- view, 11(6):453-482.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "I3R : a new approach to the design of document retrieval systems",
"authors": [
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
},
{
"first": "R",
"middle": [
"H"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of the american society for information science",
"volume": "38",
"issue": "6",
"pages": "389--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. B. Croft and R. H. Thompson. 1987. I3R : a new approach to the design of document retrieval systems. Journal of the american society for information science, 38(6):389-404.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic Road Maps for Literature Searchers",
"authors": [
{
"first": "L",
"middle": [
"B"
],
"last": "Doyle",
"suffix": ""
}
],
"year": 1961,
"venue": "Journal of the ACM",
"volume": "8",
"issue": "4",
"pages": "553--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. B. Doyle. 1961. Semantic Road Maps for Literature Searchers. Journal of the ACM, 8(4):553-578.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using language models for information retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hiemstra",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Hiemstra. 2001. Using language models for information retrieval. Ph.D. thesis, University of Twente.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Document language models, query models, and risk minimization for information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR 2001",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty and C. Zhai. 2001. Document language mod- els, query models, and risk minimization for information re- trieval. In Proceedings of SIGIR 2001, pages 111-119.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining the language model and inference network approaches to retrieval. Information Processing and Management",
"authors": [
{
"first": "D",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "40",
"issue": "",
"pages": "735--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Metzler and W. B. Croft. 2004. Combining the language model and inference network approaches to retrieval. Infor- mation Processing and Management, 40(5):735-750.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Information retrieval through man-machine dialogue",
"authors": [
{
"first": "R",
"middle": [
"N"
],
"last": "Oddy",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of Documentation",
"volume": "33",
"issue": "1",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. N. Oddy. 1977. Information retrieval through man-machine dialogue. Journal of Documentation, 33(1):1-14.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ScentTrails: Integrating browsing and searching on the Web",
"authors": [
{
"first": "C",
"middle": [],
"last": "Olston",
"suffix": ""
},
{
"first": "E",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
}
],
"year": 2003,
"venue": "ACM Transactions on Computer-Human Interaction",
"volume": "10",
"issue": "3",
"pages": "177--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Olston and E. H. Chi. 2003. ScentTrails: Integrating browsing and searching on the Web. ACM Transactions on Computer-Human Interaction, 10(3):177-197.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A spreading activation network model for information retrieval",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Preece",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Preece. 1981. A spreading activation network model for information retrieval. Ph.D. thesis, Universtiy of Illinois at Urbana-Champaign.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The intelligent surfer: Probabilistic combination of link and content information in pagerank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Richardson and P. Domingos. 2002. The intelligent surfer: Probabilistic combination of link and content information in pagerank. In Proceedings of Advances in Neural Informa- tion Processing Systems.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Relevance Weighting of Search Terms",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sparck-Jones",
"suffix": ""
}
],
"year": 1976,
"venue": "JASIS",
"volume": "27",
"issue": "3",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Robertson and K. Sparck-Jones. 1976. Relevance Weight- ing of Search Terms. JASIS, 27(3):129-146.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Okapi at TREC-3",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hancock-Beaulieu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gull",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of TREC",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Robertson, S. Walker, M. Hancock-Beaulieu, A. Gull, and M. Lau. 1992. Okapi at TREC-3. In Proceedings of TREC, pages 21-30.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The SMART Retrieval System : Experiments in Automatic Document Processing",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Rocchio",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.J. Rocchio. 1971. Relevance feedback in information re- trieval. In G. Salton, editor, The SMART Retrieval System : Experiments in Automatic Document Processing. Prentice Hall Inc., Englewood Cliffs, New Jersey.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A vector space model for automatic indexing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "C",
"middle": [
"S"
],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Communications of the ACM",
"volume": "18",
"issue": "11",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pivoted document length normalization",
"authors": [
{
"first": "A",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of SIGIR 1996",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Singhal, C. Buckley, and M. Mitra. 1996. Pivoted document length normalization. In Proceedings of SIGIR 1996, pages 21-29.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Find-similar: similarity browsing as a search tool",
"authors": [
{
"first": "M",
"middle": [
"D"
],
"last": "Smucker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of SIGIR 2006",
"volume": "",
"issue": "",
"pages": "461--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. D. Smucker and J. Allan. 2006. Find-similar: similarity browsing as a search tool. In Proceedings of SIGIR 2006, pages 461-468.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inference networks for document retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Turtle",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of SIGIR 1990",
"volume": "",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Turtle and W. B. Croft. 1990. Inference networks for docu- ment retrieval. In Proceedings of SIGIR 1990, pages 1-24.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The Geometry of Information Retrieval",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. J. van Rijsbergen. 2004. The Geometry of Information Re- trieval. Cambridge University Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Algorithms for estimating relative importance in networks",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of KDD 2003",
"volume": "",
"issue": "",
"pages": "266--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott White and Padhraic Smyth. 2003. Algorithms for esti- mating relative importance in networks. In Proceedings of KDD 2003, pages 266-275.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Carrot and stick: combining information retrieval models",
"authors": [
{
"first": "H",
"middle": [
"F"
],
"last": "Witschel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of DocEng",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. F. Witschel. 2006. Carrot and stick: combining information retrieval models. In Proceedings of DocEng 2006, page 32.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On modeling information retrieval with probabilistic inference",
"authors": [
{
"first": "S",
"middle": [
"K M"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Y",
"middle": [
"Y"
],
"last": "Yao",
"suffix": ""
}
],
"year": 1995,
"venue": "ACM Transactions on Information Systems",
"volume": "13",
"issue": "1",
"pages": "38--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. K. M. Wong and Y. Y. Yao. 1995. On modeling information retrieval with probabilistic inference. ACM Transactions on Information Systems, 13(1):38-68.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A study of smoothing methods for language models applied to Ad Hoc information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR 2001",
"volume": "",
"issue": "",
"pages": "334--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. 2001. A study of smoothing methods for language models applied to Ad Hoc information retrieval. In Proceedings of SIGIR 2001, pages 334-342.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "A simple example MLAG",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "shows an example TREC query.",
"content": "<table><tr><td>&lt; top&gt;</td></tr><tr><td>&lt; num&gt; Number: 441</td></tr><tr><td>&lt; title&gt; Lyme disease</td></tr><tr><td>&lt; desc&gt; Description:</td></tr><tr><td>How do you prevent and treat Lyme disease?</td></tr><tr><td>&lt; narr&gt; Narrative:</td></tr><tr><td>Documents that discuss current prevention and</td></tr><tr><td>treatment techniques for Lyme disease are relevant [...]</td></tr><tr><td>&lt; /top&gt;</td></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "A sample TREC query",
"content": "<table/>"
}
}
}
}