| { |
| "paper_id": "X98-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:06:05.393352Z" |
| }, |
| "title": "DYNAMIC DATA FUSION", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Diamond", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [ |
| "D" |
| ], |
| "last": "Liddy", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "X98-1018", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "Information retrieval researchers have long appreciated the value of combining, or fusing, multiple retrieval systems' relevance scores for a set of documents to improve retrieval performance. However, it is only recently that researchers have begun to consider adjusting the score fusion method to the user's topic and initial results. This study explores the value of fusing multiple retrieval systems' scores in a manner that adjusts to: the semantic and syntactic features of the user's natural language query, the various systems' biases toward long or short documents, and the extent to which the scores produced by the multiple systems are statistically independent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "The ability to improve retrieval performance by using multiple retrieval systems has been documented extensively (e.g., [1] , [3] , [4] ). It is only recently, however, that researchers have turned their attention to the possibility of adjusting the manner in which results are combined to the specific query at hand. Researchers have reported success in using initial relevance judgments to adjust the way in which results are combined [3] , and have also reported success in using the joint distribution of relevance scores from multiple matchers (among other things)", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 123, |
| "text": "[1]", |
| "ref_id": null |
| }, |
| { |
| "start": 126, |
| "end": 129, |
| "text": "[3]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 132, |
| "end": 135, |
| "text": "[4]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 437, |
| "end": 440, |
| "text": "[3]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "to predict when to combine the results of multiple systems [6] . The purpose of the current research is to explore the use of the joint distribution of relevance scores, semantic and syntactic features of queries, and the length of retrieved documents to predict how to combine the results of several retrieval systems.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 62, |
| "text": "[6]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "We define a query as a natural language expression of a user's need. For some query and some collection of documents, it is possible for a human to judge the relevance of each document to the query.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "A retrieval system is a machine that accepts a query and full texts of documents, and produces, for each document, a relevance score for the query-document pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "A measure of the effectiveness of a retrieval system for a query and a collection is precision, the proportion of the N documents with the highest relevance scores that are relevant (in our study, N is 5, 10, or 30).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "Using multiple retrieval systems produces multiple retrieval scores for a query-document pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "A fusion function accepts these scores as its inputs, and produces a single relevance score as its output for the query-document pair. A static fusion function has only the relevance scores for a single query-document pair as its inputs. A dynamic fusion function can have more inputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "We are concerned with the following two questions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "If we allow each query its own static fusion function, can we achieve higher precision than if we force all queries to have the same static fusion function?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "If we can achieve higher precision by allowing each query its own static fusion function, then what inputs or features would enable us to construct a dynamic fusion function that adjusts to the query, the documents retrieved by the retrieval systems, and the distribution of scores produced by the retrieval systems?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND RESEARCH QUESTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "We used 247 queries, including TREC 1-6 training queries, and queries developed by business analysts for TextWise's internal use.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Queries, Documents, and Relevance Judgments", |
| "sec_num": null |
| }, |
| { |
| "text": "We applied these queries to the TREC Wall Street Journal collection from 1986-1992. For the TREC queries, we used only TREC relevance judgments. Relevance judgments for the TextWise queries were initially made on a 5-point scale, which we mapped to the binary judgments used by TREC.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Queries, Documents, and Relevance Judgments", |
| "sec_num": null |
| }, |
| { |
| "text": "Several of the retrieval systems described below used a document segmentation scheme to split compound documents into their components, resulting in a collection size of 222,525. For these systems, retrieval scores were calculated separately for the components of compound documents, and then merged by taking the maximum component score, thus mapping back to the original document space of 173,252.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Queries, Documents, and Relevance Judgments", |
| "sec_num": null |
| }, |
| { |
| "text": "We used five retrieval systems to generate relevance scores for query-document pairs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "Fuzzy Boolean (FB). This system translates a query into a Boolean expression in which the terminals are single terms, compound nominals, and proper nouns; instantiates the terminals in the expression with the document's tf.idf weights; and applies fuzzy Boolean semantics to resolve the instantiated expression into a scalar relevance score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "Probabilistic (PRB). This system applies a match formula that sums term frequencies of query terms in the document, weighted by terms' inverse document frequencies, and adjusts for document length. We applied this formula to a vocabulary of single terms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "Subject Field Code (SFC). This system applies a vector similarity metric to query and document representations in TextWise's Subject Field Code space to obtain relevance scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "This system applies a vector similarity metric to query and document representations obtained by counting the occurrences of 3-letter sequences (after squeezing out blanks, newlines, and other non-alphabetic characters).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N-gram (NG3).", |
| "sec_num": null |
| }, |
| { |
| "text": "Latent Semantic Indexing (LSI). This system obtains query and document representations by applying a translation matrix to single terms (excluding compound nominals and proper nouns). We obtained the translation matrix by singular value decomposition of a matrix of tf.idf weights for single terms from a 1/3 sample of the Wall Street Journal. We used a vector similarity metric to obtain relevance scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N-gram (NG3).", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the following procedures to process the queries and documents into forms that enabled application of matching formulae to produce relevance scores: Document Segmentation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query and Document Representations", |
| "sec_num": null |
| }, |
| { |
| "text": "We used either the original document segmentation from the TREC data or a more aggressive segmentation that split compound documents into their components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query and Document Representations", |
| "sec_num": null |
| }, |
| { |
| "text": "For all but one retrieval system, we removed stopwords.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stop Word Removal.", |
| "sec_num": null |
| }, |
| { |
| "text": "For the various retrieval systems, we used the Xerox stemmer, the Stone stemmer, or we obtained word roots as a byproduct of constructing trigrams.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stemming.", |
| "sec_num": null |
| }, |
| { |
| "text": "Phrase Recognition. For some retrieval systems, we used a set of part-of-speech-based rules to detect and aggregate sequences of tokens into compound nominal phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stemming.", |
| "sec_num": null |
| }, |
| { |
| "text": "Proper Nouns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stemming.", |
| "sec_num": null |
| }, |
| { |
| "text": "For some retrieval systems, we detected proper nouns, and normalized multiple expressions of the same proper noun entity to a canonical form.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stemming.", |
| "sec_num": null |
| }, |
| { |
| "text": "Term Weighting. In documents, weights represented the frequency of terms in the document, conditioned by the number of documents in which the terms appeared. Dimension Reduction. We used single words to translate into weightings in a 900-dimensional feature space using TextWise's Subject Field Coder (SFC), or into a 167-dimensional feature space using Latent Semantic Indexing (LSI). Table 1 summarizes the query representations, document representations, and matching semantics used by the five matchers.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 386, |
| "end": 393, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stemming.", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition to the five relevance score inputs to the dynamic fusion function, we used the following inputs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic Fusion Function Input Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Several items of information might be available about the query independently of any particular retrieval approach or its representation of the query, the documents, or their similarity:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Query Length (QLEN). The number of tokens in the natural language query.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Query Terms' Specificity (QTSP).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Features", |
| "sec_num": null |
| }, |
| { |
| "text": "The average inverse document frequency (IDF) of the quartile of the query's terms with the highest IDF's.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Query Terms' Synonymy (QTSY). Over all terms in the query, the average of the number of words in the synset for the correct sense of the query term in WordNet. WordNet is a semantic knowledge base that distinguishes words by their senses, and groups word senses that are synonymous with each other into synsets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Number of Compound Nominals (QNCN).", |
| "sec_num": null |
| }, |
| { |
| "text": "Query Terms' Polysemy (QTPL). Over all terms in the query, the average number of senses for the query term in WordNet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Number of Compound Nominals (QNCN).", |
| "sec_num": null |
| }, |
| { |
| "text": "There is currently one document feature, instantiated separately for each query, for each retrieval system S:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Length of Top-Ranked Documents Retrieved by System (DLEN[S]). This is the average of the number of tokens in the top 5 documents scored by system S.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 51, |
| "end": 59, |
| "text": "(DLEN[S]", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Document Features", |
| "sec_num": null |
| }, |
| { |
| "text": "The following features are instantiated once for each retrieval system S, for each query:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Score Distributions", |
| "sec_num": null |
| }, |
| { |
| "text": "Maximum Score Assigned by Approach (SMAX[S]).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 43, |
| "text": "(SMAX[S]", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Score Distributions", |
| "sec_num": null |
| }, |
| { |
| "text": "Scores Assigned by Approach (SVAR[S]).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variance of", |
| "sec_num": null |
| }, |
| { |
| "text": "The following input to the dynamic fusion function is instantiated once for each pair of retrieval systems S1 and S2:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variance of", |
| "sec_num": null |
| }, |
| { |
| "text": "Correlation of Ranks Assigned to Documents by Two Approaches (SCOR[S1, S2]). For documents ranked in the top 1,000 by any of the retrieval systems for the query, the correlation of the documents' ranks in systems S1 and S2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variance of", |
| "sec_num": null |
| }, |
| { |
| "text": "For a sample of 50 queries from our 247, we found, separately for each query, an optimal static fusion function. We then found the single optimal static fusion function that gave the best precision over all 50 queries. Table 2 shows the precision for the 50 queries using the 5 retrieval systems", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 226, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "separately, using a single overall static fusion function, and using 50 (possibly) different query-specific static functions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "At first glance, our results suggest that allowing query-specific fusion functions substantially improves retrieval. For instance, by using query-specific static fusion functions, we achieved precision at 5 of .5960, compared to .3840 when applying the same static fusion function to all queries. However, this comparison is overly optimistic, since it allows query-specific fusion functions to be trained and evaluated on exactly the same data, while forcing the overall fusion function to be trained on a large set of data, but then evaluated on a small subset of that data. To provide a more pessimistic comparison, we partitioned the data for our 50 queries into equally-sized training and test sets. We trained each query-specific fusion function on the query's training data, and evaluated it on the test data. (Although our goal is to improve retrospective retrieval, this arrangement resembles the TREC routing scenario.) Table 3 shows a considerably weaker, but still appreciable improvement due to using query-specific fusion functions. For instance, we achieved precision at 5 of .4160 when allowing each query its own static fusion function, compared to .3400 when forcing all queries to use the same function.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 930, |
| "end": 937, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "The dimensions of this space are the relevance scores from the set of matchers. We constructed a fused score for a test document by summing the relevance judgments for the test document's K nearest training documents (where K was 5, 10, 15, or 20). We tried weighting the sums by an inverse function of the distance between the test document and the training document. We also tried scaling the dimensions' contribution to the distance metric with a weight reflecting the corresponding matcher's precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "To our surprise, none of these experiments produced K-NN-based fusion functions that performed consistently better than a linear fusion function. On closer inspection, it appears that at least part of the poor performance of K-NN as a fusion function can be attributed to instances in which the probability distribution of relevance for the training documents for the query did not resemble the probability distribution of relevance for all the documents for the query. In this sort of situation, the linear model appears to be more robust than K-NN. It may be that a more careful selection of the training set would result in more reasonable performance from K-NN-based fusion functions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "For the linear fusion function, we found the optimal vector of coefficients by selecting the coefficients that produce the greatest precision at 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 1: OPPORTUNITY FOR IMPROVING RETRIEVAL", |
| "sec_num": null |
| }, |
| { |
| "text": "We constrained our fusion functions to be weighted linear combinations of the five retrieval scores for a query-document pair. We considered the possibility of more complex non-linear fusion models through exploration of K-Nearest Neighbor (K-NN) classifiers. (The use of K-NN for selecting a single retrieval system has been documented in [5] . By contrast, we sought to use K-NN to fuse relevance scores.) In this approach, training documents and their relevance judgments populated a space whose dimensions are the matchers' relevance scores (we measure precision at 5 as the proportion of the five top-ranked documents that are relevant). To date, we have found the optimal vector using an exhaustive search over the set of vectors whose elements are non-negative, evenly divisible by 0.1, and whose elements sum to 1.0. (We had tried using logistic regression to find the coefficients, but the coefficients we found in this manner yielded considerably lower precision than those we found using the exhaustive search method.)", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 343, |
| "text": "[5]", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".2230 (+13%) .2533 (+34%)", |
| "sec_num": "1253" |
| }, |
| { |
| "text": "In sum, it appears that for our selection of retrieval systems, there is a potential for improving retrieval through query-specific fusion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".2230 (+13%) .2533 (+34%)", |
| "sec_num": "1253" |
| }, |
| { |
| "text": "One way to exploit this opportunity is to use initially-retrieved documents to adjust the weights of the single overall static fusion function, as in [3] . Although we tried several ways of updating fusion function coefficients with relevance feedback, we were unable to exploit any of the apparent potential to improve retrieval performance in this way.", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 153, |
| "text": "[3]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".2230 (+13%) .2533 (+34%)", |
| "sec_num": "1253" |
| }, |
| { |
| "text": "We chose to implement the dynamic fusion function as a hybrid of a \"mixture expert\" and the static linear fusion models used in Research Question 1. The mixture expert attempts to predict the best coefficients to use for the linear fusion function. Figure 1 shows the relationship of the mixture expert to the linear fusion model and the individual retrieval systems.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 263, |
| "text": "Figure 1 shows", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dynamic Fusion Function Architecture", |
| "sec_num": null |
| }, |
| { |
| "text": "So far, optimal fusion coefficients for a query have been determined using full knowledge of the relevance of the documents for the query. In the retrospective retrieval setting, these relevance judgments will not be available beforehand, and thus cannot be used to adjust the fusion model to the query. For the retrospective setting, we seek to construct a dynamic fusion function that can adjust the way it fuses the five systems' relevance scores for a query-document pair using additional inputs. These inputs include the features of the query, features of the retrieved documents, and features of the joint distribution of the retrieval systems' retrieval scores for the query, enumerated above. We are currently working on building such a dynamic fusion function.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESEARCH QUESTION 2: THE DYNAMIC FUSION FUNCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the remaining 197 queries for training. For these queries, we have used all the documents to find coefficient vectors for optimal linear static fusion models. These coefficient vectors constitute the \"target\" outputs the mixture expert will be trained to reproduce.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": null |
| }, |
| { |
| "text": "We also fit a single linear static fusion function to the 197 training queries, again using all the data from those queries. The performance of this static fusion function on all of the documents for the 50 test queries constitutes the baseline for the second research question. To answer this research question, we will compare the performance of the dynamic fusion function for the 50 test queries to this baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": null |
| }, |
| { |
| "text": "So far, our results suggest that, for our choice of retrieval systems, there is an opportunity to improve retrieval performance by using dynamic fusion functions instead of using a single static fusion function for all queries. One possible qualification to these results is that limiting ourselves to a linear form for the static fusion models may result in artificially low baseline retrieval for the single overall static function. The volatility of the K-NN technique in the context of our data made it difficult to say whether or not a non-linear form for the fusion model is necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DISCUSSION", |
| "sec_num": null |
| }, |
| { |
| "text": "Our preferred implementation of the mixture expert in the dynamic fusion function is a multilayer feedforward neural network, with output nodes corresponding to the linear weights of the linear fusion function. However, given that our real goal is to maximize precision, rather than to replicate the weights exactly, a straightforward application of backpropagation to train such a network to replicate the target weights is inappropriate. The optimal linear weights are likely to be on \"plateaus\" with respect to precision, with little change in precision in response to large changes in linear weights. We are currently investigating alternative ways of training the mixture expert in the dynamic fusion model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DISCUSSION", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automatic combination of multiple ranked systems", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Bartell", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "W" |
| ], |
| "last": "Cottrell", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "K" |
| ], |
| "last": "Belew", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "173--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bartell, B., Cottrell, G.W., Belew, R.K. Automatic combination of multiple ranked systems. Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. 173-181, 1994.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Combining the evidence of multiple query representations for information retrieval, Information Processing and Management", |
| "authors": [], |
| "year": 1995, |
| "venue": "", |
| "volume": "31", |
| "issue": "", |
| "pages": "431--448", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Combining the evidence of multiple query representations for information retrieval. Information Processing and Management 31(3), 431-448, 1995.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Combination of multiple searches", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [ |
| "J" |
| ], |
| "last": "Shaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "The Second Text Retrieval Conference (TREC-2)", |
| "volume": "", |
| "issue": "", |
| "pages": "242--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fox, E., Shaw, J. Combination of multiple searches. In The Second Text Retrieval Conference (TREC-2), D. Harman (ed), NIST Special Publications 500-215, Gaithersburg, MD, 242-252, 1994.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Method combination for document filtering", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [ |
| "D" |
| ], |
| "last": "Hull", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schuetze", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "279--288", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hull, D., Pedersen, J., Schuetze, H. Method combination for document filtering. Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Zurich, 279-288, 1996.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Report on the TREC-4 experiment: combining probabilistic and vector-space schemes", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Savoy", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ndarugendawmo", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Vrajitoru", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Savoy, J., Ndarugendawmo, M., Vrajitoru, D. Report on the TREC-4 experiment: combining probabilistic and vector-space schemes. [TREC-4 WWW site], 1996.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Predicting the Performance of Linearly Combined IR Systems", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vogt", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "W" |
| ], |
| "last": "Cottrell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Twenty First Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogt, C., Cottrell, G.W. Predicting the Performance of Linearly Combined IR Systems. Proceedings of the Twenty First Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. 1998.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"3\">Features of Retrieval Systcms</td><td/><td/></tr><tr><td/><td/><td/><td>Retrieval Systems</td><td/><td/></tr><tr><td>FEATURE</td><td>FB</td><td>PRB</td><td>SFC</td><td>NG3</td><td>LSI</td></tr><tr><td>Doc. Segmentation</td><td>Aggressive</td><td>Aggressive</td><td>Aggressivc</td><td>Standard</td><td>Ageressive</td></tr><tr><td>Stop Word Removal</td><td>Yes</td><td>Yes</td><td>Yes</td><td>No</td><td>Yes</td></tr><tr><td>Stemming</td><td>Xerox</td><td>Xerox</td><td>Stone</td><td>Trigram</td><td>Xerox</td></tr><tr><td>Phrase Recognition</td><td>Yes</td><td>No</td><td>No</td><td>No</td><td>No</td></tr><tr><td>Proper Nouns</td><td>Yes</td><td>Yes</td><td>No</td><td>No</td><td>No</td></tr><tr><td>Tcrm Weighting</td><td>q: i,!f</td><td>q: it!t</td><td>tf</td><td>{f i~!f</td><td>t t: idf</td></tr><tr><td>Dimension Reduction</td><td>None</td><td>None</td><td>SFC</td><td>None</td><td>LSI</td></tr><tr><td>Match Semantics</td><td>Fuzzv Boolean</td><td>Probabilistic</td><td>Vector</td><td>Vector</td><td>Vector</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "Precision of Five Systems, Overall Static Fusion Functions. and Query-Specific Fusion Functions, When Training and Testing on Same Data lor Each Query", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"3\">Single Retrieval Systems</td><td/><td colspan=\"2\">Static Fusion Functions</td></tr><tr><td/><td/><td/><td/><td/><td/><td>Single Overall</td><td>Query-Specific</td></tr><tr><td>Prec. at</td><td>FB</td><td>SFC</td><td>PROB</td><td>NG3</td><td>LSI</td><td>(vs. FB)</td><td>(vs. overall)</td></tr><tr><td>5</td><td>.3360</td><td>.0080</td><td>.2080</td><td>.1760</td><td>.1640</td><td>.3840 (+14%)</td><td>.5960 (+55%7</td></tr><tr><td>I0</td><td>.2680</td><td>.0060</td><td>.1800</td><td>.1560</td><td>.1440</td><td>.3280 (+22%7</td><td>.5040 (+54%)</td></tr><tr><td>30</td><td>.2240</td><td>.(7127</td><td>I .I 193</td><td>.14[4</td><td>.1273</td><td>.2547 (+14%)</td><td>.3427 (+35%)</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF2": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"5\">Precision of Five Systems, Overall Static Fusion Functions,</td><td/></tr><tr><td/><td/><td/><td colspan=\"3\">and Query-Specific Fusion Functions,</td><td/><td/></tr><tr><td/><td/><td colspan=\"5\">When Training and Testing on Different Data for Each Query</td><td/></tr><tr><td/><td/><td colspan=\"3\">Single Retrieval Systems</td><td/><td colspan=\"2\">Static Fusion Functions</td></tr><tr><td/><td/><td/><td/><td/><td/><td>Single Overall</td><td>Query-Speci tic</td></tr><tr><td>Prec. at</td><td>FB</td><td>SFC</td><td>PROB</td><td>NG3</td><td>LSI</td><td>(vs. FB)</td><td>(vs. overall)</td></tr><tr><td>5</td><td>.3360</td><td>.0040</td><td>.1960</td><td>.1920</td><td>.1600</td><td>.3400(+01%)</td><td>.4160 (+22%)</td></tr><tr><td>10</td><td>.2680</td><td>.0100</td><td>.1680</td><td>.1600</td><td>.1340</td><td>.3120(+16%)</td><td>.3620(+16%)</td></tr><tr><td>30</td><td>.1967</td><td>.0140</td><td>.1187</td><td>.1313</td><td/><td/><td/></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF3": { |
| "text": "the relationship of the mixture expert", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>Figure 1</td></tr><tr><td/><td colspan=\"2\">Dynamic Fusion Functions</td></tr><tr><td/><td/><td/><td>Score</td></tr><tr><td/><td/><td>Query</td><td>Document</td><td>Correlati~</td></tr><tr><td/><td/><td>\"e atu re s</td><td>Fe a ttt re s</td><td>Feature:</td></tr><tr><td>Matchers</td><td>Relevance Scores</td><td>Weights</td></tr><tr><td/><td/><td/><td>Mixture Expert</td></tr><tr><td>Query</td><td>+</td><td/></tr><tr><td>Document</td><td/><td/></tr><tr><td/><td colspan=\"2\">Fused Score</td></tr></table>", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |