| { |
| "paper_id": "O01-2005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:09:40.298708Z" |
| }, |
| "title": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion", |
| "authors": [ |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jfgao@microsoft.com" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mingzhou@microsoft.com" |
| }, |
| { |
| "first": "Jiaxing", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Fusion and clustering are two approaches to improving the effectiveness of information retrieval. In fusion, ranked lists are combined together by various means. The motivation is that different IR systems will complement each other, because they usually emphasize different query features when determining relevance and retrieve different sets of documents. In clustering, documents are clustered either before or after retrieval. The motivation is that similar documents tend to be relevant to the same query so that this approach is likely to retrieve more relevant documents by identifying clusters of similar documents. In this paper, we present a novel fusion technique that can be combined with clustering to achieve consistent improvements over conventional approaches. Our method involves three steps: (1) clustering similar documents, (2) re-ranking retrieval results, and (3) combining retrieval results.", |
| "pdf_parse": { |
| "paper_id": "O01-2005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Fusion and clustering are two approaches to improving the effectiveness of information retrieval. In fusion, ranked lists are combined together by various means. The motivation is that different IR systems will complement each other, because they usually emphasize different query features when determining relevance and retrieve different sets of documents. In clustering, documents are clustered either before or after retrieval. The motivation is that similar documents tend to be relevant to the same query so that this approach is likely to retrieve more relevant documents by identifying clusters of similar documents. In this paper, we present a novel fusion technique that can be combined with clustering to achieve consistent improvements over conventional approaches. Our method involves three steps: (1) clustering similar documents, (2) re-ranking retrieval results, and (3) combining retrieval results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In terms of the overall performance on a large query set, none of the typical IR systems outperform others substantially, while for each individual query, the performance that different systems achieve varies greatly [Voorhees 1997 ]. This observation leads to the idea of combining results obtained by different IR systems to improve overall performance.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 231, |
| "text": "[Voorhees 1997", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Fusion is a technique that combines retrieval results (or ranked lists) obtained by different systems. However, conventional fusion techniques only consider retrieval results, while the information embedded in the document collection (e.g. the similarity between documents) is ignored. On the other hand, document clustering applies the structure of a document collection, but it usually considers each individual ranked list separately and is not able to take advantage of multiple ranked lists.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we present a novel fusion technique that can be combined with clustering. Given multiple retrieval results obtained by different IR systems, we first perform clustering on each ranked list and obtain a set of clusters. We then identify the clusters that contain the most relevant documents. Each of these clusters is evaluated based on a metric called reliability. Documents in reliable clusters are re-ranked. That is, we set higher scores for these documents. Finally, a conventional fusion method is applied to combine multiple retrieval results, which are re-ranked. Our experiments on the TREC-5 Chinese collection show that the above approach achieves consistent improvements over conventional approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The remainder of this paper is organized as follows. Section 2 gives a brief survey of related work. In Section 3, we describe our method in detail. In Section 4, a series of experiments are presented to show the effectiveness of our approach. Finally, we present our conclusions in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Fusion and clustering have long been important research topics in information retrieval.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Fox and Shaw [Fox 1994 ] reported on their work on result sets fusion. Their method for combining the evidence from multiple retrieval runs is based on document-query similarities in different sets. Five combining strategies were investigated, as summarized in Table 1 . In their experiments, CombSUM and CombMNZ were better than the others. Thompson's work [Thompson 1990 ] includes assigning to each ranked list a variable weight based on the prior performance of the system. His idea is that a retrieval system should be considered preferable to others if its prior performance is better. Thompson's results were slightly better than Fox's.", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 22, |
| "text": "[Fox 1994", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 358, |
| "end": 372, |
| "text": "[Thompson 1990", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 268, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
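Table 1 itself is not reproduced in this parse. As a rough illustration, the three strategies the experiments later compare can be sketched as below; the function names follow the paper, and the definitions are the commonly cited ones from Fox and Shaw's work (each run is a dict mapping document ids to similarity scores):

```python
def comb_max(score_lists):
    """CombMAX: take the maximum similarity a document gets in any run."""
    docs = set().union(*score_lists)
    return {d: max(s.get(d, 0.0) for s in score_lists) for d in docs}

def comb_sum(score_lists):
    """CombSUM: sum the document's similarities over all runs."""
    docs = set().union(*score_lists)
    return {d: sum(s.get(d, 0.0) for s in score_lists) for d in docs}

def comb_mnz(score_lists):
    """CombMNZ: CombSUM multiplied by the number of runs retrieving the document."""
    summed = comb_sum(score_lists)
    return {d: v * sum(1 for s in score_lists if d in s) for d, v in summed.items()}
```

For example, with `runs = [{"d1": 0.9, "d2": 0.4}, {"d1": 0.8, "d3": 0.5}]`, CombSUM gives d1 a score of 1.7, while CombMNZ doubles it to 3.4 because both runs retrieved it.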
| { |
| "text": "Bartell [Bartell 1994 ] used numerical optimization techniques to determine optimal scalars (weights) for a linear combination of results. The idea is similar to Thompson's except that Bartell obtained the optimal scalars from training data, while Thompson constructed scalars based on their prior performance. Bartell achieved good results on a relatively small collection (less than 50MB).", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 21, |
| "text": "[Bartell 1994", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "To perform fusion more effectively, researchers began to investigate whether two result sets are suitable for fusion by examining some critical characteristics. Lee [Lee 1997 ] found that the overlap of the result sets was an important factor for fusion. Overlap ratios of relevant and non-relevant documents are calculated as follows:", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 174, |
| "text": "[Lee 1997", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "overlap_R = (2 \u00d7 R_common) / (R_A + R_B),  overlap_N = (2 \u00d7 N_common) / (N_A + N_B),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "where R_A and N_A are, respectively, the numbers of relevant and irrelevant documents in result set A, and R_B and N_B are defined analogously for result set B. We incorporate this overlap analysis into our fusion approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
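Given relevance judgments, Lee's two ratios can be computed directly from the sets of retrieved documents. A minimal sketch (set-based; the four input sets are the relevant and non-relevant retrieved documents of each result set):

```python
def overlap_ratios(rel_a, rel_b, nonrel_a, nonrel_b):
    """Lee's overlap ratios: 2 * |common| / (|A| + |B|), computed
    separately for relevant and for non-relevant retrieved documents."""
    def ratio(a, b):
        denom = len(a) + len(b)
        return 2.0 * len(a & b) / denom if denom else 0.0
    return ratio(rel_a, rel_b), ratio(nonrel_a, nonrel_b)
```

For instance, if both systems retrieve relevant sets {d1, d2, d3} and {d2, d3, d4}, the relevant overlap is 2 × 2 / 6 ≈ 0.667.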
| { |
| "text": "Vogt [Vogt 1998 [Vogt , 1999 tested different linear combinations of several results from TREC-5. 36,600 result pairs were tested. A linear regression of several potential indicators was performed to determine the potential improvement for result sets to be fused. Thirteen factors including measures of individual inputs, such as average precision/recall, and some pairwise factors, such as overlap and unique document counts, were considered. Vogt concluded that the characteristics for effective fusion are: (1) at least one result has high precision/recall; (2) a high overlap of relevant documents and a low overlap of non-relevant documents;", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 15, |
| "text": "[Vogt 1998", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 16, |
| "end": 28, |
| "text": "[Vogt , 1999", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "(3) similar distributions of relevance scores; and (4) each retrieval system ranks relevant documents differently. Conclusions (1) and (2) are also confirmed by our experiments, as will be shown in Section 4.3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Clustering is now considered to be a useful information retrieval method not only for document categorization but also for interactive retrieval. The use of clustering in information retrieval is based on the Clustering Hypothesis [Rijsbergen, 1979]: \"closely associated documents tend to be relevant to the same requests\". Hearst [Hearst 1996] showed that this hypothesis holds for a set of documents returned by a retrieval system. According to this hypothesis, if we do a good job of clustering the retrieved documents, we will likely separate the relevant and non-relevant documents into different groups. If we can direct the user to the correct group of documents, we can enhance the likelihood of finding interesting information for the user. Previous works [Cutting et al, 1992], [Leuski 1999] and [Leuski 2000] focused on clustering documents and letting users select the clusters they were interested in. These approaches are interactive. Most of the clustering methods mentioned above work on individual ranked lists and do not take advantage of multiple ranked lists.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 246, |
| "text": "[Rijsbergen, 1979]", |
| "ref_id": null |
| }, |
| { |
| "start": 329, |
| "end": 342, |
| "text": "[Hearst 1996]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 763, |
| "end": 784, |
| "text": "[Cutting et al, 1992]", |
| "ref_id": null |
| }, |
| { |
| "start": 787, |
| "end": 800, |
| "text": "[Leuski 1999]", |
| "ref_id": null |
| }, |
| { |
| "start": 805, |
| "end": 817, |
| "text": "[Leuski 2000", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this paper, we combine clustering with fusion. Our approach differs from interactive approaches in three ways. First, we use two or more ranked lists, while others usually use one in clustering. Second, interactive user input is not needed in our approach. Third, we provide a ranked list of documents to the user instead of a set of clusters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Our method is based on two hypotheses:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion with Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Clustering Hypothesis: Documents that are relevant to the same query can be clustered together since they tend to be more similar to each other than to non-relevant documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion with Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Fusion Hypothesis: Different ranked lists usually have a high overlap of relevant documents and a low overlap of non-relevant documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion with Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "The Clustering Hypothesis suggests that we might be able to roughly separate relevant documents from non-relevant documents with a proper clustering algorithm. Relevant documents can be clustered into one or several clusters, and these clusters will contain more relevant documents than others. We call such a cluster a reliable cluster.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 113", |
| "sec_num": null |
| }, |
| { |
| "text": "The Fusion Hypothesis presents the idea of identifying reliable clusters. The reliable clusters from different ranked lists usually have a high overlap. Therefore, the more relevant documents a cluster contains, the more reliable the cluster is. We will describe the computation of reliability in detail in Section 3.2. Our approach consists of three steps. First, we cluster each ranked list. Then, we identify the reliable clusters and adjust the relevance value of each document according to the reliability of the cluster. Finally, we use CombSUM to combine the adjusted ranked lists and present the result to the user.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 113", |
| "sec_num": null |
| }, |
| { |
| "text": "In the following sections, we will describe our approach in more detail. For conciseness, we will use some symbols to present our approach, which are listed in Table 2 with their explanations. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 160, |
| "end": 167, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 113", |
| "sec_num": null |
| }, |
| { |
| "text": "Table 2. Symbols: q: a query; d: a document; RL_A, RL_B: ranked lists returned by retrieval systems A and B, respectively; C_{A,i}: the i-th cluster in RL_A; Sim_CC(C_{A,i}, C_{B,j}): similarity between C_{A,i} and C_{B,j}; Sim_qC(q, C_{A,i}): similarity between query q and C_{A,i}; Sim_dd(d_i, d_j): similarity between two documents d_i and d_j; r(C_{A,i}): reliability of cluster C_{A,i}; rel_A(d): relevance score of document d given by retrieval system A; rel*_A(d): adjusted relevance score of document d; rel(d): final relevance score of document d.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 113", |
| "sec_num": null |
| }, |
| { |
| "text": "The goal of clustering is to separate relevant documents from non-relevant documents. To accomplish this, we need to define a measure for the similarity between documents and design a corresponding clustering algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In our experiments, we used the vector space model to represent documents. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between documents", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "w_ik = [log(f_ik) + 1.0] \u00d7 log(N/n_k) / sqrt( \u2211_j {[(log(f_ij) + 1.0) \u00d7 log(N/n_j)]^2} ), (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between documents", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "where f_ik is the occurrence frequency of term t_k in document d_i, N is the total number of documents in the collection, and n_k is the number of documents that contain term t_k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between documents", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Actually, this is one of the most frequently used tf*idf weighting schemes in IR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between documents", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "For any two documents i d and j d , the cosine measure as given below is used to determine their similarity:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 115", |
| "sec_num": null |
| }, |
| { |
| "text": "Sim_dd(d_i, d_j) = \u2211_k (w_ik \u00d7 w_jk) / sqrt( (\u2211_k w_ik^2) \u00d7 (\u2211_k w_jk^2) ). (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 115", |
| "sec_num": null |
| }, |
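Equations (1) and (2) can be sketched as follows. This is an illustrative implementation, not the paper's code: term-frequency and document-frequency tables are assumed inputs, and the cosine normalization of (1) is folded into the weighting helper:

```python
import math

def term_weights(tf, df, num_docs):
    """Eq. (1): w_ik proportional to (log(f_ik) + 1) * log(N / n_k),
    cosine-normalized over the document's terms."""
    raw = {t: (math.log(f) + 1.0) * math.log(num_docs / df[t])
           for t, f in tf.items()}
    norm = math.sqrt(sum(w * w for w in raw.values()))
    return {t: w / norm for t, w in raw.items()} if norm else raw

def sim_dd(wi, wj):
    """Eq. (2): cosine similarity between two weighted term vectors."""
    dot = sum(w * wj[t] for t, w in wi.items() if t in wj)
    ni = math.sqrt(sum(w * w for w in wi.values()))
    nj = math.sqrt(sum(w * w for w in wj.values()))
    return dot / (ni * nj) if ni and nj else 0.0
```

A document compared against itself scores 1.0, and documents sharing no terms score 0.0, as expected of the cosine measure.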
| { |
| "text": "There are many clustering algorithms for document clustering. Our goal is to cluster a small collection of documents returned by an individual retrieval system. Since the size of the collection was 1,000 in our experiments, the complexity of the clustering algorithm was not a serious problem. The ideal result is obtained when clustering gathers all relevant documents into one cluster and all non-relevant documents into the other cluster. However, this is unlikely to happen. In fact, relevant documents are usually distributed in several clusters. After clustering, each ranked list is composed of a set of clusters, say", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering algorithm", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "C_1, C_2, \u2026, C_n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering algorithm", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "The size of a cluster is the number of documents in the cluster. The clustering algorithm shown in Fig.2 cannot guarantee that the clusters will be of identical size. This causes many problems because the overlap depends on the size of each cluster.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 99, |
| "end": 104, |
| "text": "Fig.2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Size of a cluster", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "To solve this problem, we force the clusters to have the same size using the following approach. For clusters that contain a larger number of documents than the average, we remove the documents that are far from the cluster's centroid. These removed documents are added to clusters that are smaller than the average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Size of a cluster", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "Since all the clusters are of the same size, the size of a cluster becomes a parameter in our algorithm. Thus, we need to set this parameter to an optimal value to achieve the best performance. We will report experiments conducted to determine this value in Section 4.3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Size of a cluster", |
| "sec_num": "3.1.3" |
| }, |
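The size-forcing step described above can be sketched as follows; this is a simplified illustration, and the `dist_to_centroid` callable is an assumed helper (the paper does not specify how refill targets are chosen, so spilled documents are simply appended to undersized clusters here):

```python
def equalize(clusters, dist_to_centroid):
    """Force roughly equal cluster sizes: trim oversized clusters by
    removing documents far from the centroid, then refill the small ones."""
    target = sum(len(c) for c in clusters) // len(clusters)
    spill = []
    for c in clusters:
        while len(c) > target:
            # remove the document farthest from this cluster's centroid
            far = max(c, key=lambda d: dist_to_centroid(c, d))
            c.remove(far)
            spill.append(far)
    for c in clusters:
        while len(c) < target and spill:
            c.append(spill.pop())
    return clusters
```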
| { |
| "text": "After clustering each ranked list, we obtain a group of clusters, each of which contains more or less relevant documents. Through re-ranking, we expect to determine reliable clusters and adjust the relevance scores of the documents in each ranked list such that the relevance scores become more reasonable. To identify reliable clusters, we assign to each cluster a reliability score. According to the Fusion Hypothesis, we use the overlap between clusters to compute the reliability of a cluster. The reliability", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "r(C_{A,i}) of cluster C_{A,i}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": ", is computed as follows (see Table 2 for definitions of the symbols):", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 30, |
| "end": 37, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "r(C_{A,i}) = \u2211_j [ Sim_qC(q, C_{B,j}) / \u2211_t Sim_qC(q, C_{B,t}) ] \u00d7 Sim_CC(C_{A,i}, C_{B,j}), (3) where Sim_CC(C_{A,i}, C_{B,j}) = |C_{A,i} \u2229 C_{B,j}|, (4) and Sim_qC(q, C_{A,i}) = \u2211_{d \u2208 C_{A,i}} rel_A(d) / |C_{A,i}|. (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The size of a cluster and the number of clusters are critical issues in clustering and have been studied by many researchers. This paper focuses on how to combine fusion and clustering together and shows the potential of this combination approach. Therefore, we use a very simple method to solve the problem. Our clustering algorithm is also very simple. Our future work will be to investigate the impacts of different algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In equation 4, the similarity of two clusters is estimated based on the common documents they both contain. In equation 5, the similarity between a query and a cluster is estimated based on the average relevance score of the documents that the cluster contains. In equation 3, for each cluster", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "i A C , in A RL , its reliability ) ( ,i A C r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "is defined as the weighted sum of the similarity between cluster C_{A,i} and all the clusters in RL_B. The intuition underlying this formula is that the more similar two clusters are, the more reliable they are, as illustrated in Fig.1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 228, |
| "end": 233, |
| "text": "Fig.1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
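Equations (3)-(5) can be sketched as below. This is an illustrative reading, with clusters given as document-id lists and `rel_b` mapping each document of RL_B to its relevance score:

```python
def sim_cc(ca, cb):
    """Eq. (4): number of documents shared by the two clusters."""
    return len(set(ca) & set(cb))

def sim_qc(rel, cluster):
    """Eq. (5): average relevance score of the documents in the cluster."""
    return sum(rel[d] for d in cluster) / len(cluster)

def reliability(ca, clusters_b, rel_b):
    """Eq. (3): overlap with each cluster of the other ranked list,
    weighted by that cluster's normalized query similarity."""
    weights = [sim_qc(rel_b, cb) for cb in clusters_b]
    total = sum(weights)
    if not total:
        return 0.0
    return sum((w / total) * sim_cc(ca, cb)
               for w, cb in zip(weights, clusters_b))
```

A cluster that overlaps heavily with the other list's high-scoring clusters gets a high reliability, exactly the intuition stated above.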
| { |
| "text": "Since reliability represents the precision of a cluster, we use it to adjust the relevance score of the documents in each cluster. Formula (6) adjusts the relevance score of a document in a highly reliable cluster:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "rel*_A(d) = rel_A(d) \u00d7 [1 + r(C_{A,t})], (6) where d \u2208 C_{A,t}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ranking", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "So far, each original ranked list has been adjusted by means of clustering and re-ranking. We next combine these improved ranked lists together using the following formula (i.e. CombSUM in [Fox 1994 ]):", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 198, |
| "text": "[Fox 1994", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "rel(d) = rel*_A(d) + rel*_B(d). (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In equation 7, the combined relevance of document d is the sum of all the adjusted relevance values that have been computed in the previous steps.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion", |
| "sec_num": "3.3" |
| }, |
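Equations (6) and (7) together produce the final scores. A minimal sketch, where `cluster_of` maps each document to the index of its cluster and `r` holds the cluster reliabilities (both names are illustrative, not from the paper):

```python
def rerank(rel, cluster_of, r):
    """Eq. (6): rel*(d) = rel(d) * (1 + r(C_t)) for each d in cluster C_t."""
    return {d: s * (1.0 + r[cluster_of[d]]) for d, s in rel.items()}

def fuse(rel_a_star, rel_b_star):
    """Eq. (7) (CombSUM): final score is the sum of the adjusted scores."""
    docs = set(rel_a_star) | set(rel_b_star)
    return {d: rel_a_star.get(d, 0.0) + rel_b_star.get(d, 0.0) for d in docs}
```

Documents in highly reliable clusters get boosted before the lists are summed, so they rise in the fused ranking.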
| { |
| "text": "In this section, we will present the results of our experiments. We will first describe our experimental settings in Section 4.1. In Section 4.2, we will verify the two hypotheses described in Section 3 using the results of some experiments. In Section 4.3, we will compare our approach with the other three conventional fusion methods. Finally, we will examine the impact of cluster size.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We used several retrieval results from the TREC-5 Chinese information retrieval track in our fusion experiments. The document collection contains articles published in the People's Daily and news released by the Xinhua News Agency. Some statistical characteristics of the collection are summarized in Table 3 . The 10 groups that took part in TREC-5 Chinese provided 20 retrieval results. We randomly picked seven ranked lists for our fusion experiments. The tags and average precision are listed in Table 4 . It is noted that the average precision is similar except for HIN300. Since the ranges of similarity values of the different retrieval results were quite different, we normalized each retrieval result before combining them. The bound of each retrieval result was mapped to [0,1] using the following formula [Lee 1997 ", |
| "cite_spans": [ |
| { |
| "start": 815, |
| "end": 824, |
| "text": "[Lee 1997", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 499, |
| "end": 506, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment settings", |
| "sec_num": "4.1" |
| }, |
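The normalization formula itself is truncated in this parse; the mapping [Lee 1997] describes is the standard min-max scaling of each result's scores to [0, 1], which can be sketched as:

```python
def normalize(rel):
    """Map a result set's relevance scores to [0, 1] by min-max scaling:
    (score - min) / (max - min)."""
    lo, hi = min(rel.values()), max(rel.values())
    if hi == lo:
        # degenerate result set: all scores identical
        return {d: 0.0 for d in rel}
    return {d: (s - lo) / (hi - lo) for d, s in rel.items()}
```

After this step the scores of different systems are directly comparable, which is what makes the linear combinations above meaningful.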
| { |
| "text": "We will first examine the two hypotheses we mentioned in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Examining the hypotheses", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In relation to the Clustering Hypothesis, we clustered each ranked list into 10 clusters using our clustering algorithm. Table 5 shows some statistical information for the clustering results. The first row lists four kinds of clusters, containing no, 1, 2-10 and more than 10 relevant document(s). The second row shows the corresponding percentage of each kind of cluster.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 124, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Examining the hypotheses", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The third row shows the percentage of relevant documents in each kind of cluster.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 119", |
| "sec_num": null |
| }, |
| { |
| "text": "From Table 5 , we can make two observations. First, about 50% of the clusters contain 1 or no relevant document. Second, most relevant documents (more than 60%) are in a small number of clusters (about 7%). According to these observations, we can draw the conclusion that relevant documents are concentrated in a few clusters.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 119", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, in our experiments, the Clustering Hypothesis holds in terms of the initial retrieval result when a proper algorithm is adopted. In relation to the Fusion Hypothesis, we computed overlap_R and overlap_N for each combination pair. Table 6 lists some results. The last row shows that the average overlap_R is 0.7688, while the corresponding average overlap_N is 0.3351. It turns out that the Fusion Hypothesis holds for the retrieval results we obtained. Table 6 will also be used in Section 4.3 to confirm that overlap_R is the most important factor determining the performance of fusion. We mark those rows whose overlap_R scores are higher than 0.80 with the character *.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 162, |
| "end": 169, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 385, |
| "end": 392, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 119", |
| "sec_num": null |
| }, |
| { |
| "text": "First, we studied three combination methods that were proposed by Fox, namely, CombMAX, CombSUM, and CombMNZ. Their fusion results for the same data set are listed in Table 7 . The last row lists the average precision of each combination strategy. Since the average precision of the individual retrieval systems is 0.3086 (see Table 4 ), each of these three fusion methods has improved significantly in terms of the average precision. CombSUM appears to be the best one among them. This confirms the observation in [Fox 1994 ].", |
| "cite_spans": [ |
| { |
| "start": 515, |
| "end": 524, |
| "text": "[Fox 1994", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 174, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 327, |
| "end": 334, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with conventional fusion methods", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Then, we compared the performance of our approach with that of the other three methods, as shown in the last row in Table 7 . Our new approach achieved a 3% improvement over CombSUM. We also find that among all 21 combination pairs, 17 are improved compared to the results obtained using the CombSUM approach. We mark these rows with the character *. Comparing the results shown in Table 7 with those listed in Table 6 , we find that the pairs with an overlap_R of over 0.80 correspond to better combination performance. We call this kind of pair a combinable pair. For example, BrklyCH1 & CLCHNA is a combinable pair. Although the average combination performance is 0.3654 (using our approach), almost all the combinable pairs exceed the average performance 3 . This again confirms the conclusion in both [Lee 1997] and [Vogt 1998 ] that the performance of fusion heavily depends on overlap_R. It also reveals the limitation of our approach and of other linear fusion techniques in that a high overlap of relevant documents is a pre-requisite for performance enhancement. For those pairs that do not satisfy this pre-requisite, normal fusion may even decrease retrieval performance.", |
| "cite_spans": [ |
| { |
| "start": 816, |
| "end": 826, |
| "text": "[Lee 1997]", |
| "ref_id": null |
| }, |
| { |
| "start": 831, |
| "end": 841, |
| "text": "[Vogt 1998", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 123, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 393, |
| "end": 400, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 422, |
| "end": 429, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with conventional fusion methods", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also compared our approach with the optimal linear combination. Since ranked lists 3 \"gmu96ca1 & gmu96cm1\" is an exception because their related overlap N score is very high.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 121", |
| "sec_num": null |
| }, |
| { |
| "text": "J. Zhang et al.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "122", |
| "sec_num": null |
| }, |
| { |
| "text": "are combined linearly, only the ratio of the two weights affects the final performance:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "122", |
| "sec_num": null |
| }, |
| { |
| "text": ". B A combined wRL RL RL + =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "122", |
| "sec_num": null |
| }, |
| { |
| "text": "CombSUM can be taken as a special case of linear combination where w is set to be 1. When the relevant documents are known, the weight w can be optimized using some numerical method. In our experiment, the weight w was optimized using golden section search [Press 1992 ]. This approach was adopted in [Vogt 1998 ]. The average precision for the optimal linear combination we obtained is 0.3714. As shown in Fig.3 , our approach performs better than CombSUM and CombMAX and is very close to CombBest. To summarize, we can draw three conclusions from the above experiments. First, in most cases, our new approach shows better performance than most of the conventional methods, including CombSUM and CombMNZ. Second, overlap R strongly affects the performance of linear fusion. Third, the performance of our approach is very close to that of the optimal linear combination approach.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 268, |
| "text": "[Press 1992", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 301, |
| "end": 311, |
| "text": "[Vogt 1998", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 407, |
| "end": 412, |
| "text": "Fig.3", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "122", |
| "sec_num": null |
| }, |
| { |
| "text": "We also studied the impact of cluster size. Table 8 shows the experimental results. When the cluster size varied from 200 to 5, the average precision did not change much. The maximum value was 0.3675 when the cluster size was 25 and the minimum value was 0.3621", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 8", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Impact of cluster size", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "when the cluster size was 200. This shows that the cluster size setting has very little impact in our approach. Another interesting question is what will happen when the cluster size is set to 1000 or 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": "When the cluster size is set to 1000, each ranked list becomes a single cluster. Then, the reliability of A C and B C can be computed as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": ". ) , ( _ ) ( ) ( B A B A B A C C C C CC Sim C r C r \u2229 = = = Since ) ( A C r and ) ( B C r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": "are equal, the re-ranking and fusion step becomes a normal CombSUM approach, and the average precision is equal to that of the CombSUM approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": "When the cluster size is set to 1, each document forms a cluster by itself. Those documents appearing in both ranked lists will be improved. For those documents that only appear in one ranked list, their relevance will remain unchanged. On the other hand, the relevance score of those documents that appear in both ranked lists will be improved with a . The final result will be close to that of the CombSUM approach because this factor is close to 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": "The impact of the cluster size setting is illustrated in Fig.4 . From this figure, we find that fusion combined with clustering is consistently better than the approaches that do not include clustering (where cluster size = 1000). We find that a setting size to 25 gives the best combination when the ranked list has a size of 1,000. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 62, |
| "text": "Fig.4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improving the Effectiveness of Information Retrieval with Clustering and Fusion 123", |
| "sec_num": null |
| }, |
| { |
| "text": "Combining multiple retrieval results is certainly a practical technique for improving the overall performance of information retrieval systems. In this paper, we have proposed a novel fusion method that can be combined with document clustering to improve retrieval performance. Our approach consists of three steps. First, we apply clustering to the initial ranked document lists to obtain a list of document clusters. Then, we identify reliable clusters and adjust each ranked list separately using our re-ranking approach. Finally, conventional fusion is carried out to produce an adjusted ranked list.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Since our approach is based on two hypotheses, we first verified them by means of experiments. We also compared our approach with other conventional approaches. The results show that each of them achieves some improvement, and that our approach compares favorably with them. We also investigated the impact of cluster size. We found that our approach is rather stable under variation in the size of clusters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Although our method showed good performance in our experiments, we believe it still can be improved further. A better clustering algorithm for identifying more reliable clusters and more elaborate formula for re-ranking ranked lists should lead to further improvement. These will be topics for our future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automatic Combination of Multiple Ranked Retrieval Systems", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [ |
| "T" |
| ], |
| "last": "Bartell", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "W" |
| ], |
| "last": "Cottrell", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "K" |
| ], |
| "last": "Belew", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "173--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bartell,B.T., Cottrell,G.W., and Belew,R.K., \"Automatic Combination of Multiple Ranked Retrieval Systems,\" Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, 1994, pp. 173-181.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Scatter/gather: A Cluster-based Approach to Browsing Large Document Collections", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "R" |
| ], |
| "last": "Cutting", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "R" |
| ], |
| "last": "Karger", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "O" |
| ], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "W" |
| ], |
| "last": "Tukey", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 15th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "126--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.R.Cutting, D.R.Karger, J.O.Pedersen, and J.W.Tukey, \"Scatter/gather: A Cluster-based Approach to Browsing Large Document Collections,\" Proceedings of the 15th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, 1992, pp. 126-135.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Combination of Multiple Searches", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "The Second Text Retrieval Conference (TREC2), NIST Special Publication 500-215", |
| "volume": "", |
| "issue": "", |
| "pages": "243--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fox,E. and Shaw,J., \"Combination of Multiple Searches,\" The Second Text Retrieval Conference (TREC2), NIST Special Publication 500-215, 1994, pp. 243-252.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Reexamining the Cluster Hypothesis: Scatter/Gather on Retrieval Results", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hearst", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "O" |
| ], |
| "last": "Pedersen", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 19th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "76--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hearst,M.A., and Pedersen,J.O., \"Reexamining the Cluster Hypothesis: Scatter/Gather on Retrieval Results,\" Proceedings of the 19th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, 1996, pp. 76-82.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Analyses of Multiple Evidence Combination", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "H" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 20th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "267--276", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.H.Lee. \"Analyses of Multiple Evidence Combination.,\" Proceedings of the 20th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, 1997, pp. 267-276.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The Best of Both Worlds: Combining Ranked List and Clustering", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Leuski", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A.Leuski and J.Allan, \"The Best of Both Worlds: Combining Ranked List and Clustering,\" CIIR Technical Report IR-172, 1999, http://cobar.cs.umass.edu/pubfiles/ir-172.ps.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Improving Interactive Retrieval by Combining Ranked List and Clustering", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Leuski", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Informations Assistee par Ordinateur = Computer-Assisted Information Retrieval)", |
| "volume": "", |
| "issue": "", |
| "pages": "665--681", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A.Leuski and J.Allan, \"Improving Interactive Retrieval by Combining Ranked List and Clustering,\" Proceedings of RIAO(Recherche d'Informations Assistee par Ordinateur = Computer-Assisted Information Retrieval) 2000 Conference, 2000, pp. 665-681.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A Combination of Expert Opinion Approach to Probabilistic Information Retrieval, part I: The Conceptual Model", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Information Processing and Management", |
| "volume": "26", |
| "issue": "3", |
| "pages": "371--382", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thompson,P., \"A Combination of Expert Opinion Approach to Probabilistic Information Retrieval, part I: The Conceptual Model,\" Information Processing and Management, 26(3) 1990, pp. 371-382.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Using Relevance to Train a Linear Mixture of Experts", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vogt", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Cottrell", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Belew", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Bartell", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 5th Text Retrieval Conference (TREC5), NIST Special Publication 500-238", |
| "volume": "", |
| "issue": "", |
| "pages": "503--516", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogt,C., Cottrell,G., Belew,R. and Bartell,B., \"Using Relevance to Train a Linear Mixture of Experts,\" Proceedings of the 5th Text Retrieval Conference (TREC5), NIST Special Publication 500-238, 1997, pp. 503-516.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Predicting the Performance of Linearly Combined IR Systems", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vogt", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Cottrell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 21st Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "190--196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogt,C. and G.Cottrell., \"Predicting the Performance of Linearly Combined IR Systems,\" Proceedings of the 21st Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, 1998, pp. 190-196.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Fusion Via a Linear Combination of Scores", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vogt", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Cottrell", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Information Retrieval", |
| "volume": "1", |
| "issue": "2-3", |
| "pages": "151--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogt,C. and Cottrell,G., \"Fusion Via a Linear Combination of Scores,\" Information Retrieval, 1(2-3), 1999, pp. 151-173.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Overview of the Sixth Text Retrieval Conference (TREC-6)", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Voorhees", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Harman", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "NIST Special Publication", |
| "volume": "", |
| "issue": "", |
| "pages": "1--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E.Voorhees, D.Harman, \"Overview of the Sixth Text Retrieval Conference (TREC-6),\" NIST Special Publication 500-240, 1997. pp. 1-24.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Numerical Recipes in C -The Art of Scientific Computing", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "H" |
| ], |
| "last": "Press", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Teukolsky", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "T" |
| ], |
| "last": "Vetterling", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "P" |
| ], |
| "last": "Flannery", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Press,W.H., Teukolsky,S.A., Vetterling,W.T., and Flannery,B.P., Numerical Recipes in C - The Art of Scientific Computing, Cambridge University Press, 1992.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "shows the basis idea behind our approach. Two clusters (a1 and b1) from different ranked lists that have the largest overlap are identified as reliable clusters." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Clustering results of two ranked lists." |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "shows our clustering algorithm. The LoopThreshold and ShiftThreshold value were set to 10 in our experiments.Randomly set document i d to cluster j C ; LoopCount =0; ShiftCount = 1000; While (LoopCount < LoopThreshold and ShiftCount > ShiftThreshold) Do Construct the centroid of each cluster, i.e. to its nearest cluster(the distance is determined by the similarity between i d and the centroid of the cluster); ShiftCount = the number of documents shifted to other cluster;LoopCount++; Algorithm for document clustering." |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Performance of different approaches." |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Impact of cluster size." |
| }, |
| "TABREF0": { |
| "text": "Formulas proposed byFox & Shaw.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Name</td><td colspan=\"4\">Combined Similarity =</td></tr><tr><td>CombMAX</td><td colspan=\"4\">MAX(Individual Similarities)</td></tr><tr><td>CombMIN</td><td colspan=\"4\">MIN(Individual Similarities)</td></tr><tr><td>CombSUM</td><td colspan=\"4\">SUM(Individual Similarities)</td></tr><tr><td>CombANZ</td><td colspan=\"3\">dual SUM(Indivi</td><td>es) Similariti</td></tr><tr><td/><td>Number</td><td>of</td><td colspan=\"2\">Nonzero</td><td>es Similariti</td></tr><tr><td>CombMNZ</td><td colspan=\"4\">SUM(Individual Similarities) * Number of Nonzero</td></tr><tr><td/><td/><td colspan=\"3\">Similarities</td></tr></table>" |
| }, |
| "TABREF1": { |
| "text": "Lee observed that fusion works well for result sets that have a high overlap R", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"3\">112</td><td/><td>J. Zhang et al.</td></tr><tr><td/><td/><td/><td/><td>and a low</td></tr><tr><td colspan=\"2\">N</td><td>overlap</td><td colspan=\"2\">. Inspired by this observation, we also incorporate common R</td></tr><tr><td/><td/><td/><td>A RL 1 . common R</td><td>is the number of common relevant documents in</td><td>A RL and B RL .</td></tr><tr><td colspan=\"2\">N</td><td>common</td><td colspan=\"2\">is the number of common irrelevant documents in</td><td>A RL and B RL .</td></tr><tr><td>1</td><td colspan=\"4\">A RL means ranked list returned by retrieval system A.</td></tr></table>" |
| }, |
| "TABREF2": { |
| "text": "Notations.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Symbol</td><td>Explanation</td></tr></table>" |
| }, |
| "TABREF4": { |
| "text": "Characteristics of the TREC-5 Chinese collection.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Number of docs</td><td>164,811</td></tr><tr><td>Total size (Mega Bytes)</td><td>170</td></tr><tr><td>Average doc length (Characters)</td><td>507</td></tr><tr><td>Number of queries</td><td>28</td></tr><tr><td>Average query length (Characters)</td><td>119</td></tr><tr><td>Average number of relevant docs/query</td><td>93</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "Average precision of individual retrieval system", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Ranked list</td><td>AvP (11 pt)</td></tr><tr><td>BrklyCH1</td><td>0.3568</td></tr><tr><td>CLCHNA</td><td>0.2702</td></tr><tr><td>Cor5C1vt</td><td>0.3647</td></tr><tr><td>HIN300</td><td>0.1636</td></tr><tr><td>City96c1</td><td>0.3256</td></tr><tr><td>Gmu96ca1</td><td>0.3218</td></tr><tr><td>gmu96cm1</td><td>0.3579</td></tr><tr><td>Average :</td><td>0.3086</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "Distribution of relevant docs.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Different kinds of</td><td>Containing</td><td>Containing</td><td>Containing</td><td>Containing</td></tr><tr><td>clusters</td><td>no relevant</td><td>1 relevant</td><td>2-10 relevant</td><td>>10 relevant</td></tr><tr><td/><td>doc</td><td>doc</td><td>docs</td><td>docs</td></tr><tr><td>Percentage of each</td><td/><td/><td/><td/></tr><tr><td>kind of cluster</td><td>38.3%</td><td>15.0%</td><td>35.0%</td><td>7.0%</td></tr><tr><td>Percentage of</td><td/><td/><td/><td/></tr><tr><td>relevant docs</td><td/><td/><td/><td/></tr><tr><td>contained in this kind of cluster</td><td>0%</td><td>3.7%</td><td>35.8%</td><td>60.5%</td></tr><tr><td colspan=\"3\">To test the Fusion Hypothesis, we computed overlap R</td><td>and overlap N</td><td/></tr></table>" |
| }, |
| "TABREF8": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>overlap R</td><td>and overlap N</td><td colspan=\"3\">values of combination pairs.</td></tr><tr><td colspan=\"2\">Combination pair</td><td>overlap R</td><td>N</td><td>overlap</td></tr><tr><td colspan=\"2\">BrklyCH1 & CLCHNA</td><td>* 0.8542</td><td/><td>0.3398</td></tr><tr><td colspan=\"2\">BrklyCH1 & Cor5C1vt</td><td>* 0.9090</td><td/><td>0.4393</td></tr><tr><td colspan=\"2\">BrklyCH1 & HIN300</td><td>0.4985</td><td/><td>0.2575</td></tr><tr><td colspan=\"2\">BrklyCH1 & City96c1</td><td>* 0.8996</td><td/><td>0.4049</td></tr><tr><td colspan=\"2\">BrklyCH1 & Gmu96ca1</td><td>* 0.8784</td><td/><td>0.3259</td></tr><tr><td colspan=\"2\">BrklyCH1 & gmu96cm1</td><td>* 0.8871</td><td/><td>0.3292</td></tr><tr><td colspan=\"2\">CLCHNA & Cor5C1vt</td><td>* 0.8728</td><td/><td>0.4118</td></tr><tr><td colspan=\"2\">CLCHNA & HIN300</td><td>0.4652</td><td/><td>0.2172</td></tr><tr><td colspan=\"2\">CLCHNA & City96c1</td><td>* 0.8261</td><td/><td>0.2668</td></tr><tr><td colspan=\"2\">CLCHNA & Gmu96ca1</td><td>* 0.8447</td><td/><td>0.3090</td></tr><tr><td colspan=\"2\">CLCHNA & gmu96cm1</td><td>* 0.8585</td><td/><td>0.3412</td></tr><tr><td colspan=\"2\">Cor5C1vt & HIN300</td><td>0.4961</td><td/><td>0.2392</td></tr><tr><td colspan=\"2\">Cor5C1vt & City96c1</td><td>* 0.8763</td><td/><td>0.2943</td></tr><tr><td colspan=\"2\">Cor5C1vt & Gmu96ca1</td><td>* 0.9193</td><td/><td>0.4742</td></tr><tr><td colspan=\"2\">Cor5C1vt & gmu96cm1</td><td>* 0.9185</td><td/><td>0.4525</td></tr><tr><td colspan=\"2\">HIN300 & City96c1</td><td>0.4813</td><td/><td>0.1555</td></tr><tr><td colspan=\"2\">HIN300 & Gmu96ca1</td><td>0.4636</td><td/><td>0.1854</td></tr><tr><td colspan=\"2\">HIN300 & gmu96cm1</td><td>0.4701</td><td/><td>0.2004</td></tr><tr><td colspan=\"2\">City96c1 & Gmu96ca1</td><td>* 0.8698</td><td/><td>0.2854</td></tr><tr><td colspan=\"2\">City96c1 & gmu96cm1</td><td>* 0.8860</td><td/><td>0.3005</td></tr><tr><td colspan=\"2\">Gmu96ca1 & gmu96cm1</td><td>* 
0.9687</td><td/><td>0.8064</td></tr><tr><td>Average</td><td/><td>0.7688</td><td/><td>0.3351</td></tr></table>" |
| }, |
| "TABREF9": { |
| "text": "Average precision of each combination pair.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Combination pair</td><td>Comb MAX</td><td>Comb SUM</td><td>Comb MNZ</td><td>Our Approach (Cluster size=100)</td></tr><tr><td>BrklyCH1 & CLCHNA</td><td>0.3401</td><td>0.3627</td><td>0.3549</td><td>* 0.3755</td></tr><tr><td>BrklyCH1 & Cor5C1vt</td><td>0.3832</td><td>0.3976</td><td>0.3961</td><td>* 0.4107</td></tr><tr><td>BrklyCH1 & HIN300</td><td>0.3560</td><td>0.3243</td><td>0.2618</td><td>0.3107</td></tr><tr><td>BrklyCH1 & city96c1</td><td>0.3650</td><td>0.3833</td><td>0.3856</td><td>* 0.3912</td></tr><tr><td>BrklyCH1 & gmu96ca1</td><td>0.3753</td><td>0.4028</td><td>0.3999</td><td>* 0.4022</td></tr><tr><td>BrklyCH1 & gmu96cm1</td><td>0.3979</td><td>0.4234</td><td>0.4201</td><td>* 0.4243</td></tr><tr><td>CLCHNA & Cor5C1vt</td><td>0.3434</td><td>0.3560</td><td>0.3492</td><td>* 0.3707</td></tr><tr><td>CLCHNA & HIN300</td><td>0.2746</td><td>0.2478</td><td>0.2154</td><td>0.2579</td></tr><tr><td>CLCHNA & city96c1</td><td>0.3007</td><td>0.3459</td><td>0.3573</td><td>* 0.3931</td></tr><tr><td>CLCHNA & gmu96ca1</td><td>0.3269</td><td>0.3667</td><td>0.3634</td><td>* 0.3690</td></tr><tr><td>CLCHNA & gmu96cm1</td><td>0.3555</td><td>0.3864</td><td>0.3783</td><td>* 0.3883</td></tr><tr><td>Cor5C1vt & HIN300</td><td>0.3778</td><td>0.3081</td><td>0.2520</td><td>0.3139</td></tr><tr><td>Cor5C1vt & city96c1</td><td>0.3709</td><td>0.4091</td><td>0.4104</td><td>* 0.4285</td></tr><tr><td>Cor5C1vt & gmu96ca1</td><td>0.3568</td><td>0.3684</td><td>0.3676</td><td>* 0.3724</td></tr><tr><td>Cor5C1vt & gmu96cm1</td><td>0.3831</td><td>0.3926</td><td>0.3911</td><td>* 0.3975</td></tr><tr><td>HIN300 & city96c1</td><td>0.2616</td><td>0.2565</td><td>0.2444</td><td>0.3036</td></tr><tr><td>HIN300 & gmu96ca1</td><td>0.3466</td><td>0.2942</td><td>0.2464</td><td>0.2954</td></tr><tr><td>HIN300 & gmu96cm1</td><td>0.3764</td><td>0.3205</td><td>0.2613</td><td>0.3150</td></tr><tr><td>city96c1 & gmu96ca1</td><td>0.3310</td><td>0.3764</td><td>0.3854</td><td>* 
0.3939</td></tr><tr><td>city96c1 & gmu96cm1</td><td>0.3595</td><td>0.3970</td><td>0.4047</td><td>* 0.4090</td></tr><tr><td>gmu96ca1 & gmu96cm1</td><td>0.3451</td><td>0.3514</td><td>0.3511</td><td>* 0.3505</td></tr><tr><td>Average:</td><td>0.3489</td><td>0.3557</td><td>0.3426</td><td>0.3654</td></tr></table>" |
| }, |
| "TABREF10": { |
| "text": "Impact of cluster size.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Size of Cluster</td><td>200</td><td>100</td><td>50</td><td>25</td><td>10</td><td>5</td></tr><tr><td>11pt AvP</td><td colspan=\"6\">0.3621 0.3654 0.3661 0.3675 0.3668 0.3661</td></tr></table>" |
| } |
| } |
| } |
| } |