{
"paper_id": "U13-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:10:26.939518Z"
},
"title": "Cumulative Progress in Language Models for Information Retrieval",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Puurula",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Waikato Private",
"location": {
"postCode": "3105, 3240",
"settlement": "Bag, Hamilton",
"country": "New Zealand"
}
},
"email": "asp12@students.waikato.ac.nz"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The improvements to ad-hoc IR systems over the last decades have been recently criticized as illusionary and based on incorrect baseline comparisons. In this paper several improvements to the LM approach to IR are combined and evaluated: Pitman-Yor Process smoothing, TF-IDF feature weighting and modelbased feedback. The increases in ranking quality are significant and cumulative over the standard baselines of Dirichlet Prior and 2-stage Smoothing, when evaluated across 13 standard ad-hoc retrieval datasets. The combination of the improvements is shown to improve the Mean Average Precision over the datasets by 17.1% relative. Furthermore, the considered improvements can be easily implemented with little additional computation to existing LM retrieval systems. On the basis of the results it is suggested that LM research for IR should move towards using stronger baseline models.",
"pdf_parse": {
"paper_id": "U13-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "The improvements to ad-hoc IR systems over the last decades have been recently criticized as illusionary and based on incorrect baseline comparisons. In this paper several improvements to the LM approach to IR are combined and evaluated: Pitman-Yor Process smoothing, TF-IDF feature weighting and modelbased feedback. The increases in ranking quality are significant and cumulative over the standard baselines of Dirichlet Prior and 2-stage Smoothing, when evaluated across 13 standard ad-hoc retrieval datasets. The combination of the improvements is shown to improve the Mean Average Precision over the datasets by 17.1% relative. Furthermore, the considered improvements can be easily implemented with little additional computation to existing LM retrieval systems. On the basis of the results it is suggested that LM research for IR should move towards using stronger baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Research on ad-hoc Information Retrieval (IR) has been recently criticized for being based on incorrect baseline comparisons. According to extensive evaluation of IR systems from over a decade, no progress has been demonstrated on standard datasets (Armstrong et al., 2009a; Armstrong et al., 2009b) .",
"cite_spans": [
{
"start": 249,
"end": 274,
"text": "(Armstrong et al., 2009a;",
"ref_id": "BIBREF0"
},
{
"start": 275,
"end": 299,
"text": "Armstrong et al., 2009b)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose that although much of this criticism is valid, much of the more recent progress in Language Model-based (LM) IR has not been evaluated or received the attention that it deserved. We evaluate on 13 standard IR datasets some of the improvements that have been suggested to LMs over the years. It is shown that the combination of Pitman-Yor Process smoothing, TF-IDF feature weighting and Model-based Feedback produces a substantial and cumulative improvement over the common baseline LM smoothing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Improvements to LMs for IR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The LM approach to ad-hoc IR considers documents and queries to be generated by underlying n-gram LMs. The Query Likelihood (QL) framework for LM retrieval (Hiemstra, 1998) treats queries as being generated by document models, reducing the retrieval of the most relevant documents into ranking documents by the posterior probability of each document given the query. Unigram LMs and a uniform distribution over document priors is commonly assumed, so that the QLscore for each document correspond to the conditional log-probability of the query given the document:",
"cite_spans": [
{
"start": 156,
"end": 172,
"text": "(Hiemstra, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LM Approach to IR",
"sec_num": "2.1"
},
{
"text": "log p m (w) = log Z(w) + n w n log p m (n), (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Approach to IR",
"sec_num": "2.1"
},
{
"text": "where Z(w) is a Multinomial normalizer, w is the query word count vector, and p m (n) is given by a Multinomial estimated from the document word count vector d m :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Approach to IR",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p m (n) = d mn ||d m || 1",
"eq_num": "(2)"
}
],
"section": "LM Approach to IR",
"sec_num": "2.1"
},
{
"text": "The QL framework is the standard application of LMs to IR. It is equivalent to using a Multinomial Naive Bayes model for ranking, with classes corresponding to documents, and a uniform prior over the document models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Approach to IR",
"sec_num": "2.1"
},
{
"text": "The standard choices for LM model smoothing in IR have been Dirichlet Prior (DP) and 2stage Smoothing (2SS) (Zhai and Lafferty, 2004; Smucker and Allan, 2007; Zhai, 2008) . A recent improvement has been Pitman-Yor Process (PYP) smoothing, derived as approximate inference on a Hierarchical Pitman-Yor Process (Momtazi and Klakow, 2010; Huang and Renals, 2010) . All methods interpolate document model parameter estimates linearly with a background model, differing in how the interpolation weight is determined. PYP applies additionally power-law discounting of the document counts. For all methods the smoothed parameter estimates can be expressed in the form:",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Zhai and Lafferty, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 134,
"end": 158,
"text": "Smucker and Allan, 2007;",
"ref_id": "BIBREF19"
},
{
"start": 159,
"end": 170,
"text": "Zhai, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 336,
"end": 359,
"text": "Huang and Renals, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p m (n) = (1 \u2212 \u03b1 m ) d mn ||d m || 1 + \u03b1 m p c (n),",
"eq_num": "(3)"
}
],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "where d m is the discounted count vector, p c (n) is the background model and \u03b1 m is the smoothing weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "DP chooses the smoothing weight as \u03b1 m = 1 \u2212 ||dm|| 1 ||dm|| 1 +\u00b5 , where \u00b5 is a parameter. 2SS combines DP with Jelinek-Mercer smoothing, using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "\u03b1 m = 1 \u2212 ||dm|| 1 \u2212\u03b2||dm|| 1 ||dm|| 1 +\u00b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": ", where \u03b2 is a linear interpolation parameter. PYP uses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "\u03b1 m = 1 \u2212 ||d m || 1 ||dm|| 1 +\u00b5 , with the discounted counts d mn = max(d mn \u2212\u2206 mn , 0), where \u2206 mn = \u03b4d \u03b4 mn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "is produced by Power-law Discounting (Huang and Renals, 2010) with the discounting parameter \u03b4. Replacing the discounting in PYP with the linear Jelinek-Mercer smoothing reproduces the 2SS estimates:",
"cite_spans": [
{
"start": 37,
"end": 61,
"text": "(Huang and Renals, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "||d m || 1 = ||d m || 1 \u2212 \u03b2||d m || 1 . PYP is therefore a non-linear discounting version of 2SS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "The background model p c (n) is commonly a collection model estimated by treating all available documents as a single large document:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "p c (n) = m dmn n m d m n . A uniform distribu- tion is less commonly used: p c (n) = 1 |N | .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitman-Yor Process Smoothing",
"sec_num": "2.2"
},
{
"text": "Unigram LMs make several incorrect modeling assumptions about natural language, such as considering all words equally informative. Feature weighting has shown to be useful in improving the effectiveness of Multinomial models in both IR (Smucker and Allan, 2006; and other uses (Rennie et al., 2003; Frank and Bouckaert, 2006) . This is in contrast to earlier theory in IR that considered smoothing with collection model as non-complementary to feature weighting (Zhai and Lafferty, 2004) .",
"cite_spans": [
{
"start": 236,
"end": 261,
"text": "(Smucker and Allan, 2006;",
"ref_id": "BIBREF18"
},
{
"start": 277,
"end": 298,
"text": "(Rennie et al., 2003;",
"ref_id": "BIBREF15"
},
{
"start": 299,
"end": 325,
"text": "Frank and Bouckaert, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 462,
"end": 487,
"text": "(Zhai and Lafferty, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF Feature Weighting",
"sec_num": "2.3"
},
{
"text": "TF-IDF word weighting for dataset documents can be done by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF Feature Weighting",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d n = log(1 + d n ||d || 0 ) log M M n ,",
"eq_num": "(4)"
}
],
"section": "TF-IDF Feature Weighting",
"sec_num": "2.3"
},
{
"text": "where d is the unweighted count vector, ||d || 0 the number of unique words in the document, M the number of documents and M n the number of documents where the word n occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF Feature Weighting",
"sec_num": "2.3"
},
{
"text": "The first factor in Equation 4 is a TF log transform, using unique length normalization (Singhal et al., 1996) . The second factor is Robertson-Walker IDF (Robertson and Zaragoza, 2009) . Weighting query word vectors works identically. Collection model smoothing has an overlapping function to IDF weighting (Hiemstra and Kraaij, 1998) . Here this interaction is taken into account by changing the background smoothing distribution into a uniform distribution.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Singhal et al., 1996)",
"ref_id": "BIBREF17"
},
{
"start": 155,
"end": 185,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 308,
"end": 335,
"text": "(Hiemstra and Kraaij, 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF Feature Weighting",
"sec_num": "2.3"
},
{
"text": "Pseudo-feedback is a traditional method used in IR that can have a large impact on retrieval performance. The top ranked documents can be used to construct a query model for a second pass of retrieval. With LMs there are two different ways to formalize this: KL-divergence Retrieval (Zhai and Lafferty, 2001) and Relevance Models (Lavrenko and Croft, 2001 ). Both methods enable replacing the query vector with a model (Zhai, 2008) .",
"cite_spans": [
{
"start": 283,
"end": 308,
"text": "(Zhai and Lafferty, 2001)",
"ref_id": "BIBREF22"
},
{
"start": 330,
"end": 355,
"text": "(Lavrenko and Croft, 2001",
"ref_id": "BIBREF9"
},
{
"start": 419,
"end": 431,
"text": "(Zhai, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "A number of variants exist for LM feedback modeling. Practical modeling choices are using only the top K retrieved documents, and truncating the query model to the words present in the original query (Zhai, 2008) . The documents can be weighted according to the posterior probability of the document given the query, p(d m |w) \u221d p m (w) (Lavrenko and Croft, 2001 ).",
"cite_spans": [
{
"start": 200,
"end": 212,
"text": "(Zhai, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 337,
"end": 362,
"text": "(Lavrenko and Croft, 2001",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "The query model can also be interpolated linearly with the original query (Zhai and Lafferty, 2001 ). These modeling choices are combined here, resulting in a robust feedback model that has the same complexity for inference as the original query.",
"cite_spans": [
{
"start": 74,
"end": 98,
"text": "(Zhai and Lafferty, 2001",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "Using the top K = 50 retrieved documents, the query words w n > 0 can be interpolated with the top document models p k (n):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w n = (1 \u2212 \u03bb) w n ||w || 1 \u03bb k p k (w ) p k (n) Z ,",
"eq_num": "(5)"
}
],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "where w is the original query, \u03bb is the interpolation weight, and Z is a normalizer for the feedback counts: Z = n:w n >0 k p k (w )p k (n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback Models",
"sec_num": "2.4"
},
{
"text": "Combining the LM improvements was evaluated on standard ad-hoc IR datasets. These are the TREC 1-5 1 datasets split according to data sources, OHSU-TREC 2 and FIRE 2008-2011 3 . Each dataset was filtered by stopwording, short word removal and Porter-stemming. The datasets were each split into a development set for calibrating parameters and a held-out evaluation set. The OHSU-TREC dataset was split according to documents, using ohsumed.87 for development and ohsumed.88-91 for evaluation. The TREC and FIRE datasets were split according to queries, using the first 3/5 of queries for each year as development data and the remaining 2/5 as the evaluation data. For OHSU-TREC the queries consisted of the title and description sections of queries 1-63. For TREC and FIRE the description sections were used from queries 1-450 and 26-175, respectively. Table 1 summarizes the dataset split sizes.",
"cite_spans": [],
"ref_spans": [
{
"start": 853,
"end": 860,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.5"
},
{
"text": "The software used for the experiments was SGMWeka version 1.44, an open source toolkit for generative modeling 4 . Ranking effectiveness for the experiments was evaluated using Mean Average Precision from the top 50 documents (MAP@50). Smoothing parameters were optimized for MAP@50 using a parallelized Gaussian Luke, 2009) on the development sets. The significance of experiment results was tested on the evaluation set MAP@50 scores of each dataset, using paired one-sided t-tests, with significance level p < 0.05.",
"cite_spans": [
{
"start": 313,
"end": 324,
"text": "Luke, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.5"
},
{
"text": "The experiment results are shown in Table 2 . Comparing PYP to DP and 2SS, PYP improves significantly on DP smoothing. The difference to 2SS is considerable as well, but not statistically significant due to variance. Adding TF-IDF (+TI) weighting to PYP, the improvement becomes significant over the 2SS baseline. Adding feedback (+FB) results in an improvement that is significant compared to both other improvements. The overall mean improvement over 2SS is 4.07 MAP@50, a 17.1% relative improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.5"
},
{
"text": "This paper presented an empirical evaluation of combining improvements to information retrieval language models. Experiments on standard adhoc IR datasets show that several improvements significantly and cumulatively improve on the baseline methods of LM retrieval using 2SS and DP smoothing methods. This contrasts with the reported illusionary improvements in IR literature (Armstrong et al., 2009a; Armstrong et al., 2009b) . The considered improvements require very little additional computation and can be implemented with small modifications to existing IR search engines. (Miller et al., 1999; Song and Croft, 1999; Clinchant et al., 2006; Krikon and Kurland, 2011) . Unfortunately, like the improvements discussed in this paper, many of these methods lack publicly available implementations, have been pursued by few researchers, and have been evaluated on a limited number of datasets. Evaluation of methods such as these could yield practical tools for IR and other applications of LMs.",
"cite_spans": [
{
"start": 376,
"end": 401,
"text": "(Armstrong et al., 2009a;",
"ref_id": "BIBREF0"
},
{
"start": 402,
"end": 426,
"text": "Armstrong et al., 2009b)",
"ref_id": "BIBREF1"
},
{
"start": 579,
"end": 600,
"text": "(Miller et al., 1999;",
"ref_id": "BIBREF12"
},
{
"start": 601,
"end": 622,
"text": "Song and Croft, 1999;",
"ref_id": "BIBREF20"
},
{
"start": 623,
"end": 646,
"text": "Clinchant et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 647,
"end": 672,
"text": "Krikon and Kurland, 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.6"
},
{
"text": "The criticism of progress in ad-hoc IR (Armstrong et al., 2009a; Armstrong et al., 2009a; Trotman and Keeler, 2011) has missed valuable developments in LM-based IR. A second matter neglected in this criticism is the shift towards the learning-to-rank framework of IR (Joachims, 2002; Li, 2011) , where individual retrieval models have reduced roles as base rankers and features. In this context it is not necessary for models to improve on a single measure or replace older ones; rather, it is sufficient that new models provide complementary information for combination of results.",
"cite_spans": [
{
"start": 39,
"end": 64,
"text": "(Armstrong et al., 2009a;",
"ref_id": "BIBREF0"
},
{
"start": 65,
"end": 89,
"text": "Armstrong et al., 2009a;",
"ref_id": "BIBREF0"
},
{
"start": 90,
"end": 115,
"text": "Trotman and Keeler, 2011)",
"ref_id": "BIBREF21"
},
{
"start": 267,
"end": 283,
"text": "(Joachims, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 284,
"end": 293,
"text": "Li, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.6"
},
{
"text": "The work reported here is preliminary and further experiments are required to understand possible interaction effects between the combined improvements. Given the performance and simplicity of the evaluated improvements, the commonly used DP and 2SS baselines for LMs should not generally be used as primary baselines for IR experiments. The combination of improvements shown in this paper is one potential baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.6"
},
{
"text": "http://trec.nist.gov/data/test coll.html 2 http://trec.nist.gov/data/t9 filtering.html 3 http://www.isical.ac.in/\u02dcclia/ 4 http://sourceforge.net/projects/sgmweka/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Has adhoc retrieval improved since 1994?",
"authors": [
{
"first": "Timothy",
"middle": [
"G"
],
"last": "Armstrong",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '09",
"volume": "",
"issue": "",
"pages": "692--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy G. Armstrong, Alistair Moffat, William Web- ber, and Justin Zobel. 2009a. Has adhoc retrieval improved since 1994? In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '09, pages 692-693, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improvements that don't add up: ad-hoc retrieval results since 1998",
"authors": [
{
"first": "Timothy",
"middle": [
"G"
],
"last": "Armstrong",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th ACM conference on Information and knowledge management, CIKM '09",
"volume": "",
"issue": "",
"pages": "601--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy G. Armstrong, Alistair Moffat, William Web- ber, and Justin Zobel. 2009b. Improvements that don't add up: ad-hoc retrieval results since 1998. In Proceedings of the 18th ACM conference on In- formation and knowledge management, CIKM '09, pages 601-610, New York, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lexical entailment for information retrieval",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Clinchant",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gaussier",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 28th European conference on Advances in Information Retrieval, ECIR'06",
"volume": "",
"issue": "",
"pages": "217--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Clinchant, Cyril Goutte, and Eric Gaussier. 2006. Lexical entailment for information retrieval. In Proceedings of the 28th European conference on Advances in Information Retrieval, ECIR'06, pages 217-228, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Naive bayes for text classification with unbalanced classes",
"authors": [
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Remco",
"middle": [
"R"
],
"last": "Bouckaert",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 10th European conference on Principle and Practice of Knowledge Discovery in Databases, PKDD'06",
"volume": "",
"issue": "",
"pages": "503--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eibe Frank and Remco R. Bouckaert. 2006. Naive bayes for text classification with unbalanced classes. In Proceedings of the 10th European conference on Principle and Practice of Knowledge Discovery in Databases, PKDD'06, pages 503-510, Berlin, Hei- delberg. Springer-Verlag.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Twentyone at trec-7: Ad-hoc and cross-language track",
"authors": [
{
"first": "Djoerd",
"middle": [],
"last": "Hiemstra",
"suffix": ""
},
{
"first": "Wessel",
"middle": [],
"last": "Kraaij",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of Seventh Text REtrieval Conference (TREC-7",
"volume": "",
"issue": "",
"pages": "227--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djoerd Hiemstra and Wessel Kraaij. 1998. Twenty- one at trec-7: Ad-hoc and cross-language track. In In Proc. of Seventh Text REtrieval Conference (TREC-7, pages 227-238.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A linguistically motivated probabilistic model of information retrieval",
"authors": [
{
"first": "Djoerd",
"middle": [],
"last": "Hiemstra",
"suffix": ""
}
],
"year": 1998,
"venue": "Research and Advanced Technology for Digital Libraries",
"volume": "1513",
"issue": "",
"pages": "569--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djoerd Hiemstra. 1998. A linguistically motivated probabilistic model of information retrieval. In Re- search and Advanced Technology for Digital Li- braries, volume 1513 of Lecture Notes in Computer Science, pages 569-584, Berlin, Germany. Springer Verlag.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Power law discounting for n-gram language models",
"authors": [
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2010,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "5178--5181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Songfang Huang and Steve Renals. 2010. Power law discounting for n-gram language models. In ICASSP, pages 5178-5181. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '02",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2002. Optimizing search en- gines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '02, pages 133-142, New York, NY, USA. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A study of the integration of passage-, document-, and clusterbased information for re-ranking search results",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Krikon",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Kurland",
"suffix": ""
}
],
"year": 2011,
"venue": "Inf. Retr",
"volume": "14",
"issue": "6",
"pages": "593--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyal Krikon and Oren Kurland. 2011. A study of the integration of passage-, document-, and cluster- based information for re-ranking search results. Inf. Retr., 14(6):593-616, December.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Relevance based language models",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '01",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko and W. Bruce Croft. 2001. Rele- vance based language models. In Proceedings of the 24th annual international ACM SIGIR confer- ence on Research and development in information retrieval, SIGIR '01, pages 120-127, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A short introduction to learning to rank",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "IEICE Transactions",
"volume": "",
"issue": "10",
"pages": "1854--1862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Li. 2011. A short introduction to learning to rank. IEICE Transactions, 94-D(10):1854-1862.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Essentials of Metaheuristics. Lulu, version 1.2 edition",
"authors": [
{
"first": "Sean",
"middle": [
"Luke"
],
"last": "",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Luke. 2009. Essentials of Metaheuristics. Lulu, version 1.2 edition. Available for free at http://cs.gmu.edu/\u223csean/book/metaheuristics/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A hidden markov model information retrieval system",
"authors": [
{
"first": "David",
"middle": [
"R",
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Leek",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SI-GIR '99",
"volume": "",
"issue": "",
"pages": "214--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R. H. Miller, Tim Leek, and Richard M. Schwartz. 1999. A hidden markov model informa- tion retrieval system. In Proceedings of the 22nd annual international ACM SIGIR conference on Re- search and development in information retrieval, SI- GIR '99, pages 214-221, New York, NY, USA. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hierarchical Pitman-Yor language model for information retrieval",
"authors": [
{
"first": "Saeedeh",
"middle": [],
"last": "Momtazi",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '10",
"volume": "",
"issue": "",
"pages": "793--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saeedeh Momtazi and Dietrich Klakow. 2010. Hi- erarchical Pitman-Yor language model for informa- tion retrieval. In Proceedings of the 33rd inter- national ACM SIGIR conference on Research and development in information retrieval, SIGIR '10, pages 793-794, New York, NY, USA. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Effective term weighting for sentence retrieval",
"authors": [
{
"first": "Saeedeh",
"middle": [],
"last": "Momtazi",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 14th European conference on Research and advanced technology for digital libraries, ECDL'10",
"volume": "",
"issue": "",
"pages": "482--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saeedeh Momtazi, Matthew Lease, and Dietrich Klakow. 2010. Effective term weighting for sen- tence retrieval. In Proceedings of the 14th Euro- pean conference on Research and advanced technol- ogy for digital libraries, ECDL'10, pages 482-485, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tackling the poor assumptions of naive bayes text classifiers",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Teevan",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Karger",
"suffix": ""
}
],
"year": 2003,
"venue": "ICML'03",
"volume": "",
"issue": "",
"pages": "616--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Rennie, Lawrence Shih, Jaime Teevan, and David R. Karger. 2003. Tackling the poor assump- tions of naive bayes text classifiers. In ICML'03, pages 616-623.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The probabilistic relevance framework: Bm25 and beyond",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "Found. Trends Inf. Retr",
"volume": "3",
"issue": "",
"pages": "333--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Found. Trends Inf. Retr., 3:333-389, April.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pivoted document length normalization",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 19th annual international ACM SI-GIR conference on Research and development in information retrieval, SIGIR '96",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Singhal, Chris Buckley, and Mandar Mitra. 1996. Pivoted document length normalization. In Pro- ceedings of the 19th annual international ACM SI- GIR conference on Research and development in in- formation retrieval, SIGIR '96, pages 21-29, New York, NY, USA. ACM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lightening the load of document smoothing for better language modeling retrieval",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Smucker",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, SI-GIR '06",
"volume": "",
"issue": "",
"pages": "699--700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D. Smucker and James Allan. 2006. Lighten- ing the load of document smoothing for better lan- guage modeling retrieval. In Proceedings of the 29th annual international ACM SIGIR conference on Re- search and development in information retrieval, SI- GIR '06, pages 699-700, New York, NY, USA. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An Investigation of Dirichlet Prior Smoothings Performance Advantage",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Smucker",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D. Smucker and James Allan. 2007. An In- vestigation of Dirichlet Prior Smoothings Perfor- mance Advantage. Technical report, Department of Computer Science, University of Massachusetts, Amherst.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A general language model for information retrieval",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the eighth international conference on Information and knowledge management, CIKM '99",
"volume": "",
"issue": "",
"pages": "316--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Song and W. Bruce Croft. 1999. A general lan- guage model for information retrieval. In Proceed- ings of the eighth international conference on In- formation and knowledge management, CIKM '99, pages 316-321, New York, NY, USA. ACM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ad hoc ir: not much room for improvement",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Trotman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Keeler",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, SIGIR '11",
"volume": "",
"issue": "",
"pages": "1095--1096",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Trotman and David Keeler. 2011. Ad hoc ir: not much room for improvement. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, SIGIR '11, pages 1095-1096, New York, NY, USA. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Modelbased feedback in the language modeling approach to information retrieval",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the tenth international conference on Information and knowledge management, CIKM '01",
"volume": "",
"issue": "",
"pages": "403--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengxiang Zhai and John Lafferty. 2001. Model- based feedback in the language modeling approach to information retrieval. In Proceedings of the tenth international conference on Information and knowl- edge management, CIKM '01, pages 403-410, New York, NY, USA. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A study of smoothing methods for language models applied to information retrieval",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM Trans. Inf. Syst",
"volume": "22",
"issue": "2",
"pages": "179--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Trans. Inf. Syst., 22(2):179-214, April.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Statistical language models for information retrieval a critical review",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2008,
"venue": "Found. Trends Inf. Retr",
"volume": "2",
"issue": "3",
"pages": "137--213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ChengXiang Zhai. 2008. Statistical language models for information retrieval a critical review. Found. Trends Inf. Retr., 2(3):137-213, March.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": "Dataset documents, test queries",
"content": "<table><tr><td>Data</td><td colspan=\"4\">Development Evaluation</td></tr><tr><td/><td colspan=\"4\">Docs Test Docs Test</td></tr><tr><td>fire en</td><td colspan=\"2\">21919 90</td><td colspan=\"2\">16075 60</td></tr><tr><td colspan=\"5\">ohsu trec 36890 63 196555 63</td></tr><tr><td>trec ap</td><td colspan=\"4\">47172 118 33474 80</td></tr><tr><td>trec cr</td><td>5063</td><td>38</td><td>4006</td><td>29</td></tr><tr><td colspan=\"3\">trec doe 10053 28</td><td>7717</td><td>10</td></tr><tr><td colspan=\"3\">trec fbis 23207 68</td><td colspan=\"2\">17315 48</td></tr><tr><td>trec fr</td><td colspan=\"4\">25185 112 20581 75</td></tr><tr><td>trec ft</td><td colspan=\"4\">41452 113 30549 75</td></tr><tr><td>trec la</td><td colspan=\"2\">25944 87</td><td colspan=\"2\">17834 56</td></tr><tr><td>trec pt</td><td>1635</td><td>9</td><td>1792</td><td>5</td></tr><tr><td colspan=\"2\">trec sjmn 9160</td><td>29</td><td>6469</td><td>19</td></tr><tr><td colspan=\"3\">trec wsj 21847 60</td><td colspan=\"2\">15839 41</td></tr><tr><td>trec zf</td><td colspan=\"2\">19901 60</td><td colspan=\"2\">13763 39</td></tr><tr><td colspan=\"3\">random search algorithm (</td><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Ranking effectiveness as % MAP@50.",
"content": "<table><tr><td/><td>DP</td><td>2SS PYP PYP PYP</td></tr><tr><td/><td/><td>+TI</td><td>+TI</td></tr><tr><td>Dataset</td><td/><td>+FB</td></tr><tr><td>fire en</td><td colspan=\"2\">44.44 44.46 45.16 44.68 48.04</td></tr><tr><td colspan=\"3\">ohsu trec 29.73 29.72 28.77 31.24 32.33</td></tr><tr><td>trec ap</td><td colspan=\"2\">22.76 23.05 24.41 24.91 28.55</td></tr><tr><td>trec cr</td><td colspan=\"2\">17.03 17.17 18.02 17.88 19.47</td></tr><tr><td colspan=\"3\">trec doe 26.49 24.97 30.58 30.98 34.66</td></tr><tr><td colspan=\"3\">trec fbis 23.51 23.57 24.66 26.14 28.81</td></tr><tr><td>trec fr</td><td colspan=\"2\">18.42 18.53 18.72 18.86 19.68</td></tr><tr><td>trec ft</td><td colspan=\"2\">23.26 23.55 24.65 23.73 24.80</td></tr><tr><td>trec la</td><td colspan=\"2\">18.05 19.27 19.06 20.43 20.78</td></tr><tr><td>trec pt</td><td colspan=\"2\">13.23 11.57 11.64 22.45 27.53</td></tr><tr><td colspan=\"3\">trec sjmn 20.84 21.47 20.27 16.83 17.12</td></tr><tr><td colspan=\"3\">trec wsj 32.00 32.44 33.77 34.53 38.41</td></tr><tr><td>trec zf</td><td colspan=\"2\">17.92 18.48 17.54 19.52 20.97</td></tr><tr><td>mean</td><td colspan=\"2\">23.67 23.71 24.40 25.55 27.78</td></tr><tr><td colspan=\"3\">Several LM improvements have also been</td></tr><tr><td colspan=\"3\">developed that require considerable additional</td></tr><tr><td colspan=\"3\">computation. Methods such as document neigh-</td></tr><tr><td colspan=\"3\">borhood smoothing, passage-based language</td></tr><tr><td colspan=\"3\">models, word correlation models and bigram</td></tr><tr><td colspan=\"3\">language models have all been shown to substan-</td></tr><tr><td colspan=\"3\">tially improve LM performance</td></tr></table>",
"html": null,
"num": null
}
}
}
}