{
"paper_id": "C02-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:20:08.914425Z"
},
"title": "The Effectiveness of Dictionary and Web-Based Answer Reranking",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "cyl@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe an in-depth study of using a dictionary (WordNet) and web search engines (Altavista, MSN, and Google) to boost the performance of an automated question answering system, Webclopedia, in answering definition questions. The results indicate applying dictionary and web-based answer reranking together increase the performance of Webclopedia on a set of 102 TREC-10 definition questions by 25% in mean reciprocal rank score and 14% in finding answers in the top 5.",
"pdf_parse": {
"paper_id": "C02-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe an in-depth study of using a dictionary (WordNet) and web search engines (Altavista, MSN, and Google) to boost the performance of an automated question answering system, Webclopedia, in answering definition questions. The results indicate applying dictionary and web-based answer reranking together increase the performance of Webclopedia on a set of 102 TREC-10 definition questions by 25% in mean reciprocal rank score and 14% in finding answers in the top 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In an attempt to further progress in information retrieval research, the Text REtrieval Conference (TREC) sponsored by the National Institute of Standards and Technology (NIST) started a series of large-scale evaluations of domain independent automated question answering systems in TREC-8 (Voorhees 2000) and continued in TREC-9 and TREC-10. NTCIR (NII-NACSIS Test Collection for IR Systems, TREC's counterpart in Japan) initiated its question answering evaluation effort, Question Answering Challenge (QAC) in (Fukumoto et al. 2001 . Research systems participating in TRECs and the coming QAC focused on the problem of answering closed-class questions that have short fact-based answers (\"factoids\") from a large collection of text. These systems bear a similar structure:",
"cite_spans": [
{
"start": 290,
"end": 305,
"text": "(Voorhees 2000)",
"ref_id": "BIBREF16"
},
{
"start": 512,
"end": 533,
"text": "(Fukumoto et al. 2001",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "(1) Question analysis -identify question keywords to be submitted to search engines (local or web), recognize question types, and suggest expected answer types. Although most systems rely on a taxonomy of expected answer types, the number of nodes in the taxonomy varies widely from single digits to a few thousands. For example, Abney et al. (2000) used 5; Ittycheriah et al. (2001), 31; Hovy et al. (2001), 140; Harabagiu et al. (2001) , 8,797. These taxonomies were mostly based on named entities and WordNet (Fellbaum 1998) . Special types such definition questions (ex: \"What is an atom?\") were added as necessary.",
"cite_spans": [
{
"start": 330,
"end": 357,
"text": "Abney et al. (2000) used 5;",
"ref_id": null
},
{
"start": 358,
"end": 388,
"text": "Ittycheriah et al. (2001), 31;",
"ref_id": null
},
{
"start": 389,
"end": 413,
"text": "Hovy et al. (2001), 140;",
"ref_id": null
},
{
"start": 414,
"end": 437,
"text": "Harabagiu et al. (2001)",
"ref_id": null
},
{
"start": 512,
"end": 527,
"text": "(Fellbaum 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "(2) Passage or Sentence retrieval -this aims to provide a text pool of manageable size for extracting candidate answers. Most top performing systems in TRECs use their own retrieval methods for passages (Brill et al. 2001; Clarke et al. 2001; Harabagiu et al. 2001) or sentences (Hovy et al. 2001) .",
"cite_spans": [
{
"start": 203,
"end": 222,
"text": "(Brill et al. 2001;",
"ref_id": "BIBREF2"
},
{
"start": 223,
"end": 242,
"text": "Clarke et al. 2001;",
"ref_id": null
},
{
"start": 243,
"end": 265,
"text": "Harabagiu et al. 2001)",
"ref_id": null
},
{
"start": 279,
"end": 297,
"text": "(Hovy et al. 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "(3) Candidate answer extraction -extract candidate answers according to answer types. If the expected answer types are typical named entities, information extraction engines (Bikel et al. 1999, Srihari and Li 2000) are used to extract candidate answers. Otherwise special answer patterns are used to pinpoint answers. For example, Soubbotin and Soubbotin (2001) create a set of 6 answer patterns for definition questions. (4) Answer ranking -assign scores to candidate answers according to their frequency in top ranked passages (Abney et al. 2000; Clarke et al. 2001) , similarity to candidate answers extracted from external sources such as the web (Brill et al. 2001; Buchholz 2001) or WordNet (Harabagiu et al. 2001; Hovy et al. 2001) , density, distance, or order of question keywords around the candidates, similarity between the dependency structures of questions and candidate answers (Harabagiu et al. 2001; Hovy et al. 2001; Ittycheriah et al. 2001) , and match of expected answer types. In this paper, we describe an in-depth study of answer reranking for definition questions. Definition questions account for over 100 (20%) test questions in TREC-10. They are not named entities that have been the cornerstones of many high performance QA systems (Srihari and Li 2000; Harabagiu et al. 2001) . By reranking we mean the following. Assume a QA system such as Webclopedia (Section 3) provides an initial set of ranked candidate answers from the TREC corpus. The ranking is based on the IR engine's passage or sentence match scores. One can then measure the effectiveness of utilizing resources such as WordNet or the web to rerank the initial results, hoping to achieve better mean reciprocal rank (MRR) and percent of correctness in the top 5 (PTC5). Answer reranking is often overlooked. 
The answer candidates (<= 400 instances per question) generated by Webclopedia from TREC corpus included answers for 83% of 102 definition questions used in this study (the TREC-10 definition questions). However, Webclopedia ranked only 64% of them in the top 5, giving an MRR score of 45%. If a perfect answer reranking function had been used, the best achievable MRR would have been 83% (an 84% increase over the original 45%). Section 2 gives a brief overview of TREC-10. Section 3 outlines the Webclopedia system. Section 4 defines definition questions and describes our dictionary and web-based reranking methods. Section 5 presents experiments and results. We conclude with lessons learned and future work.",
"cite_spans": [
{
"start": 174,
"end": 205,
"text": "(Bikel et al. 1999, Srihari and",
"ref_id": null
},
{
"start": 206,
"end": 214,
"text": "Li 2000)",
"ref_id": "BIBREF15"
},
{
"start": 529,
"end": 548,
"text": "(Abney et al. 2000;",
"ref_id": "BIBREF0"
},
{
"start": 549,
"end": 568,
"text": "Clarke et al. 2001)",
"ref_id": null
},
{
"start": 651,
"end": 670,
"text": "(Brill et al. 2001;",
"ref_id": "BIBREF2"
},
{
"start": 671,
"end": 685,
"text": "Buchholz 2001)",
"ref_id": "BIBREF3"
},
{
"start": 697,
"end": 720,
"text": "(Harabagiu et al. 2001;",
"ref_id": null
},
{
"start": 721,
"end": 738,
"text": "Hovy et al. 2001)",
"ref_id": "BIBREF10"
},
{
"start": 893,
"end": 916,
"text": "(Harabagiu et al. 2001;",
"ref_id": null
},
{
"start": 917,
"end": 934,
"text": "Hovy et al. 2001;",
"ref_id": "BIBREF10"
},
{
"start": 935,
"end": 959,
"text": "Ittycheriah et al. 2001)",
"ref_id": "BIBREF12"
},
{
"start": 1260,
"end": 1281,
"text": "(Srihari and Li 2000;",
"ref_id": "BIBREF15"
},
{
"start": 1282,
"end": 1304,
"text": "Harabagiu et al. 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The main task of the TREC-10 (Voorhees and Harman 2002) QA track required participants to return a ranked list of five answers of no more than 50 bytes long per question that were supported by the TREC-10 QA text collection. The TREC-10 QA document collection consists of newspaper and newswire articles on TREC disks 1 to 5. It contains about 3 GB of texts. Test questions were drawn from filtered MSNSearch and AskJeeves logs. NIST assessors then sifted 500 questions from the filtered logs as test set. The questions were closed-class fact-based (\"factoid\") questions such as \"How far is it from Denver to Aspen?\" and \"What is an atom?\". Mean reciprocal rank (MRR) was used as the indicator of system performance. Each question receives a score as the reciprocal of the rank of the first correct answer in the 5 submitted responses. No score is given if none of the 5 responses contain a correct answer. MRR is then computed for a system by taking the mean of the reciprocal ranks of all questions. Besides MRR score, we are also interested in learning how well a system places a correct answer within the five responses regardless of its rank. We called this percent of correctness in the top 5 (PCT5). PCT5 is a precision related metric and indicates the upper bound that a system can achieve if it always places the correct answer as its first response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TREC-10 Q&A Track",
"sec_num": null
},
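Both metrics above can be computed directly from the ranked responses. The following sketch is illustrative (the function and variable names are ours, not from the paper): MRR averages the reciprocal rank of the first correct answer over all questions, while PCT5 is the fraction of questions with any correct answer among the five responses.

```python
def mrr_and_pct5(responses_per_question, gold_answers):
    """MRR: mean reciprocal rank of the first correct answer
    (0 for a question if none of the top 5 responses is correct).
    PCT5: fraction of questions with at least one correct answer
    among the 5 submitted responses."""
    rr_sum, hits = 0.0, 0
    for responses, gold in zip(responses_per_question, gold_answers):
        for rank, answer in enumerate(responses[:5], start=1):
            if answer in gold:
                rr_sum += 1.0 / rank
                hits += 1
                break
    n = len(responses_per_question)
    return rr_sum / n, hits / n
```

For example, over three questions answered at rank 1, rank 2, and not at all, MRR = (1 + 1/2 + 0)/3 = 0.5 and PCT5 = 2/3.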
{
"text": "Webclopedia's architecture follows the principle outlined in Section 1. We briefly describe each stage in the following. Please refer to (Hovy et al. 2002) for more detail.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Hovy et al. 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
{
"text": "(1) Question Analysis: We used an in-house parser, CONTEX (Hermjakob 2001) , to parse and analyze questions and relied on BBN's IdentiFinder (Bikel et al., 1999) to provide basic named entity extraction capability.",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "(Hermjakob 2001)",
"ref_id": "BIBREF10"
},
{
"start": 141,
"end": 161,
"text": "(Bikel et al., 1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
{
"text": "(2) Document Retrieval/Sentence Ranking: The IR engine MG (Witten et al. 1994 ) was used to return at least 500 documents using Boolean queries generated from the query formation stage. However, fewer than 500 documents may be returned when very specific queries are given. To decrease the amount of text to be processed, the documents were broken into sentences. Each sentence was scored using a formula that rewards word and phrase overlap with the question and expanded query words. The ranked sentences were then filtered by expected answer types (ex: dates, metrics, and countries) and fed to the answer extraction module.",
"cite_spans": [
{
"start": 55,
"end": 77,
"text": "MG (Witten et al. 1994",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
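Webclopedia's actual scoring formula is not reproduced in this text; under that caveat, a minimal stand-in for an overlap-based sentence scorer might look as follows (the weights and names are illustrative assumptions, not the system's):

```python
def sentence_score(sentence, question_words, expanded_words):
    """Toy overlap scorer: reward matches with the original question
    words more heavily than matches with expanded query words.
    The 2.0/1.0 weights are arbitrary illustrative choices."""
    tokens = set(sentence.lower().split())
    return (2.0 * len(tokens & question_words)
            + 1.0 * len(tokens & expanded_words))
```

Sentences would then be sorted by this score, filtered by expected answer type, and the top N passed to answer extraction.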
{
"text": "(3) Candidate Answer Extraction: We again used CONTEX to parse each of the top N sentences, marked candidate answers by named entities and special answer patterns such as definition patterns, and then started the ranking process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
{
"text": "(4) Answer Ranking: For each candidate answer several steps of matching were performed. The matching process considered question keyword overlaps, expected answer types, answer patterns, semantic type, and the correspondence of question and answer parse trees. Scores were given according to the goodness of the matching. The candidate answers' scores were compared and ranked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
{
"text": "(5) Answer Reranking, Duplication Removal, and Answer output: For some special question type such as definition questions (e.g., \"What is cryogenics?\"), we used WordNet glosses or web search results to rerank the answers. Duplicate answers were removed and only one instance was kept to increase coverage. The best 5 answers were output. Answer reranking is the main topic of this paper. Section 4 presents these methods in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Webclopedia: An Automated Question Answering System",
"sec_num": "3"
},
{
"text": "Compared to other question types, definition questions are special. They are typically very short and in the form of \"What is|are (a|an) X?\", where X is a 1 to 3 words term 1 , for example: \"What is autism?\", \"What is spider veins?\" and \"What is bangers and mash?\". As we learned from past TREC experience, it was more difficult to find relevant documents for short queries. As stated earlier, over 20% of questions in TREC-10 were of definition type, which was a reflection of real user queries mined from the web search engine logs (Voorhees 2001) . Several top performing systems in the evaluation treated this type of question as a special category and most of them used definition answer patterns. The best performing system, InsightSoft-M, (Soubbotin and Soubbotin 2001) used a set of six definition patterns including P1:{<Q; is/are; [a/an/the]; A>, <A; is/are; [a/an/the]; Q>} and P2:{<Q; comma; [a/an/the]; A; [comma/period]>, <A; comma; [a/an/the]; Q; [comma/period]>}, where Q is the term to be defined and A is the candidate answer. The InsightSoft-M system returned 88 correct responses based on these patterns. The runner up system (Harabagiu et al. 2001 ) used 12 answer patterns with extension of WordNet hypernyms. They did not report their success rate for TREC-10 but according to Pa\u015fca (2001) 2 , this set 1 Among the 102 TREC-10 definition questions, 81 asked the definition of one word; 19, two words; 2, three words. 2 Among them 31 were extracted through pattern of patterns with WordNet extension extracted 59 out of 67 definition questions in TREC-8 and TREC-9. The success stories of these systems indicated that carefully crafted answer patterns were effective in candidate answer extraction. However, just applying answer patterns blindly might lead to disastrous results, as shown by Hermjakob (2002) , since correct and incorrect answers were equally likely to match these patterns. 
For example, for the question \"What is autism?\", the following answers are found in the TREC-10 corpus using the patterns described by the InsightSoft-M system:",
"cite_spans": [
{
"start": 534,
"end": 549,
"text": "(Voorhees 2001)",
"ref_id": "BIBREF17"
},
{
"start": 1146,
"end": 1168,
"text": "(Harabagiu et al. 2001",
"ref_id": null
},
{
"start": 1300,
"end": 1312,
"text": "Pa\u015fca (2001)",
"ref_id": "BIBREF14"
},
{
"start": 1814,
"end": 1830,
"text": "Hermjakob (2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition Questions",
"sec_num": "4.1"
},
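The P1 and P2 pattern templates above can be approximated with regular expressions. The sketch below is a loose approximation under our own simplifying assumptions; the real InsightSoft-M patterns are considerably more elaborate:

```python
import re

def definition_candidates(term, sentence):
    """Rough sketch of copular (P1: '<Q> is/are [a/an/the] <A>') and
    appositive (P2: '<Q>, [a/an/the] <A>,') definition patterns.
    Returns candidate answer strings found in the sentence."""
    t = re.escape(term)
    patterns = [
        # P1: "<Q> is/are [a/an/the] <A>"
        rf"\b{t}\s+(?:is|are)\s+(?:(?:a|an|the)\s+)?(?P<a>[^.,;]+)",
        # P2: "<Q>, [a/an/the] <A>," (apposition)
        rf"\b{t}\s*,\s*(?:(?:a|an|the)\s+)?(?P<a>[^.,;]+)[.,]",
    ]
    found = []
    for pattern in patterns:
        for match in re.finditer(pattern, sentence, re.IGNORECASE):
            found.append(match.group("a").strip())
    return found
```

For instance, `definition_candidates("autism", "Autism is a mental disorder.")` extracts "mental disorder" via P1, while an appositive sentence triggers P2; exactly as the text notes, such patterns match good and bad definitions alike.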
{
"text": "\u2460 autism Q , a nourishing A , equivocal \u2026 \u2461 autism Q , the disorder is A , in fact, \u2026 \u2462 autism Q , the discovery could open new approaches for treating t A he \u2026 \u2463 autism Q is a mental disorder that is a \"severely incapacitatin A g \u2026 \u2464 autism Q , the inability to communicate with others A . Obviously, patterns alone cannot distinguish which one is the best answer. Some other mechanisms are necessary. We propose two different methods to solve this problem. One is a dictionary-based method using WordNet glosses and the other is to go directly to the web and compile web glosses on the fly to help select the best answers. The effect of combining both methods was also studied. We describe these two methods in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition Questions",
"sec_num": "4.1"
},
{
"text": "Using a dictionary to look up the definition of a term is the most straightforward solution for answering definition questions. For example, the definition of autism in the WordNet is: \"an abnormal absorption with the self; marked by communication disorders and short attention span and inability to treat others as people\". However, we need to find a candidate answer string from the TREC-10 corpus that is equivalent to this definition. By inspection, we find that candidate answers \u2461, \u2463, and \u2464 shown in the previous section are more compatible to the definition and \u2464 seems to be the best one. To automate the decision process, we construct a definition database based on the WordNet noun matching and 27 were from WordNet hypernym expansion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
{
"text": ". glosses. Closed class words are thrown away and each word w i in the glosses is assigned a gloss weight wn i s as follows 3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
{
"text": ") 1 / log( + = i wn i n N s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
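The gloss weight above is an IDF-style statistic over the gloss vocabulary: rare gloss words get high weights, frequent ones low weights. A small sketch of this computation (the function name and the toy gloss corpus are ours; the paper computes the counts over all WordNet noun glosses):

```python
import math
from collections import Counter

def gloss_weights(glosses):
    """s_i^wn = log(N / (n_i + 1)), where n_i is the number of times
    word w_i occurs across the glosses and N is the total number of
    gloss-word tokens. Rare words receive higher weights."""
    counts = Counter(word for gloss in glosses for word in gloss.lower().split())
    total = sum(counts.values())
    return {word: math.log(total / (count + 1)) for word, count in counts.items()}
```

With glosses ["a b", "b c"], N = 4 and the twice-occurring "b" gets log(4/3), lower than the log(4/2) assigned to the singletons.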
{
"text": "where n i is the number of times word w i occurring in the WordNet noun glosses and N is total number of occurrences of all noun gloss words in the WordNet. The goodness of the matching M wn for each candidate answer is simply the sum of the weight of the matched word stems between its WordNet definition and itself. For example, candidate answer \u2464 and autism's WordNet definition have these matches: {inability 5 \u21d4 inability wn , communicate 5 \u21d4 communication wn , others 5 \u21d4 others wn }. The reranking score S wn for each candidate answer is its original score multiplied by M wn . The final ranking is then sorted according to S wn , duplicate answers are removed, and the top 5 answers are output. Table 1 shows the top 5 answers returned before and after applying dictionary-based reranking. It demonstrates that dictionary-based reranking not only pushes the best answer to the first place but also boosts other lower ranked good answers i.e. \"a mental disorder\" to the second place. Harabagiu et al. (2001) also used WordNet to assist in answering definition questions. However, they took the hypernyms of the term to be defined as the default answers while we used its glosses. The hypernym of \"autism\" is \"syndrome\". In this case it would not boost the desired answer to the top but it would instead \"validate\" \"Down's syndrome\" as a good answer. Further research is needed to investigate the tradeoff between using hypernyms and glosses. WordNet glosses were incorporated in IBM's statistical question answering system as definition features (Ittycheriah et al. 2001 ).",
"cite_spans": [
{
"start": 991,
"end": 1014,
"text": "Harabagiu et al. (2001)",
"ref_id": null
},
{
"start": 1553,
"end": 1577,
"text": "(Ittycheriah et al. 2001",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 703,
"end": 710,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
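The M_wn and S_wn computation described above can be sketched as follows. This is an illustrative simplification: plain lowercase tokens stand in for the paper's word stems, and all names are ours:

```python
def rerank_with_gloss(candidates, gloss, weights):
    """candidates: list of (answer_text, original_score) pairs.
    M_wn is the summed gloss weight of tokens shared between a
    candidate and the WordNet gloss; S_wn = original_score * M_wn.
    Plain tokens stand in for the paper's word stems."""
    gloss_tokens = set(gloss.lower().split())
    scored = []
    for text, score in candidates:
        matched = set(text.lower().split()) & gloss_tokens
        m_wn = sum(weights.get(token, 0.0) for token in matched)
        scored.append((text, score * m_wn))
    # Sort by S_wn; duplicates would be removed before output in practice.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:5]
```

Note how a low-scoring candidate that overlaps the gloss (as "the inability to communicate with others" does for autism) can overtake a higher-scoring candidate with no gloss overlap.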
{
"text": "However, they did not report the effectiveness of the features in definition answer extraction. Out of vocabulary words is the major problem of dictionary-based reranking. For example, no WordNet entry is found for \"e-coli\" but searching the term \"e-coli\" at www.altavista.com and www.google.com yield the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
{
"text": "\u2022 E. coli is a food borne illness. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary-Based Reranking",
"sec_num": "4.2"
},
{
"text": "The World Wide Web contains massive amounts of information covering almost any thinkable topic. The TREC-10 questions are typical instances of queries for which users tend to believe answers can be found from the web. However, the candidate answers extracted from the web have to find support in the TREC-10 corpus in order to be judged as correct otherwise they will be marked as unsupported. The search results of \"e-coli\" from two online search engines indicate that \"e-coli\" is an abbreviation for the bacteria Escherichia. However, to automatically identify \"e-coli\" as \"Escherichia\" from these two pages is the same QA problem that we set off to resolve. The only advantage of using the web instead of just the TREC-10 corpus is the assumption that the web contains many more redundant candidate answers due to its huge size. Compared to Table 1 . Top 5 answers returned before (Webclopedia) and after (Webclopedia + WordNet) dictionary-based answer reranking for question \"What is autism?\". A \"-\" indicates wrong answers and a \"+\" indicates correct answers.",
"cite_spans": [],
"ref_spans": [
{
"start": 844,
"end": 851,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web-Based Reranking",
"sec_num": "4.3"
},
{
"text": "Webclopedia + WordNet 1 -Down's syndrome + the inability to communicate with others 2 -mental retardation + a mental disorder 3 + the inability to communicate with others -NIL 4 -NIL -Down's syndrome 5 -a group of similar-looking diseases -mental retardation .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autism Webclopedia",
"sec_num": null
},
{
"text": "Google's 2,073,418,204 web pages 4 , TREC-10 corpus contains only about 979,000 articles. For a given question, we first query the web, apply answer extraction algorithms over a set of top ranked web pages (usually in the lower hundreds), and then rank candidate answers according to their frequency in the set. This assumes the more a candidate answer occurs in the set the more likely it is the correct answer. Clarke et al. (2001) and Brill et al. (2001) both applied this principle and achieved good results. Instead of using Webclopedia to extract candidate answers from the web and then project back to the TREC-10 corpus, we treat the web as a huge dynamic dictionary. We compile web glosses on the fly for each definition question and apply the same reranking procedure used in the dictionary-based method. We detail the procedure in the following.",
"cite_spans": [
{
"start": 413,
"end": 433,
"text": "Clarke et al. (2001)",
"ref_id": null
},
{
"start": 438,
"end": 457,
"text": "Brill et al. (2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Autism Webclopedia",
"sec_num": null
},
{
"text": "(1) Query a search engine (e.g., Altavista) with the term (e.g., \"e-coli\") to be defined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autism Webclopedia",
"sec_num": null
},
{
"text": "(2) Download the first R pages (e.g., R = 70). This was the number that Google (www.google.com) advertised at its front page as of January 31, 2002. 5 This is essentially TFIDF (product of term frequency and inverse document frequency) used in the information retrieval research.",
"cite_spans": [
{
"start": 139,
"end": 150,
"text": "31, 2002. 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Autism Webclopedia",
"sec_num": null
},
{
"text": "for each candidate answer is simply the sum of the weights of the matched word stems between its web gloss definition and itself. Only words with gloss weight T s web i \u2265 are used to compute M web . The value of T serves as a cut-off threshold to filter out low confidence words. (6) The reranking score S web for each candidate answer is its original score multiplied by M web . The final ranking is then sorted according to S web , duplicate answers are removed, and the top 5 answers are output. Table 2 shows the top 5 answers returned before and after applying web-based reranking for the question \"What is Wimbledon?\". Google was used as the search engine with T=5, W=10, and R=70.",
"cite_spans": [],
"ref_spans": [
{
"start": 499,
"end": 506,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Autism Webclopedia",
"sec_num": null
},
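The intermediate steps of the procedure are garbled in this copy, but the overall flow (context windows of W words around the term, TF-IDF-style weighting per footnote 5, threshold T) can be sketched. This is our reconstruction under stated assumptions: the exact weighting formula is described only as "essentially TFIDF", so the n_i * log(total/n_i) form below is a guess, and all names are illustrative:

```python
import math
import re

def compile_web_gloss(term, pages, window=10):
    """Gather a +/- `window`-word context around each occurrence of
    `term` in the downloaded pages; the pooled contexts serve as the
    term's 'web gloss' (the paper's W parameter)."""
    gloss = []
    for page in pages:
        words = re.findall(r"[a-z0-9']+", page.lower())
        for i, word in enumerate(words):
            if word == term.lower():
                gloss.extend(words[max(0, i - window): i + window + 1])
    return gloss

def m_web(candidate, gloss_counts, total, threshold=5.0):
    """M_web: sum of TF-IDF-style weights s_i = n_i * log(total / n_i)
    over candidate words, keeping only words with s_i >= threshold
    (the paper's cut-off T). The exact weighting is an assumption."""
    score = 0.0
    for word in set(candidate.lower().split()):
        n = gloss_counts.get(word, 0)
        if n > 0:
            weight = n * math.log(total / n)
            if weight >= threshold:
                score += weight
    return score
```

The candidate's final S_web would then be its original Webclopedia score multiplied by this M_web, mirroring the dictionary-based method.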
{
"text": "We used a set of 102 definition questions from TREC-10 QA track as our test set. The performance of Webclopedia without dictionary or web-based answer reranking was used as the baseline. Webclopedia with dictionary-based answer reranking. To study the effect of using different search engines, context window sizes, number of top ranked web pages, and web gloss weight cut-off threshold on the performance of web-based answer reranking, we had the following setup:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": null
},
{
"text": "\u2022 Three search engines (E): Altavista (E A ), Google (E G ), and MSNSearch (E M ). \u2022 A run that combined all three search engines' results (E X ). \u2022 Two different context window sizes (W):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": null
},
{
"text": "5 (W 5 ) and 10 (W 10 ). \u2022 Eleven sets of top ranked web pages (R x ): top 5, 10, 20, 30, 40, 50, 60, 70, 80, 90 , and 100. \u2022 Two different gloss weight cut-off thresholds (T): 5 (T 5 ) and 10 (T 10 ). To investigate the performance of combining dictionary and web-based answer reranking, we ran the above setup again but each question's reranking score S web+wn was the multiplication of Table 2 . Top 5 answers returned before (Webclopedia) and after (Webclopedia + Google) web-based answer reranking for question \"What is Wimbledon?\". A \"-\" indicates wrong answers and a \"+\" indicates correct answers.",
"cite_spans": [
{
"start": 75,
"end": 77,
"text": "5,",
"ref_id": null
},
{
"start": 78,
"end": 81,
"text": "10,",
"ref_id": null
},
{
"start": 82,
"end": 85,
"text": "20,",
"ref_id": null
},
{
"start": 86,
"end": 89,
"text": "30,",
"ref_id": null
},
{
"start": 90,
"end": 93,
"text": "40,",
"ref_id": null
},
{
"start": 94,
"end": 97,
"text": "50,",
"ref_id": null
},
{
"start": 98,
"end": 101,
"text": "60,",
"ref_id": null
},
{
"start": 102,
"end": 105,
"text": "70,",
"ref_id": null
},
{
"start": 106,
"end": 109,
"text": "80,",
"ref_id": null
},
{
"start": 110,
"end": 112,
"text": "90",
"ref_id": null
}
],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": null
},
{
"text": "Webclopedia + Google (T=5, W=10, R=70) 1 -the French Open and the U.S. Open.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "+ the most famous front yard in tennis and scene 2 -SW20, which includes a Japanese-style water garden -the French Open and the U.S. Open. 3 + the most famous front yard in tennis and scene -NIL 4 -NIL -Sampras' biggest letdown of the year 5 -Sampras' biggest letdown of the year -Lawn Tennis & Croquet Club, home of the Wimbledon its original score, web-based matching score M web , and dictionary-based matching score M wn . A total of 354 runs were performed. Manual evaluation of these 354 runs was not impossible but would be time consuming. We instead used the answer patterns provided by NIST to score all runs automatically. Due to space constraint, Table 3 shows the (MRR, PCT5) score pair for 90 runs out of 352 runs. The other two runs were the baseline run with a score pair of (0.450, 0.637) and the dictionary-based run, (0.535, 0.667). The best run was the combined dictionary and web-based run using Google as the search engine with 10-word context window, 70 top ranked pages, and a gloss weight cut-off threshold of 5. Analyzing all runs according to Table 3 , we made the following observations.",
"cite_spans": [],
"ref_spans": [
{
"start": 658,
"end": 665,
"text": "Table 3",
"ref_id": null
},
{
"start": 1069,
"end": 1076,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "(1) Dictionary-based reranking improved baseline performance by 19% in MRR and 5% in PCT5 (MRR: 0.535, PCT5: 0.667).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "(2) The best web-based reranking (MRR: 0.539, PCT5: 0.676) was achieved with W=10, R=70, and T=5. It was comparable to the dictionary-based reranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "(3) Web-based reranking generally improved results. Only 6 runs 6 (not shown in the table) did worse in their MRR scores than just using Webclopedia alone and these runs concentrated on low ranked page counts of 5 and 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "(4) Different search engines reached their best performance at different parameter settings. Overall Google did better. (5) Combining multiple search engine results (runs designed with X and X+) did not always improve performance. In some cases, it even degraded system performance (E x T 5 W 10 R 70 : 0.519, 0.637). (6) Lower web gloss weight cut-off threshold was better at 5. (7) Longer context window was better at 10 (not shown in the table). (8) Taking top ranked pages of 50 to 90 pages provided better results. (9) Combining dictionary and web-based reranking always did better than using the web-based method alone. (10) Using WordNet and Google together was always better than just using WordNet alone in both MRR and PCT5 (the underlined cells).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wimbledon Webclopedia",
"sec_num": null
},
{
"text": "To investigate the effectiveness of using dictionary and web-based answer reranking on question of different difficulty, we define question difficulty as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Difficulty",
"sec_num": "5.1"
},
{
"text": ") / ( 1 N n d \u2212 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Difficulty",
"sec_num": "5.1"
},
{
"text": ", where n is the number of systems participating in TREC-10 that returned answers in top 5 and N is the number of total runs (that is, 67 for TREC-10). When d = 1 no systems provided an answer in top 5; while d = 0 if all runs provided at least one answer in top 5. Table 4 shows the improvement of MRR and PCT5 scores at four different question difficulty levels with four different system setups. The results indicate that using either dictionary or web-based answer reranking improved system performance at all levels. The best results were achieved when evidence from both resources was used. However, it also demonstrates the difficulty of improving performance on very hard questions (d>=0.75). This implies we might need to consider alternative methods to improve the system performance further. Table 3 . Results of 90 runs shown in (MRR, PCT5) score pair where A: Altavista, G: Google, M: MSNSearch, X: all three search engines, W: context window size, R: number of top ranked web paged used, T: web gloss weight cut-off threshold. Runs marked with '+' indicate both dictionary and web-based answer reranking are used. 
(0.503,0.667) (0.518,0.667) (0.538,0.676) (0.555,0.696) (0.548,0.696) A +T 10 (0.502,0.667) (0.515,0.667) (0.528,0.667) (0.528,0.676) (0.525,0.676) G T 5 (0.513,0.637) (0.515,0.647) (0.536,0.647) (0.539,0.676) (0.515,0.657) G T 10 (0.497,0.637) (0.503,0.647) (0.527,0.647) (0.523,0.657) (0.518,0.637) G +T 5 (0.551,0.686) (0.537,0.667) (0.557,0.676) (0.561,0.725) (0.547,0.706) G +T 10 (0.536,0.676) (0.530,0.676) (0.547,0.676) (0.544,0.706) (0.545,0.686) M T 5 (0.521,0.647) (0.513,0.627) (0.517,0.647) (0.514,0.637) (0.499,0.637) M T 10 (0.505,0.627) (0.499,0.608) (0.502,0.637) (0.488,0.627) (0.493,0.608) M +T 5 (0.543,0.676) (0.552,0.667) (0.544,0.676) (0.542,0.696) (0.533,0.676) M +T 10 (0.527,0.647) (0.537,0.647) (0.525,0.667) (0.519,0.696) (0.520,0.667) X T 5 (0.526,0.647) (0.539,0.676) (0.533,0.627) (0.519,0.637) * (0.515,0.627) X T 10 (0.509,0.618) (0.524,0.657) (0.532,0.627) (0.524,0.647) (0.517,0.637) X +T 5 (0.553,0.696) (0.551,0.696) (0.556,0.686) (0.550,0.696) (0.546,0.686) X +T 10 (0.531,0.657) (0.543,0.686) (0.555,0.686) (0.550,0.696) (0.546,0.686) .",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 4",
"ref_id": "TABREF1"
},
{
"start": 803,
"end": 810,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Difficulty",
"sec_num": "5.1"
},
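The difficulty measure d = 1 - n/N and the two evaluation scores used throughout this section can be sketched as follows. This is an illustrative sketch, not the authors' code; the function names and the rank-list representation (rank of the first correct answer per question, None if no answer appears in the top 5) are ours:

```python
def difficulty(n_correct_runs: int, total_runs: int) -> float:
    """d = 1 - n/N, where n is the number of runs that returned an
    answer in the top 5 and N is the total number of runs.
    d = 1.0 means no run answered in the top 5; d = 0.0 means all did."""
    return 1.0 - n_correct_runs / total_runs

def mean_reciprocal_rank(ranks) -> float:
    """MRR over questions: 1/rank of the first correct answer,
    or 0 when no correct answer appears in the top 5 (rank is None)."""
    return sum(0.0 if r is None else 1.0 / r for r in ranks) / len(ranks)

def pct5(ranks) -> float:
    """PCT5: fraction of questions with a correct answer in the top 5."""
    return sum(1 for r in ranks if r is not None) / len(ranks)

# For TREC-10 (N = 67 runs): a question no run answered has d = 1.0.
print(difficulty(0, 67))                        # 1.0
print(mean_reciprocal_rank([1, 2, None, 5]))    # (1 + 0.5 + 0 + 0.2) / 4 = 0.425
print(pct5([1, 2, None, 5]))                    # 0.75
```

Averaging `difficulty` over a question set and bucketing by the thresholds in Table 4 (d >= 0.25, 0.50, 0.75) reproduces the four difficulty levels reported above.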
{
"text": "We described dictionary-based answer reranking using WordNet, web-based answer reranking using three different online search engines, and their evaluations at various parameter settings on a set of 102 TREC-10 definition questions. We showed that using either approach alone improved MRR score by 19% and PCT5 score by 5% over the baseline. However, the best performance was achieved when both methods were used together. In that setting a 25% increase in MRR score and 14% improvement in PCT5 score were obtained. The difference on the best MRR and PCT5 scores (0.56 vs. 0.73) suggests neither dictionary-based nor web-based will solve the reranking problem completely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "To improve the performance further, we need better ways to compile web glosses and combine them with WordNet glosses. We also need a better combination function \uf8e7 a statistical model for combining patterns, dictionary, and web scores. We have started investigating the possibility of applying answer reranking to other question types and exploring specialized web resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "This is essentially inverse document (WordNet gloss entry) frequency (IDF) used in the information retrieval research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These were E A T 5 W 5 R 5 (0.437, 0.598), E A T 10 W 5 R 5 (0.434, 0.608), E A T 10 W 10 R 5 (0.437, 0.598), E M T 5 W 5 R 5 (0.436, 0.608), E M T 10 W 5 R 5 (0.438, 0.608), and E M T 10 W 10 R 5 (0.443, 0.618).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Answer Extraction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singhal",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Applied Natural Language Processing Conference (ANLP-NAACL-00)",
"volume": "",
"issue": "",
"pages": "296--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abney, S., M. Collins, and A. Singhal. 2000. Answer Extraction. In Proceedings of the Applied Natural Language Processing Conference (ANLP-NAACL-00), Seattle, WA, 296-301.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Algorithm that Learns What's in a Name",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning-Special Issue on NL Learning",
"volume": "34",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, D., R. Schwartz, and R. Weischedel. 1999. An Algorithm that Learns What's in a Name. In Machine Learning-Special Issue on NL Learning, 34, 1-3.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data-Intensive Question Answering",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the TREC-10 Conference. NIST",
"volume": "",
"issue": "",
"pages": "183--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill, E., J. Lin, M. Banko, S. Dumais, and A. Ng. 2001. Data-Intensive Question Answering. In Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 183-189.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using Grammatical Relations, Answer Frequencies and the World Wide Web for TREC Question Answering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the TREC-10 Conference. NIST",
"volume": "",
"issue": "",
"pages": "496--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buchholz, S. 2001. Using Grammatical Relations, Answer Frequencies and the World Wide Web for TREC Question Answering. In Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 496-503.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Web Reinforced Question Answering",
"authors": [
{
"first": "G",
"middle": [
"L"
],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mclearn",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the TREC-10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, and G.L. McLearn. 2001. Web Reinforced Question Answering. In Proceedings of the TREC-10",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Ch",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fukumoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Masui",
"suffix": ""
}
],
"year": 1998,
"venue": "NTCIR Workshop 3 QA Task -Question Answering Challenge (QAC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, Ch. (ed). 1998. WordNet: An Electronic Lexical Database. Cambridge: MIT Press. Fukumoto, J, T. Kato, and F. Masui. 2001. NTCIR Workshop 3 QA Task -Question Answering Challenge (QAC). http://research.nii.ac.jp/ntcir/ workshop/qac/cfp-en.html.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Open Questions Regarding Precision of the Insight Q&A System",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Buneascu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "G\u00eerju",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Morarescu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Workshop on Open-Domain Question Answering post-conference workshop of ACL-2001",
"volume": "",
"issue": "",
"pages": "479--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, M. Surdeanu, R. Buneascu, R. G\u00eerju, V. Rus and P. Morarescu. 2001. FALCON: Boosting Knowledge for Answer Engines. In Proceedings of the 9 th Text Retrieval Conference (TREC-9), NIST, 479-488. Hermjakob, U. 2001. Parsing and Question Classification for Question Answering. In Proceedings of the Workshop on Open-Domain Question Answering post-conference workshop of ACL-2001, Toulouse, France. Hermjakob, U. 2002. Open Questions Regarding Precision of the Insight Q&A System. Personal communication..",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Use of External Knowledge in Factoid QA",
"authors": [
{
"first": "E",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "C.-Y.",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the TREC-10 Conference. NIST",
"volume": "",
"issue": "",
"pages": "166--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, E.H., U. Hermjakob, and C.-Y. Lin. 2001. The Use of External Knowledge in Factoid QA. In Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 166-174.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using Knowledge to Facilitate Pinpointing of Factoid Answers",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ravichandran ; Ittycheriah",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravichandran. 2002. Using Knowledge to Facilitate Pinpointing of Factoid Answers. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei, Taiwan, August 24 -September 1, 2002. Ittycheriah, A., M. Franz, and S. Roukos. 2001.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "IBM's Statistical Question Answering System",
"authors": [],
"year": null,
"venue": "Proceedings of the TREC-10 Conference. NIST",
"volume": "",
"issue": "",
"pages": "317--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IBM's Statistical Question Answering System. In Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 317-323.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "High Performance, Open-Domain Question Answering from Large Text Collections",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the TREC-10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pa\u015fca, M. 2001. High Performance, Open-Domain Question Answering from Large Text Collections. Ph.D. dissertation, Southern Methodist University, Dallas, TX. Soubbotin, M.M. and S.M. Soubbotin. 2001. Patterns of Potential Answer Expressions as Clues to the Right Answer. In Proceedings of the TREC-10",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Question Answering System Supported by Information Extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Conference",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nist",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gaithersburg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srihari",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1 st Meeting of the North American Chapter of the Association for Computational Linguistics (ANLP-NAACL-00)",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference. NIST, Gaithersburg, MD, 175-182. Srihari, R. and W. Li. 2000. A Question Answering System Supported by Information Extraction. In Proceedings of the 1 st Meeting of the North American Chapter of the Association for Computational Linguistics (ANLP-NAACL-00), Seattle, WA, 166-172.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "NIST Special Publication 500-246. Voorhees, E. 2001. Overview of the Question Answering Track",
"authors": [
{
"first": "E",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Eighth Text REtrieval Conference (TREC-8)",
"volume": "",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voorhees, E.M. 1999. The TREC-8 Question and Answering Track Report. In Proceedings of the Eighth Text REtrieval Conference (TREC-8), pages 77-82, 2000. NIST Special Publication 500-246. Voorhees, E. 2001. Overview of the Question Answering Track. In Proceedings of the TREC-10",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Proceedings of the Eighth Text REtrieval Conference (TREC-10)",
"authors": [
{
"first": "",
"middle": [],
"last": "Conference",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nist",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gaithersburg",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
},
{
"first": "D",
"middle": [
"K"
],
"last": "Harman",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "71--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference. NIST, Gaithersburg, MD, 71-81. Voorhees, E.M. and D.K. Harman. 2001. The Proceedings of the Eighth Text REtrieval Conference (TREC-10), NIST. Witten, I.H., A. Moffat, and T.C. Bell. 1994. Managing Gigabytes: Compressing and Indexing Documents and Images. New York: Van Nostrand Reinhold.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(e.g., W = 10) words centered at the term to be defined from each page. Closed class words are ignored. These context words are used as candidate web glosses. is the frequency of c i w in the set of context words extracted in (3), N is the total number of training questions, and n i is the number of training questions in which c i w occurs. (5) The goodness of the matching M web4",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"text": "System",
"content": "<table><tr><td/><td>d&gt;=0.00 (102) d&gt;=0.25 (95) d&gt;=0.50 (71) d&gt;=0.75 (40)</td></tr><tr><td>F</td><td>(0.450,0.637) (0.394,0.611) (0.264,0.549) (0.084,0.375)</td></tr><tr><td>F+</td><td>(0.535,0.667) (0.474,0.642) (0.323,0.592) (0.100,0.375)</td></tr><tr><td>FG</td><td>(0.539,0.676) (0.475,0.653) (0.319,0.592) (0.125,0.375)</td></tr><tr><td>F+G</td><td>(0.561,0.725) (0.498,0.705) (0.333,0.648) (0.128,0.400)</td></tr><tr><td/><td>performance at different question</td></tr><tr><td colspan=\"2\">difficulty levels. (F: Webclopedia only, F+:</td></tr><tr><td colspan=\"2\">Webclopedia with WordNet, FG: Webclopedia</td></tr><tr><td colspan=\"2\">with Google, and F+G: Webclopedia with</td></tr><tr><td colspan=\"2\">WordNet and Google)</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}