{
"paper_id": "H01-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:02.991960Z"
},
"title": "Information Extraction with Term Frequencies",
"authors": [
{
"first": "T",
"middle": [
"R"
],
"last": "Lynam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Science University of Waterloo Ontario",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "C",
"middle": [
"L A"
],
"last": "Clarke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Science University of Waterloo Ontario",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "G",
"middle": [
"V"
],
"last": "Cormack",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Science University of Waterloo Ontario",
"location": {
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "H01-1036",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Every day, millions of people use the internet to answer questions. Unfortunately, at present, there is no simple and successful means to consistently accomplish this goal. One common approach is to enter a few terms from a question into a Web search system and scan the resulting pages for the answer, a laborious process. To address this need, a question answering (QA) system was created to find and extract answers from a corpus. This system contains three parts: a parser for generating question queries and categories, a passage retrieval element, and an information extraction (IE) component. The extraction method was designed to elicit answers from passages collected by the information retrieval engine. The subject of this paper is the information extraction component. It is based on the premise that information related to the answer will be found many times in a large corpus like the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The system was applied to the Question Answering Track at TREC-9 and achieved the second best results overall [3] . The information extraction and parsing components were new for TREC-9; the TREC-8 system solely used passage retrieval [4] . Each new component yielded greater than 10% improvement in mean reciprocal rank, TREC's standard evaluation measure.",
"cite_spans": [
{
"start": 110,
"end": 113,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 235,
"end": 238,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "In the sections that follow, the extraction component is described and evaluated according to its contribution to the system's effectiveness. In particular, this paper investigates the contribution of a voting scheme favouring terms found in many candidate passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Architecturally, the question answering system is simple. First, the parser analyses the question and generates a query for the passage retrieval component. It also provides selection rules for the information extraction component. Next, the passage retrieval component executes the query over the target corpus and retrieves a ranked list of passages for the information extraction component to process. Finally, the information extraction component extracts answers from the retrieved passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "The parser is a probabilistic version of Earley's algorithm. It determines all possible parses of the grammar and selects the most probable. The grammar contains only 80 production rules [3] .",
"cite_spans": [
{
"start": 187,
"end": 190,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "The passage retrieval component collects arbitrary substrings of a document in the corpus. These substrings are considered passages and given a score. Passage scores are based on the terms contained in the query and the passage length. Passages with a length of one thousand words were retrieved in the TREC-9 system. The information extraction component locates possible answers in the top ten passages. It then selects the best answer extracts of a predetermined length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "The overall approach of question analysis followed by IR succeeded by IE is nearly universal in QA systems [1, 2, 5, 6, 7, 8, 9] . The TREC-9 question answering track required the QA system to find solutions to 693 questions. Two different runs were judged: 50- and 250-byte answer extracts. Question answering systems were evaluated by the mean reciprocal answer rank (MRR). Five passages of the desired length are evaluated in order. The score is based on the rank of the first correct passage according to the formula:",
"cite_spans": [
{
"start": 107,
"end": 110,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 111,
"end": 113,
"text": "2,",
"ref_id": "BIBREF1"
},
{
"start": 114,
"end": 116,
"text": "5,",
"ref_id": "BIBREF4"
},
{
"start": 117,
"end": 119,
"text": "6,",
"ref_id": "BIBREF5"
},
{
"start": 120,
"end": 122,
"text": "7,",
"ref_id": "BIBREF6"
},
{
"start": 123,
"end": 125,
"text": "8,",
"ref_id": "BIBREF7"
},
{
"start": 126,
"end": 128,
"text": "9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
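The MRR measure described above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are our own):

```python
def mean_reciprocal_rank(first_correct_ranks):
    """Mean reciprocal rank over a question set.

    first_correct_ranks holds, for each question, the 1-based rank of
    the first correct answer among the five returned, or None when no
    correct answer appears in the top five (that question scores zero).
    """
    total = 0.0
    for rank in first_correct_ranks:
        if rank is not None:
            total += 1.0 / rank
    return total / len(first_correct_ranks)
```

For example, three questions answered at ranks 1 and 2 with one unanswered give (1 + 1/2 + 0) / 3 = 0.5.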
{
"text": "MRR = (1/n) \\sum_{i=1}^{n} 1/rank_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "If the answer is found at multiple ranks, the best (lowest) rank will be used. If an answer is not found in the top five, the score for that particular question is zero. The TREC-9 results reveal the improvements of the new components added to the system. The TREC-8 system was used as a baseline. With the combination of the parse-generated queries and the information extraction components, there is a total improvement of 106% and 25% for 50- and 250-byte runs respectively. The information extraction element has a greater impact when the answer is shorter as seen in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 572,
"end": 579,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "The algorithm requires a set of passages that are likely to contain an answer, and a category for each question. This algorithm is similar to the information extraction technique used in the GuruQA system [8] . The key to the algorithm is using term frequencies to give individual terms a score. Important information is uncovered by looking at repeated terms in a set of passages. In addition, terms are scored based on their recurrence in the corpus. The system applies very simple patterns to discover individual words or numbers, allowing the evaluation of the term's frequency. This method proceeds in the following sequence:",
"cite_spans": [
{
"start": 205,
"end": 208,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "1. Simplify the question category from the parser output. 2. Scan the passages for patterns matching the question category. 3. Assign each possible answer term an initial weight based on its rareness. 4. Modify each term weight depending on its distance from the centre and rank of the passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "5. Select the (50-byte or 250-byte TREC 9 format) answer that maximizes the sum of the terms' weight found within the passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "6. Set all terms' weight in the selected answer to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "7. Repeat steps 5 and 6 until five answers are selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
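Steps 5-7 above can be sketched as a greedy selection loop. This is a simplified model (windows of whole terms rather than 50- or 250-byte substrings; all names are illustrative):

```python
def select_answers(terms, weights, window=3, n_answers=5):
    """Repeatedly pick the window of `window` consecutive terms whose
    summed weight is highest (step 5), then zero those terms' weights
    (step 6) so the next pick must differ, until n_answers are chosen
    (step 7)."""
    weights = dict(weights)  # don't mutate the caller's weights
    answers = []
    for _ in range(n_answers):
        best_start, best_score = 0, float("-inf")
        for start in range(len(terms) - window + 1):
            score = sum(weights[t] for t in terms[start:start + window])
            if score > best_score:
                best_start, best_score = start, score
        answer = terms[best_start:best_start + window]
        answers.append(" ".join(answer))
        for t in answer:  # step 6: zero out the selected terms
            weights[t] = 0.0
    return answers
```

Because zeroed terms no longer contribute, near-duplicate windows of the first answer score poorly on the second pass, which is exactly the distinction the paper describes.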
{
"text": "The initial procedure simplifies the answer categories. The algorithm utilizes the question classification given by the parser in the following categories: Proper (person, name, company), Place (city, country, state), Time (date, time of day, weekday, month, duration, age), How (much, many, far, tall, etc.). The latter category is divided into sub-categories for monetary values, numbers, distances and other methods of measurement. Next, the passages are scanned using the patterns for the given question classification. The purpose of the patterns is to narrow the number of possible answers, which increases performance. It is important to note that the patterns do not contribute to the terms' weight. These simple patterns are regular expressions that have been hand-coded. For example, the pattern for Proper is [^A-Za-z][A-Z][A-Za-z]*[^A-Za-z0-9], which matches a capital letter followed by one or more letters surrounded by white space or punctuation. Each word in the passage either matches a pattern or not. Patterns do not stretch over more than one word. In the passage \"Bank of America\" only \"Bank\" and \"America\" would be considered possible answers. The algorithm can find the correct answer \"Bank of America\" by determining that \"Bank\" and \"America\" should be in the answer. When question classification is unknown, the term frequency for all words in the passages is computed. The system was evaluated using no question classification and still achieved an MRR of 0.338. With no classification, only the term frequency equation is utilized to evaluate answers. This confirms the power of the term frequency equation (1). The patterns for each question classification are very naive, so in theory, if the patterns were improved, the entire system would also improve.",
"cite_spans": [
{
"start": 1634,
"end": 1637,
"text": "(1)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
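A sketch of candidate filtering for the Proper category, assuming a simplified version of the hand-coded pattern described above (the helper name and punctuation handling are ours):

```python
import re

# Candidate extraction for the "Proper" category: a capitalized word,
# matched one word at a time (patterns never span multiple words).
PROPER = re.compile(r"[A-Z][A-Za-z]*")

def proper_candidates(passage):
    """Return each capitalized word as a possible answer term."""
    words = [w.strip('.,;:!?"') for w in passage.split()]
    return [w for w in words if PROPER.fullmatch(w)]
```

In "Bank of America" this keeps only "Bank" and "America"; the full phrase is recovered later when both terms score highly.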
{
"text": "Thirdly, the terms are differentiated by assigning each term a weight. The term weight is related to the term's rareness. The rarer the term, the higher the term's value. The power of the information extraction component is almost entirely derived from this step. Each term's weight is calculated by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w_t = f_t \\log(N / c_t)",
"eq_num": "(1)"
}
],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "where c_t is the number of times the term appears in the corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "f_t is the number of times the term appears in the set of passages, and N is the total number of terms in the corpus. Knowing the term's corpus frequency is important; however, the strength of the formula is drawn from the multiple occurrences of terms appearing in the retrieved passages. An answer extract containing \"Bank of America\" will most likely be selected if \"Bank\" and \"America\" have high term frequency values. Essentially, this calculation employs the corpus term frequency in conjunction with a voting scheme. The equation will reveal the rarest term in the corpus that occurs most often in the passages retrieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
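The term weight described above, passage-set frequency times the log of the term's corpus rarity, can be computed directly (a minimal sketch; the function and parameter names are our own):

```python
import math

def term_weight(passage_count, corpus_count, corpus_size):
    """Weight of a term: its frequency in the retrieved passages (the
    'voting' factor) times log(corpus_size / corpus_count), which is
    large for terms that are rare in the corpus."""
    return passage_count * math.log(corpus_size / corpus_count)

# A term seen 4 times in the passages but only 100 times in a
# 1,000,000-term corpus outweighs a common term seen 6 times:
rare = term_weight(4, 100, 1_000_000)       # 4 * log(10000)
common = term_weight(6, 50_000, 1_000_000)  # 6 * log(20)
```

The example illustrates the balance the paper relies on: repetition in the passages votes for a term, but only rarity in the corpus makes each vote count for much.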
{
"text": "The fourth step modifies the term weight depending on its location. The centre of a passage is defined as the centre of the query terms' locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "As a possible answer's distance from the centre increases, its relation to the query terms decreases. To utilize this information, the term weight is modified in conjunction with its distance from the centre of the passage. The farther from the centre, the more the term weight is decreased. The term value is then modified according to the ranking of the passage in which it was found; the lower the ranking, the more the term weight is decreased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "Step four is important because it distinguishes duplicate terms depending on each term's position. This means that if there are many duplications of a possible answer each one will have a different term weight. For example, the term \"Bank\" found in the best passage would have a higher term weight than a \"Bank\" term found in a lower ranking passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
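Step 4 can be sketched as follows. The paper specifies only the direction of both discounts, not their shape, so the linear distance decay and geometric rank decay below are illustrative assumptions:

```python
def adjust_weight(weight, distance, passage_rank,
                  distance_scale=500.0, rank_decay=0.9):
    """Discount a term's weight by its distance from the passage centre
    and by the rank of the passage it came from.  Both discount shapes
    are assumptions: a linear falloff over distance_scale words, and a
    geometric falloff per rank (rank 1 = best passage)."""
    distance_factor = max(0.0, 1.0 - distance / distance_scale)
    rank_factor = rank_decay ** (passage_rank - 1)
    return weight * distance_factor * rank_factor
```

Under this model, a "Bank" occurrence at the centre of the top passage keeps its full weight, while the same term far from the centre of a lower-ranked passage is discounted on both counts, which is how duplicates of one candidate end up with different weights.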
{
"text": "For TREC-9, the system was required to produce 50- and 250-byte substrings. Each substring is assigned a score equal to the sum of the terms' weight within it. The best answer is the substring of the required length with the highest score. The weight of all the terms appearing in the answer substring is reduced to zero (step six). The final step is the selection of the next best substring; this process repeats until the number of desired substrings is fulfilled. Reducing the terms' weight to zero allows for distinction between each of the answers, eliminating answers that are almost the same. When a term is part of a phrase like \"knowing is half the battle\", the terms in the phrase will usually appear together in the retrieved passages. This means the phrase would be selected if \"knowing\", \"half\", and \"battle\" scored highly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "The idea behind the algorithm is to evaluate potential answers in the passages retrieved using the term frequency equation. The question classification patterns are used to limit the number of possible answers evaluated, which heightens accuracy. The algorithm can select phrases even if not all of their words are possible answers. The term frequency algorithm does not need to know the answer classification to perform proficiently. This is a very robust method of extracting answers, though knowing the question classification does improve the system's mean reciprocal rank considerably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "In the future, term frequencies may be used in combination with Natural Language Processing (NLP) techniques such as named entity tagging to further enhance the system's results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TERM FREQUENCY ALGORITHM",
"sec_num": "3."
},
{
"text": "In a large corpus there is duplicate or supporting information for almost any given question. The term frequency formula utilizes this knowledge through two simple premises: the more a term is repeated, the more likely it is to be the correct answer; and the less likely a term is to appear by chance, the more probable it is to be correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": "4."
},
{
"text": "The duplication component's importance in formula (1) can be evaluated by modifying the value of c in the term frequency equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": "4."
},
{
"text": "w_t = f_t^c \\log(N / c_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": "4."
},
{
"text": "(2) Figure 2 demonstrates the effect that duplicate information in the passages has on the result as c is varied. The graph reveals that as the importance of duplicate terms increases, the performance of the system strengthens. By eliminating the repetition part of the equation (c",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": "4."
},
{
"text": " = 0), the system achieves a mean reciprocal rank of only 0.237. As expected, and as the graph demonstrates, the contribution of this part of the formula reaches a maximum before larger values of c begin to decrease the overall system's accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": "4."
},
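The experiment above can be reproduced in miniature with the generalized weight of equation (2): the voting factor is raised to the power c, and c = 0 collapses it to 1 so repetition no longer helps (function and symbol names are our own):

```python
import math

def term_weight_c(passage_count, corpus_count, corpus_size, c=1.0):
    """Equation (2): raise the passage-frequency 'voting' factor to the
    power c.  c = 0 removes repetition entirely (every term counts
    once); larger c rewards duplicated terms more strongly."""
    return (passage_count ** c) * math.log(corpus_size / corpus_count)

# With c = 0 a term repeated 5 times loses its advantage:
with_votes = term_weight_c(5, 100, 1_000_000, c=1.0)
no_votes = term_weight_c(5, 100, 1_000_000, c=0.0)  # == log(10000)
```

Sweeping c over a range and re-scoring the TREC questions is how a curve like Figure 2 would be produced.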
{
"text": "Overall, the information extraction component improves the question answering system. Notably, the term frequency algorithm does not require information regarding the structure or grammar of a natural language; therefore, the algorithm may be used in many natural languages. The term frequency algorithm can even extract answers when the question's meaning is completely unknown. Having an elementary and reliable way to evaluate each term in a set of passages is useful. One possibility is to add highly weighted terms to the original query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5."
},
{
"text": "In theory, as the corpus size expands, the performance of the system should improve, since more duplicate information becomes available. Finally, the term frequency algorithm has proven beneficial to the overall system and should benefit future applications of question answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Question answering from large document collections",
"authors": [
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "House",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Mani",
"suffix": ""
}
],
"year": 1999,
"venue": "1999 AAAI Fall Symposium on Question Answering Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Breck, J. Burger, D. House, M. Light, and I. Mani. Question answering from large document collections. In 1999 AAAI Fall Symposium on Question Answering Systems, North Falmouth, MA, 1999.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Examining the role of statistical and linguistic knowledge sources in a general-knowledge question-answering system",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 2000,
"venue": "Sixth Applied Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "180--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Cardie, V. Ng, D. Pierce, and C. Buckley. Examining the role of statistical and linguistic knowledge sources in a general-knowledge question-answering system. In Sixth Applied Natural Language Processing Conference, pages 180-187, 2000.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Question answering by passage selection",
"authors": [
{
"first": "C",
"middle": [
"L A"
],
"last": "Clarke",
"suffix": ""
},
{
"first": "G",
"middle": [
"V"
],
"last": "Cormack",
"suffix": ""
},
{
"first": "D",
"middle": [
"I E"
],
"last": "Kisman",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Lynam",
"suffix": ""
}
],
"year": 2000,
"venue": "9th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. L. A. Clarke, G. V. Cormack, D. I. E. Kisman, and T. R. Lynam. Question answering by passage selection. In 9th Text REtrieval Conference, Gaithersburg, MD, 2000.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fast automatic passage ranking",
"authors": [
{
"first": "G",
"middle": [
"V"
],
"last": "Cormack",
"suffix": ""
},
{
"first": "C",
"middle": [
"L A"
],
"last": "Clarke",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Palmer",
"suffix": ""
},
{
"first": "D",
"middle": [
"I E"
],
"last": "Kisman",
"suffix": ""
}
],
"year": 1999,
"venue": "8th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. V. Cormack, C. L. A. Clarke, C. R. Palmer, and D. I. E. Kisman. Fast automatic passage ranking. In 8th Text REtrieval Conference, Gaithersburg, MD, November 1999.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Finding answers in large collections of texts: Paragraph indexing + abductive inference",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Maiorano",
"suffix": ""
}
],
"year": 1999,
"venue": "1999 AAAI Fall Symposium on Question Answering Systems",
"volume": "",
"issue": "",
"pages": "63--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. M. Harabagiu and S. J. Maiorano. Finding answers in large collections of texts: Paragraph indexing + abductive inference. In 1999 AAAI Fall Symposium on Question Answering Systems, pages 63-71, North Falmouth, MA, 1999.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Webclopedia",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "C.-Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Junk",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gerber",
"suffix": ""
}
],
"year": 2000,
"venue": "9th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy, U. Hermjakob, C.-Y. Lin, M. Junk, and L. Gerber. The Webclopedia. In 9th Text REtrieval Conference, Gaithersburg, MD, 2000.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "IBM's statistical question answering system",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2000,
"venue": "9th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ittycheriah, M. Franz, W.-J. Zhu, and A. Ratnaparkhi. IBM's statistical question answering system. In 9th Text REtrieval Conference, Gaithersburg, MD, 2000.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ranking suspected answers to natural language questions using predictive annotation",
"authors": [
{
"first": "D",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Prager",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Samn",
"suffix": ""
}
],
"year": 2000,
"venue": "6th Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. R. Radev, J. Prager, and V. Samn. Ranking suspected answers to natural language questions using predictive annotation. In 6th Conference on Applied Natural Language Processing, Seattle, May 2000.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Halfway to question answering",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Woods",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Houston",
"suffix": ""
}
],
"year": 2000,
"venue": "9th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. A. Woods, S. Green, P. Martin, and A. Houston. Halfway to question answering. In 9th Text REtrieval Conference, Gaithersburg, MD, 2000.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Overview of QA processing.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Significance of repetition in term frequency equation.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td/><td>50-byte answer MRR</td><td>250-byte answer MRR</td></tr><tr><td>baseline</td><td>0.189</td><td>0.407</td></tr><tr><td>parse-generated queries improvement</td><td>0.191 (+1%)</td><td>0.464 (+14%)</td></tr><tr><td>information extraction improvement</td><td>0.357 (+89%)</td><td>0.467 (+15%)</td></tr><tr><td>TREC-9 system</td><td>0.390 (+106%)</td><td>0.507 (+25%)</td></tr></table>",
"html": null,
"text": "Mean reciprocal ranks using TREC-9 evaluation",
"num": null,
"type_str": "table"
}
}
}
}