| { |
| "paper_id": "X98-1025", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:06:12.025717Z" |
| }, |
| "title": "SUMMARIZATION: (1) USING MMR FOR DIVERSITY-BASED RERANKING AND (2) EVALUATING SUMMARIES", |
| "authors": [ |
| { |
| "first": "Jade", |
| "middle": [], |
| "last": "Goldstein", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", |
| "location": { |
| "postCode": "15213", |
| "region": "PA", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", |
| "location": { |
| "postCode": "15213", |
| "region": "PA", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper develops a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in reranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in ad-hoc query retrieval and in single document summarization. The latter are borne out by the trial-run (unofficial) TREC-style evaluation of summarization systems. However, the clearest advantage is demonstrated in the automated construction of large-document and non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection. This paper also discusses our preliminary evaluation of summarization methods for single documents.", |
| "pdf_parse": { |
| "paper_id": "X98-1025", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper develops a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in reranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in ad-hoc query retrieval and in single document summarization. The latter are borne out by the trial-run (unofficial) TREC-style evaluation of summarization systems. However, the clearest advantage is demonstrated in the automated construction of large-document and non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection. This paper also discusses our preliminary evaluation of summarization methods for single documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "With the continuing growth of online information, it has become increasingly important to provide improved mechanisms to find information quickly. Conventional IR systems rank and assimilate documents based on maximizing relevance to the user query [1, 8, 6, 12, 13]. In cases where relevant documents are few, or cases where very high recall is necessary, pure relevance ranking is very appropriate. But in cases where there is a vast sea of potentially relevant documents, highly redundant with each other or (in the extreme) containing partially or fully duplicative information, we must utilize means beyond pure relevance for document ranking.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 252, |
| "text": "[1,", |
| "ref_id": null |
| }, |
| { |
| "start": 253, |
| "end": 255, |
| "text": "8,", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 256, |
| "end": 258, |
| "text": "6,", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 259, |
| "end": 262, |
| "text": "12,", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 263, |
| "end": 266, |
| "text": "13]", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In order to better illustrate the need to combine relevance and anti-redundancy, consider a reporter or a student using a newswire archive collection to research accounts of airline disasters. He composes a well-thought-out query including \"airline crash\", \"FAA investigation\", \"passenger deaths\", \"fire\", \"airplane accidents\", and so on. The IR engine returns a ranked list of the top 100 documents (more if requested), and the user examines the top-ranked document. It's about the suspicious TWA-800 crash near Long Island. Very relevant and useful. The next document is also about \"TWA-800\", so is the next, and so are the following 30 documents. Relevant? Yes. Useful? Decreasingly so. Most \"new\" documents merely repeat information already contained in previously offered ones, and the user could have tired long before reaching the first non-TWA-800 air disaster document. Perfect precision, therefore, may prove insufficient in meeting user needs. (This research was performed as part of Carnegie Group Inc.'s Tipster III Summarization Project under the direction of Mark Borger and Alex Kott.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
| "text": "A better document ranking method for this user is one where each document in the ranked list is selected according to a combined criterion of query relevance and novelty of information. The latter measures the degree of dissimilarity between the document being considered and previously selected ones already in the ranked list. Of course, some users may prefer to drill down on a narrow topic, and others a panoramic sampling bearing relevance to the query. Best is a user-tunable method that focuses the search from a narrow beam to a floodlight. Maximal Marginal Relevance (MMR) provides precisely such functionality, as discussed below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
| "text": "If we consider document summarization by relevant-passage extraction, we must again consider anti-redundancy as well as relevance. Both query-free summaries and query-relevant summaries need to avoid redundancy, as it defeats the purpose of summarization. For instance, scholarly articles often state their thesis in the introduction, elaborate upon it in the body, and reiterate it in the conclusion. Including all three versions in the summary, however, leaves little room for other useful information. If we move beyond single document summarization to document cluster summarization, where the summary must pool passages from different but possibly overlapping documents, reducing redundancy becomes an even more significant problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Luhn's work at IBM in the 1950s [12], and evolved through several efforts including Tait [24] and Paice in the 1980s [17, 18]. Much early work focused on the structure of the document to select information. In the 1990s several approaches to summarization blossomed, including trainable methods [10], linguistic approaches [8, 15] and our information-centric method [2], the first to focus on query-relevant summaries and anti-redundancy measures. As part of the TIPSTER program [25], new investigations have started into summary creation using a variety of strategies. These new efforts address query-relevant as well as \"generic\" summaries and utilize a variety of approaches, including co-reference chains (from the University of Pennsylvania) [25], the combination of statistical and linguistic approaches (Smart and Empire) from SaBir Research, Cornell University and GE R&D Labs, topic identification and interpretation from ISI, and template-based summarization from New Mexico State University [25].", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 37, |
| "text": "[12]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 91, |
| "end": 95, |
| "text": "[24]", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 119, |
| "end": 123, |
| "text": "[17,", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 124, |
| "end": 127, |
| "text": "18]", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 297, |
| "end": 301, |
| "text": "[10]", |
| "ref_id": null |
| }, |
| { |
| "start": 326, |
| "end": 329, |
| "text": "[8,", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 330, |
| "end": 333, |
| "text": "15]", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 369, |
| "end": 372, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 483, |
| "end": 487, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 757, |
| "end": 761, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1017, |
| "end": 1021, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated document summarization dates back to", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we discuss the Maximal Marginal Relevance method (Section 2), its use for document reranking (Section 3), our approach to query-based single document summarization (Section 4), and our approaches to long documents (Section 5) and multi-document summarization (Section 6). We also discuss our evaluation of single document summarization (Sections 7-8) and our preliminary results (Section 9).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated document summarization dates back to", |
| "sec_num": null |
| }, |
| { |
| "text": "Most modern IR search engines produce a ranked list of retrieved documents ordered by declining relevance to the user's query [1, 18, 21, 26]. In contrast, we motivated the need for \"relevant novelty\" as a potentially superior criterion. However, there is no known way to directly measure new-and-relevant information, especially given traditional bag-of-words methods such as the vector-space model [19, 21]. A first approximation to measuring relevant novelty is to measure relevance and novelty independently and provide a linear combination as the metric. We call the linear combination \"marginal relevance\" -- i.e., a document has high marginal relevance if it is both relevant to the query and contains minimal similarity to previously selected documents. We strive to maximize marginal relevance in retrieval and summarization, hence we label our method \"maximal marginal relevance\" (MMR).", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 129, |
| "text": "[1,", |
| "ref_id": null |
| }, |
| { |
| "start": 130, |
| "end": 133, |
| "text": "18,", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 134, |
| "end": 137, |
| "text": "21,", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 138, |
| "end": 141, |
| "text": "26]", |
| "ref_id": null |
| }, |
| { |
| "start": 402, |
| "end": 406, |
| "text": "[19,", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 407, |
| "end": 410, |
| "text": "21]", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAXIMAL MARGINAL RELEVANCE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Let C = document collection (or document stream). Let Q = ad-hoc query (or analyst profile or topic/category specification). Let R = IR(C, Q, \u03b8), i.e., the ranked list of documents retrieved by an IR system, given C and Q and a relevance threshold \u03b8, below which it will not retrieve documents (\u03b8 can be a degree of match, or a number of documents).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximal Marginal Relevance (MMR) metric is defined as follows:", |
| "sec_num": null |
| }, |
| { |
| "text": "Let S = subset of documents in R already provided to the user. (Note that in an IR system without MMR and dynamic reranking, S is typically a proper prefix of list R.) R\\S is the set difference, i.e., the set of documents in R not yet offered to the user.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximal Marginal Relevance (MMR) metric is defined as follows:", |
| "sec_num": null |
| }, |
| { |
| "text": "MMR(C, Q, R, S) = Argmax[Di \u2208 R\\S] [ \u03bb Sim1(Di, Q) - (1 - \u03bb) Max[Dj \u2208 S] Sim2(Di, Dj) ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximal Marginal Relevance (MMR) metric is defined as follows:", |
| "sec_num": null |
| }, |
| { |
| "text": "Given the above definition, MMR computes incrementally the standard relevance-ranked list when the parameter \u03bb = 1, and computes a maximal diversity ranking among the documents in R when \u03bb = 0. For intermediate values of \u03bb in the interval [0,1], a linear combination of both criteria is optimized. Users wishing to sample the information space around the query should set \u03bb to a smaller value, and those wishing to focus in on multiple potentially overlapping or reinforcing relevant documents should set \u03bb to a value closer to 1. For document retrieval, we found that a particularly effective search strategy (reinforced by the user study discussed below) is to start with a small \u03bb (e.g., \u03bb = 0.3) in order to understand the information space in the region of the query, and then to focus on the most important parts using a reformulated query (possibly via relevance feedback) and a larger value of \u03bb (e.g., \u03bb = 0.7).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximal Marginal Relevance (MMR) metric is defined as follows:", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the similarity metric Sim1 used in document retrieval and relevance ranking between documents and query could be the same as Sim2 between documents (e.g., both could be cosine similarity), but this need not be the case. A more accurate, but computationally more costly, metric could be used when applied only to the elements of the retrieved document set R, given that |R| << |C|, if MMR is applied for re-ranking the top portion of the ranked list produced by a standard IR system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximal Marginal Relevance (MMR) metric is defined as follows:", |
| "sec_num": null |
| }, |
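The greedy selection implied by the MMR definition above can be sketched in a few lines. The following is a minimal illustration, assuming documents and the query are represented as sparse term-weight dictionaries and that cosine similarity is used for both Sim1 and Sim2; the names `mmr_rerank` and `cosine` are ours for illustration, not part of the PURSUIT or SMART implementations described in the next section.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (term -> weight dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_rerank(query, docs, lam, k):
    """Greedily pick k documents, each maximizing
    lam * Sim1(d, query) - (1 - lam) * max over s in selected of Sim2(d, s)."""
    selected = []            # S: documents already offered to the user
    remaining = dict(docs)   # R \ S: candidate name -> vector
    while remaining and len(selected) < k:
        def mmr_score(name):
            rel = cosine(remaining[name], query)                    # Sim1
            red = max((cosine(remaining[name], docs[s]) for s in selected),
                      default=0.0)                                  # max Sim2
            return lam * rel - (1.0 - lam) * red
        best = max(remaining, key=mmr_score)
        selected.append(best)
        del remaining[best]
    return selected
```

With lam = 1 this reduces to pure relevance ranking; with lam = 0 it produces a maximal-diversity ordering, matching the behavior described above (e.g., an exact duplicate of an already-selected document scores poorly for any lam < 1).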
| { |
| "text": "We implemented MMR in two retrieval engines, PURSUIT (an upgraded version of the original retrieval engine inside the Lycos search engine) [9] and SMART (the publicly available version of the Cornell IR engine) [1]. Using the scoring functions available in each system for both Sim1 and Sim2, we obtained consistent and expected results in the behavior of the two systems.", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 146, |
| "text": "[9]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DOCUMENT REORDERING", |
| "sec_num": "3." |
| }, |
| { |
| "text": "The results of MMR reranking are shown in Table 1. In this Reuters document collection, article 1403 is a duplicate of 1388. MMR reranking performs as expected: for decreasing values of \u03bb, the ranking of 1403 drops.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 50, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "DOCUMENT REORDERING", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Also as predicted, novel but still relevant information as evidenced by document 69 starts to increase in ranking. Relevant, but similar to the highest ranked documents, such as document 1713 drop in ranked ordering. Document 2149 's position varies depending on its similarity to previously seen information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DOCUMENT REORDERING", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We also performed a pilot experiment with five users who were undergraduates from various disciplines. The purpose of the study was to find out if they could tell the difference between the standard ranked document order retrieved by SMART and an MMR-reranked order with \u03bb = 0.5. They were asked to perform nine different search tasks to find information and were asked various questions about the tasks. They used two methods to retrieve documents, known only as R and S. Parallel tasks were constructed so that one set of users would perform method R on one task and method S on a similar task. Users were not told how the documents were presented, only that either \"method R\" or \"method S\" was used and that they needed to try to distinguish the differences between the methods. After each task we asked them to record the information found. We also asked them to look at the rankings for method R and method S and see if they could tell any difference between the two. The majority of people said they preferred the method which, in their opinion, gave the broadest and most interesting topics. In the final section they were asked to select a search method and use it for a search task. 80% (4 out of 5) chose the MMR method. The person who chose SMART stated it was because \"it tends to group more like stories together.\" The users indicated a differential preference for MMR for navigation and for locating relevant candidate documents more quickly, and for pure-relevance ranking when looking at related documents within that band. Three of the five users clearly discovered the differential utility of diversity search and relevance-only search; one user explicitly stated his strategy. The initial study was too small to yield statistically significant trends with respect to speed of known-item retrieval, or recall improvements for broader query tasks. However, based on our own experience and questionnaire responses from the five users, we expect that task demands play a large role with respect to which method yields better performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DOCUMENT REORDERING", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Human summarization of documents, sometimes called \"abstraction\", typically produces a fixed-length generic summary, reflecting the key points that the abstractor -- rather than the user -- deems important. Consider a physician evaluating a particular chemotherapy regimen who wants to know about its adverse effects on elderly female patients. The retrieval engine produces several lengthy reports (e.g., a 300-page clinical study), whose abstracts do not contain any hint of whether there is information regarding effects on elderly patients. A useful summary for this physician would contain query-relevant passages (e.g., differential adverse effects on elderly males and females, buried on pages 211-212 of the clinical study) assembled into a summary. A different user with different information needs may require a totally different summary of the same document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We developed a minimal-redundancy query-relevant summarizer-by-extraction method, which differs from previous work in summarization [10, 12, 15, 18, 24] in several dimensions.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 135, |
| "text": "[10,", |
| "ref_id": null |
| }, |
| { |
| "start": 136, |
| "end": 139, |
| "text": "12,", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 140, |
| "end": 143, |
| "text": "15,", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 144, |
| "end": 147, |
| "text": "18,", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 148, |
| "end": 151, |
| "text": "24]", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "\u2022 Optional query relevance: as discussed above, a query or a user interest profile (or the vector sum of both, appropriately weighted) is used to select relevant passages. If a generic query-free summary is desired, the centroid vector of the document is calculated and passages are selected with the principal components of the centroid as the query.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "\u2022 Variable granularity summarization: The length of the summary is under user control. Brief summaries are useful for indicative purposes (e.g. whether to read further), and longer ones for drilling and extracting detailed information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "\u2022 Non-redundancy: Information density is enhanced by ensuring a degree of dissimilarity between passages contained in the summary. The degree of query-focus vs. diversity sampling is under user control (the \u03bb parameter in the MMR formula).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Our process for creating single document summaries is as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "1. Segment a document into passages and index the passages using the inverted indexing method used by the IR engine for full documents. Passages may be phrases, sentences, n-sentence chunks, or paragraphs. For the TIPSTER III evaluation, we used sentences as passages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SINGLE DOCUMENT SUMMARIES", |
| "sec_num": "4." |
| }, |
| { |
| "text": "to the query. Use a threshold below which the passages are discarded. We used a similarity metric based on cosine similarity using the traditional TF-IDF weights.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Within a document, identify the passages relevant", |
| "sec_num": "2." |
| }, |
| { |
| "text": "the passages (rather than full documents).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Depending on the desired length of the summary, select a few passages or a larger number. If the \u03bb parameter is not very close to 1, redundant query-relevant passages will tend to be eliminated and other, slightly less query-relevant passages will be included. We allow the user to select the number of passages or the percentage of the document size (also known as the \"compression ratio\"). 4. Reassemble the selected passages into a summary document using one of the following summary-cohesion criteria: \u2022 Document appearance order:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Present the segments according to their order of presentation in the original document. If the first sentence is longer than a threshold, we automatically include this sentence in the summary as it tends to set the context for the article. If the user only wants to view a few segments, the first sentence must also meet a threshold for sentence rank to be included.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u2022 News-story principle: Present the information in MMR-ranked order, i.e., the most relevant and most diverse information first. In this manner, the reader gets the maximal information even if they stop reading the summary. This allows the diversity of relevant information to be presented earlier, and topics introduced may be revisited after other relevant topics have been introduced. \u2022 Topic-cohesion principle: First group together the document segments by topic clustering (using sub-document similarity criteria). Then rank the centroids of each cluster by MMR (most important first) and present the information, a topic-coherent cluster at a time, starting with the cluster whose centroid ranks highest.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
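The four-step process above, combined with the document-appearance-order criterion, can be sketched as follows. This is an illustrative bag-of-words version: the `summarize` name, the regex sentence splitter, and the raw term counts are our simplifications, whereas the actual system used SMART's inverted indexing with TF-IDF weights, stemming, and stopword removal.

```python
import math
import re
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def summarize(text, query, lam=0.7, n_passages=2, threshold=0.1):
    # Step 1: segment the document into sentence "passages".
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    vecs = [Counter(re.findall(r"\w+", s.lower())) for s in sentences]
    qvec = Counter(re.findall(r"\w+", query.lower()))
    # Step 2: discard passages below the query-relevance threshold.
    cands = [i for i, v in enumerate(vecs) if cosine(v, qvec) > threshold]
    # Step 3: greedy MMR selection over the surviving passages.
    selected = []
    while cands and len(selected) < n_passages:
        best = max(cands, key=lambda i: lam * cosine(vecs[i], qvec)
                   - (1 - lam) * max((cosine(vecs[i], vecs[j]) for j in selected),
                                     default=0.0))
        selected.append(best)
        cands.remove(best)
    # Step 4: reassemble in document-appearance order.
    return " ".join(sentences[i] for i in sorted(selected))
```

For example, on a three-sentence document about a plane crash, a query of "plane crash" keeps the two crash-related sentences, MMR orders them by relevance and novelty, and the output restores their original document order while dropping the off-topic sentence.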
| { |
| "text": "We implemented query-relevant document-appearance-based sequencing of information. Our method of summarization does not require the more elaborate language regeneration needed by Kathy McKeown and her group at Columbia in their summarization work [15]. As such our method is simpler, faster and more widely applicable, but yields potentially less cohesive summaries. All summary results in this paper use the SMART search engine, with stopwords eliminated from the indexed data and stemming applied.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 250, |
| "text": "[15]", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Query: Delaunay refinement mesh generation finite element method foundations three dimension analysis; \u03bb = 0.3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[1] Delaunay refinement is a technique for generating unstructured meshes of triangles or tetrahedra suitable for use in the finite element method or other numerical methods for solving partial differential equations. [5] The purpose of this thesis is to further this progress by cementing the foundations of two-dimensional Delaunay refinement, and by extending the technique and its analysis to three dimensions. [15] Nevertheless, Delaunay refinement methods for tetrahedral mesh generation have the rare distinction that they offer strong theoretical bounds and frequently perform well in practice.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 221, |
| "text": "[5]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 415, |
| "end": 419, |
| "text": "[15]", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[39] If one can generate meshes that are completely satisfying for numerical techniques like the finite element method, the other applications fall easily in line.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[131] Our understanding of the relative merit of different metrics for measuring element quality, or the effects of small numbers of poor quality elements on numerical solutions, is based as much on engineering experience and rumor as it is on mathematical foundations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[158] Delaunay refinement methods are based upon a wellknown geometric construction called the Delaunay triangulation, which is discussed extensively in the mesh generation chapter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[201] I first extend Ruppert's algorithm to three dimensions, and show that the extension generates nicely graded tetrahedral meshes whose circumradius-to-shortest edge ratios are nearly bounded below two.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[2250] Refinement Algorithms for Quality Mesh Generation: Delaunay refinement algorithms for mesh generation operate by maintaining a Delaunay or constrained Delaunay triangulation, which is refined by inserting carefully placed vertices until the mesh meets constraints on element quality and size.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3648] I do not know to what difference between the algorithms one should attribute the slightly better bound for Delaunay refinement, nor whether it marks a real difference between the algorithms or is an artifact of the different methods of analysis. Query: sliver mesh boundary removal small angles; \u03bb = 0.7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[1] Delaunay refinement is a technique for generating unstructured meshes of triangles or tetrahedra suitable for use in the finite element method or other numerical methods for solving partial differential equations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[129] Hence, many mesh generation algorithms take the approach of attempting to bound the smallest angle.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[2621] Because s is locked, inserting a vertex at c will not remove t from the mesh.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[2860] Of course, one must respect the PSLG; small input angles cannot be removed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3046] The worst slivers can often be removed by Delaunay refinement, even if there is no theoretical guarantee.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3047] Meshes with bounds on the circumradius-to-shortest edge ratios of their tetrahedra are an excellent starting point for mesh smoothing and optimization methods designed to remove slivers and improve the quality of an existing mesh (see smoothing section).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3686] If one inserts a vertex at the circumcenter of each sliver tetrahedron, will the algorithm fail to terminate?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3702] A sliver can always be eliminated by splitting it, but how can one avoid creating new slivers in the process?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3723] Unfortunately, my practical success in removing slivers is probably due in part to the severe restdctions on input angle I have imposed upon Delaunay refinement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "[3724] Practitioners report that they have the most difficulty removing slivers at the boundary of a mesh, especially near small angles. Figure 2 : Focused-query MMR-generated summary of dissertation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 145, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Apply the MMR metric as defined in Section 2 to", |
| "sec_num": "3." |
| }, |
| { |
| "text": "The MMR-passage selection ':'method for summarization works better for longer documents (which typically contain more inherent passage redundancy across document sections such as abstract, introduction, conclusion, results, etc.). To demonstrate the quality of summaries that can be obtained for long documents, we summarized an entire dissertation containing 3,772 sentences with a generic topic query constructed by expanding the thesis title ( Figure 1 ). In contrast, Figure 2 shows the results of a more specialized query with a larger L value to focus summarization less on diversity and more on topic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 447, |
| "end": 455, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 472, |
| "end": 480, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "SUMMARIZING DOCUMENTS LONGER", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The above example demonstrates the utility of query relevance in summarization and the incremental utility of controlling summary focus via the lambda parameter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SUMMARIZING DOCUMENTS LONGER", |
| "sec_num": "5." |
| }, |
| { |
| "text": "It also highlights a shortcoming of summarization by extraction, namely coping with antecedent references. Sentence [2621] refers to coefficients \"s\", \"c\", and \"t,\" which do not make sense outside the framework that defines them. Such referential problems are ameliorated with increased passage length, for instance using paragraphs rather than sentences. However, longer-passage selection also implies longer summaries. Another solution co-reference resolution [25] .", |
| "cite_spans": [ |
| { |
| "start": 462, |
| "end": 466, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SUMMARIZING DOCUMENTS LONGER", |
| "sec_num": "5." |
| }, |
| { |
| "text": "As discussed earlier, MMR passage selection works equally well for summarizing single documents or clusters of topically related documents. Our method for multi-document summarization follows the same basic procedure as that of single document summarization (see section 4). In step 2 (Section 4), we identify the N most relevant passages from each of the documents in the collection and use them to form the passage set to be MMR re-ranked. N is dependent on the desired resultant length of the summary. We used N relevant passages from each document collection rather than the top relevant passages in the entire collection so that each article had a chance to provide a query-relevant contribution. In the future we intend to compare this to using MMR ranking where the entire document set is treated as a single document. Steps 2, 3 and 4 are primarily the same.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MULTI-DOCUMENT SUMMARIES is", |
| "sec_num": "6." |
| }, |
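The greedy MMR re-ranking procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: cosine similarity over raw term counts stands in for the paper's Sim1 and Sim2, and the function names are our own.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity over bag-of-words term-frequency vectors.
    num = sum(a[t] * b[t] for t in a if t in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def mmr_rerank(query, passages, lam=0.3, k=10):
    """Greedy MMR: trade off query relevance against redundancy.

    lam=1.0 reduces to pure relevance ranking; smaller lam favors diversity.
    """
    q = Counter(query.lower().split())
    vecs = [Counter(p.lower().split()) for p in passages]
    selected, remaining = [], list(range(len(passages)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(q, vecs[i])
            red = max((cosine(vecs[i], vecs[j]) for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [passages[i] for i in selected]
```

For the multi-document setting described above, `passages` would first be populated with the top-N query-relevant sentences drawn from each document in the cluster.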
| { |
| "text": "The TIPSTER evaluation corpus provided several sets of topical clusters to which we applied MMR summarization. In one such example on a cluster of apartheid-related documents, we used the topic description as the query (see Figure 3 ) and N was set to 4 (4 sentences per article were reranked). The top 10 sentences for ~ = 1 (effectively query relevance, but no MMR) and k = .3 (both query relevance and MMR anti-redundancy) are shown in Figures 4 and 5 respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 224, |
| "end": 232, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 439, |
| "end": 454, |
| "text": "Figures 4 and 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MULTI-DOCUMENT SUMMARIES is", |
| "sec_num": "6." |
| }, |
| { |
| "text": "The summaries clearly demonstrate the need for MMR in passage selection. The 7~ = 1 case exhibits considerable redundancy, ranging from nearreplication in passages [4] and [5] to redundant content in passages [7] and [9] . Whereas the L = .3 case exhibits no such redundancy. Counting clearly distinct propositions in both cases yields a 20% greater information content for the MMR case, though both summaries are equivalent in length.", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 167, |
| "text": "[4]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 172, |
| "end": 175, |
| "text": "[5]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 209, |
| "end": 212, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 217, |
| "end": 220, |
| "text": "[9]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MULTI-DOCUMENT SUMMARIES is", |
| "sec_num": "6." |
| }, |
| { |
| "text": "<head> Tipster Topic Description <num> Number: 110 <dom> Domain: International Politics <title> Topic: Black Resistance Against the South African Government <desc> Description: Document will discuss efforts by the black majority in South Afdca to overthrow domination by the white minority government. <smry> Summary: Document will discuss efforts by the black majority in South Africa to overthrow domination by the white minority government. <narr> Narrative: A relevant document will discuss any effort by blacks to force political change in South Africa. The reported black challenge to apartheid may take any form --military, political, or economic --but of greatest interest would be information on reported activities by armed personnel linked to the African National Congress ( [1] [761] AP880212-0060 [15] ANGOP quoted the Angolan statement as saying the main causes of conflict in the region are South Africa's \"'illegal occupation\" of Namibia, South African attacks against its black-ruled neighbors and its alleged creation of armed groups to carry out \"'terrorist activities\" in those countries, and the denial of political rights to the black majodty in South Africa. [2] [758] AP880803-0080 [25] Three Canadian anti-apartheid groups issued a statement urging the government to sever diplomatic and economic links with South Africa and aid the African National Congress, the banned group fighting the white-dominated government in South Africa. [3] [756] AP880803-0082 [25] Three Canadian anti-apartheid groups issued a statement urging the government to sever diplomatic and economic links with South Africa and aid the African National Congress, the banned group fighting the white-dominated government in South Africa. [4] [790] AP880802-0165 [27] South Africa says the ANC, the main black group fighting to overthrow South Africa's white government, has seven major military bases in Angola, and the Pretona government wants those bases closed down. 
[5] [654] AP880803-0158 [27] South Africa says the ANC, the main black group fighting to overthrow South Africa's white-led government, has seven major military bases in Angola, and it wants those bases closed down. [6] [92] WSJ910204-0176 [2] de Klerk's proposal to repeal the major pillars of apartheid drew a generally positive response from black leaders, but African National Congress leader Nelson Mandela called on the international community to continue economic sanctions against South Africa until the government takes further steps. [7] [781] AP880823-0069 [18] The ANC is the main guerrilla group fighting to overthrow the South African government and end apartheid, the system of racial segregation in which South Africa's black majority has no vote in national affairs. [8] [375] WSJ890908-0159 [24] For everywhere he turns, he hears the same mantra of demands --release, lift bans, dismantle, negotiate --be it from local anti-apartheid activists or from foreign governments: release political prisoners, like African National Congress leader Nelson Mandela; lift bans on all political organizations, such as the ANC, the Pan Africanist Congress and the United Democratic Front; dismantle all apartheid legislation; and finally, begin negotiations with leaders of all races. [9] [762] AP880212-0060 [14] The African National Congress is the main rebel movement fighting South Africa's white-led government and SWAPO is a black guerrilla group fighting for independence for Namibia, which is administered by South Africa. [10] [91] WSJ910404-0007 [8] Under an agreement between the South African government and the African National Congress, the major anti-apartheid organization, South Africa's remaining political prisoners are scheduled for release by April 30. 
[15] ANGOP quoted the Angolan statement as saying the main causes of conflict in the region are South Africa's \"'illegal occupation\" of Namibia, South African attacks against its black-ruled neighbors and its alleged creation of armed groups to carry out \"'terrorist activities\" in those countries, and the denial of political rights to the black majority in South Africa. [ [11] These included a picture of Oliver Tambo, the exiled leader of the banned African National Congress; a story about 250 women attending an ANC conference in southern Africa; a report on the crisis in black education; and an advertisement sponsored by a Catholic group in West Germany that quoted a Psalm and called for the abolition of torture in South Africa. [8] [12] [303] AP880621-0089 [8] There was no immediate comment from South Africa, which in the past has staged cross-border raids on Botswana and other neighboring countries to attack suspected facilities of the African National Congress, which seeks to overthrow South Africa's white-led government. [9] [24] [502] WSJ900510-0088 [24] While the membership of Inkatha, the religiously and politically conservative group that is the ANC's chief rival for power in black South Africa, is overwhelmingly Zulu, Inkatha's leader, Mangosutho Buthelezi, has very seldom appealed to sectional tribal loyalties. [10] [16] [593] AP890821-0092 [11] Besides ending the emergency and lifting bans on anti-apartheid groups and individual activists, the Harare summit's conditions included the removal of all troops from South Africa's black townships, releasing all political prisoners and ending political trials and executions, and a government commitment to free political discussion. 
[14] ANGOP quoted the Angolan statement as saying the main causes of conflict in the region are South Africa's \"'illegal occupation\" of Namibia, South African attacks against its black-ruled neighbors and its alleged creation of armed groups to carry out \"'terrorist activities\" in those countries, and the denial of political rights to the black majority in South Africa. </TEXT> Figure 6 : Single Document Summary AP880212-0060, 10% of document length.",
| "cite_spans": [ |
| { |
| "start": 810, |
| "end": 814, |
| "text": "[15]", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1182, |
| "end": 1185, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1206, |
| "end": 1210, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1459, |
| "end": 1462, |
| "text": "[3]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1483, |
| "end": 1487, |
| "text": "[25]", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1736, |
| "end": 1739, |
| "text": "[4]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1760, |
| "end": 1764, |
| "text": "[27]", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 1968, |
| "end": 1971, |
| "text": "[5]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1992, |
| "end": 1996, |
| "text": "[27]", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 2184, |
| "end": 2187, |
| "text": "[6]", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 2208, |
| "end": 2211, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 2512, |
| "end": 2515, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 2536, |
| "end": 2540, |
| "text": "[18]", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 2752, |
| "end": 2755, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 2777, |
| "end": 2781, |
| "text": "[24]", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 3257, |
| "end": 3260, |
| "text": "[9]", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 3281, |
| "end": 3285, |
| "text": "[14]", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 3503, |
| "end": 3507, |
| "text": "[10]", |
| "ref_id": null |
| }, |
| { |
| "start": 3528, |
| "end": 3531, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 3744, |
| "end": 3748, |
| "text": "[15]", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 4115, |
| "end": 4116, |
| "text": "[", |
| "ref_id": null |
| }, |
| { |
| "start": 4117, |
| "end": 4121, |
| "text": "[11]", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 4480, |
| "end": 4483, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 4484, |
| "end": 4488, |
| "text": "[12]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 4509, |
| "end": 4512, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 4780, |
| "end": 4783, |
| "text": "[9]", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 4784, |
| "end": 4788, |
| "text": "[24]", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 4810, |
| "end": 4814, |
| "text": "[24]", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 5080, |
| "end": 5084, |
| "text": "[10]", |
| "ref_id": null |
| }, |
| { |
| "start": 5085, |
| "end": 5089, |
| "text": "[16]", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 5110, |
| "end": 5114, |
| "text": "[11]", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 784, |
| "end": 785, |
| "text": "(", |
| "ref_id": null |
| }, |
| { |
| "start": 5829, |
| "end": 5837, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Topic:", |
| "sec_num": null |
| }, |
| { |
| "text": "As can be seen from the above summaries, multidocument synthetic summaries require support in the user interface. In particular, the following issues need to be addressed:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic:", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Attributability: The user needs to be able to access easily the source of a given passage. This could be the single document summary (see Figure 6 ). \u2022 Contextually: The user needs to be able to zoom in on the context surrounding the chosen passages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 148, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Topic:", |
| "sec_num": null |
| }, |
| { |
| "text": "The user should be able to highlight certain parts of the synthetic summary and give a command to the system indicating that these parts are to be weighted heavily and that other parts are to be given a lesser weight.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 Redirection:", |
| "sec_num": null |
| }, |
| { |
| "text": "An ideal text summary contains the relevant information for which the user is looking, excludes extraneous information, provides background to suit the user's profile, eliminates redundant information and filters out relevant information that the user knows or has seen. The first step in building such summaries is extracting the relevant pieces of articles to a user query. We performed a pilot evaluation in which we used a database of assessor marked relevant sentences to examine how well a summarization system could extract the relevant sections of documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Automatically generating text extraction summaries based on a query or high frequency words from the text can produce a reasonable looking summary, yet this summary can be far from the optimal goal of quality summaries: readable, useful, intelligible, appropriate length summaries from which the information that the user is seeking can be extracted. Jones & Galliers define this type of evaluation as intrinsic (measuring a system's quality) compared to extrinsic (measuring a system's performance in a given task) [7] .", |
| "cite_spans": [ |
| { |
| "start": 516, |
| "end": 519, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "In the past year, there has been a focus in TIPSTER on both the intrinsic and extrinsic aspects of summarization evaluation [4] .", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 127, |
| "text": "[4]", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "The evaluation consisted of three tasks (1) determining document relevance to a topic for query-relevant summaries (an indicative summary), (2) determining categorization for generic summaries (an indicative summary), (3) establishing whether summaries can answer a specified set of questions (an informative summary) by comparison to an ideal summary. In each task, the summaries are rated in terms of confidence in decision, intelligibility and length. Jing, Barzilay, McKeown and Elhadad [6] performed a pilot experiment (40 sentences) in which they examined the performance (precision-recall) of three summarization systems (one using notion of number of sentences, the other two using numbers of words or number of clauses). They compared the performance of these systems against human ideal summaries and found that different systems achieved their best performances at different lengths (compression ratios).", |
| "cite_spans": [ |
| { |
| "start": 491, |
| "end": 494, |
| "text": "[6]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "They also found the same results for determining document relevance to a topic (one of the TIPSTER tasks) for query-relevant summaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Our approach to summarization is different from Columbia and TIPSTER in that the focus is not on an \"ideal human summary\" of any particular document cutoff size. An ideal summarization system must first be able to recognize the relevant sentences (or parts of a document) for a topic or query and then be able to create a summary from these relevant segments. Although a list of words, an index or table of contents, is an appropriate label summary and can indicate relevance, informative summaries need at least noun-verb phrases. We choose to use the sentence as our underlying unit and evaluated summarization systems for the first stage of summary creation -coverage of relevant sentences. Other systems [16, 23] use the paragraph as a summary unit. Since the paragraph consists of more than one sentence and often more than one information unit, it is not as suitable for this type of evaluation, although it may be more suitable for a construction unit in summaries due to the additional context that it provides. For example., paragraphs will often solve co-reference issues, yet provide additional nonrelevant information.", |
| "cite_spans": [ |
| { |
| "start": 708, |
| "end": 712, |
| "text": "[16,", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 713, |
| "end": 716, |
| "text": "23]", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "One of the issues in summarization evaluation is how to score (penalize) extraneous non-useful information contained in a summary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Unlike document information retrieval, text summarization evaluation has not extensively addressed the performance of different methodologies by evaluating the effects of different components. Most summarization systems use linguistic knowledge as well as a statistical component [3, 5, 16, 23] . We applied the monolingual information retrieval method of query expansion [20, 27, 28] to summarization, using parts of the document to expand our queries.", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 283, |
| "text": "[3,", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 284, |
| "end": 286, |
| "text": "5,", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 287, |
| "end": 290, |
| "text": "16,", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 291, |
| "end": 294, |
| "text": "23]", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "We also performed compression experiments. We used a modified version of the 11pt average recall/precision (Section 9.2) to evaluate our results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EVALUATION OF SINGLE DOCUMENT SUMMARIZATION", |
| "sec_num": "7." |
| }, |
| { |
| "text": "For our pilot experiment, we created two data sets, one based on relevant sentence judgments, the other based on model summaries (Section 8.1). We then defined a modified version of the 11-point average recall precision (Section 8.2) to use as our evaluation measure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EXPERIMENT DESIGN", |
| "sec_num": "8." |
| }, |
| { |
| "text": "We then performed experiments as described in Section 9 to evaluate the effects of MMR, query expansion, and compression.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EXPERIMENT DESIGN", |
| "sec_num": "8." |
| }, |
| { |
| "text": "We created two data sets for our pilot experiments. For the first { 110 Set} we took 50 documents from the TIPSTER evaluation provided set of 200 news articles spanning 1988-1991. All these documents were on the same topic (see Figure 3) . Three evaluators ranked each of the sentences in the document as relevant, somewhat relevant and not relevant. For the purpose of this experiment, somewhat relevant was treated as relevant and the final score for the sentence was determined by a majority vote. The sentences that received this majority vote were tabulated as a relevant sentence (to the topic). The document was ranked as relevant or not relevant.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 228, |
| "end": 237, |
| "text": "Figure 3)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "All three assessors had 68% agreement in their relevance judgments. The query was extracted from the topic (see Figure 3 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 112, |
| "end": 120, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "The second data set {Model Sutures} was provided as a training set for the Question and Answer portion of the TIPSTER evaluation. It consisted of \"model summaries\" which contained sentences of an article that answered a list of questions. These model sentences were used to score the summarizer. The query was extracted from the questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "We modified the 11-pt recall-precision curves [21] commonly used for document information retrieval. Since many documents only have a few relevant sentences, corresponding curves for summarization have a lot of intervals with missing data items. To remedy this situation, we implemented a step function for the precision values. This allowed the recall intervals that would not naturally be filled to be assigned an actual precision value. For example, in the case of two relevant sentences in the document, points 0-5 (the first five intervals) would all have the first precision value (naturally occurring at point 5) and points 6-10 (the second value), the second value (naturally occurring at point 10). We interpolated the results of each query for the composite graph to form modified interpolated recall-precision curves.", |
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 50, |
| "text": "[21]", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Code", |
| "sec_num": "8.2" |
| }, |
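The step-function measure just described can be sketched as follows. This is an illustrative reconstruction under our reading of the text: precision is recorded at each relevant sentence, and every one of the 11 recall points takes the precision of the first naturally occurring recall level at or beyond it. The function name and tie-breaking details are our assumptions.

```python
def modified_11pt_precision(relevance):
    """relevance: list of 0/1 flags for the ranked summary sentences.

    Returns 11 precision values at recall points 0.0, 0.1, ..., 1.0,
    using a step function so that recall intervals with no naturally
    occurring value inherit the precision at the next relevant sentence.
    """
    total_rel = sum(relevance)
    if total_rel == 0:
        return [0.0] * 11
    # Recall and precision observed at each relevant sentence.
    points = []
    hits = 0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            points.append((hits / total_rel, hits / rank))
    out = []
    for i in range(11):
        target = i / 10
        # Step function: first observed recall level reaching this point.
        prec = next((p for r, p in points if r >= target - 1e-9), 0.0)
        out.append(prec)
    return out
```

With two relevant sentences this reproduces the worked example in the text: points 0-5 take the first precision value and points 6-10 the second.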
| { |
| "text": "In order to account for the fact that a compressed summary does not have the opportunity to return the full set of relevant sentences, we use a normalized version of recall and a normalized version of F1 as defined below. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Code", |
| "sec_num": "8.2" |
| }, |
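The normalized definitions themselves did not survive extraction above, so the following is only a plausible sketch under the stated motivation: recall is normalized by the maximum number of relevant sentences a summary of the given length could possibly contain. The paper's exact formulas may differ.

```python
def normalized_recall_f1(n_rel_retrieved, n_retrieved, n_rel_total):
    """Sketch of normalized recall and normalized F1 (assumed forms).

    A summary of n_retrieved sentences can return at most
    min(n_retrieved, n_rel_total) relevant sentences, so recall is
    normalized by that attainable maximum rather than by n_rel_total.
    """
    attainable = min(n_retrieved, n_rel_total)
    recall_norm = n_rel_retrieved / attainable if attainable else 0.0
    precision = n_rel_retrieved / n_retrieved if n_retrieved else 0.0
    if precision + recall_norm == 0:
        return recall_norm, 0.0
    # Harmonic mean of precision and normalized recall.
    f1_norm = 2 * precision * recall_norm / (precision + recall_norm)
    return recall_norm, f1_norm
```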
| { |
| "text": "In this section we describe the experiments we performed and results obtained in evaluating the diversity gain -MMR (Section 9.1), query expansion (Section 9.2) and compression (Section 9.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EXPERIMENTS AND RESULTS", |
| "sec_num": "9." |
| }, |
| { |
| "text": "In order to evaluate what the relevance loss for the MMR diversity gain in single document summarization, we created summaries for two document length percentages (measured by number of sentences) and determined how many relevant sentences the summaries contained. The results are given in Table 2 for document percentages 0.25 and 0.1. Two precision scores were calculated, (1) that of TREC relevance plus at least one CMU assessor marking the document as relevant (yielding 23 documents) and (2) at least two of the three CMU assessor marking the document as relevant (yielding 15 documents). From these scores we can see there is no significant statistical difference between the ~,=1, ~,=.7, and 3.=.3 scores. This is often explained by cases where the L=l article failed to pick up a piece of relevant information and the reranking of k=.7 or .3 might or vice versa. The baseline (baseln) contains the first N sentences of the document, where N is the number of sentences in the summary.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 290, |
| "end": 297, |
| "text": "Table 2", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MMR (Diversity Gain)", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "We expanded the original queries by: (1) adding the highest ranked sentence of the document (a form of pseudo-relevance feedback), (2) adding the title, and (3) adding the title and the highest ranked sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Expansion", |
| "sec_num": "9.2" |
| }, |
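The three expansion variants can be sketched as simple concatenation of text segments; the actual system's term weighting may well differ, and the function name is ours.

```python
def expand_query(query, title=None, top_sentence=None):
    """Assumed sketch of the query-expansion variants described above:
    (1) add the highest ranked sentence (pseudo-relevance feedback),
    (2) add the title, (3) add both."""
    parts = [query]
    if title:
        parts.append(title)          # expansion variant (2)
    if top_sentence:
        parts.append(top_sentence)   # expansion variant (1), prf
    return " ".join(parts)
```

The expanded string would then be re-tokenized and matched against candidate sentences exactly as the original query was.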
| { |
| "text": "The most significant effects were shown for short queries (see Figures 7, 9 ). For the longer queries, the effect was less (see Figures 8, 10) . For 20% document length (characters rounded up to the sentence boundary) adding the highest ranked sentence (prf) and title to the query helps performance for the 110 set relevant summary judgments (Figures 7, 8) . For 10% document length, for short queries just adding the title performed better than prf and the title (Figures 9,10 ).", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 138, |
| "text": "Figures 8,", |
| "ref_id": null |
| }, |
| { |
| "start": 139, |
| "end": 142, |
| "text": "10)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 63, |
| "end": 75, |
| "text": "Figures 7, 9", |
| "ref_id": null |
| }, |
| { |
| "start": 343, |
| "end": 357, |
| "text": "(Figures 7, 8)", |
| "ref_id": null |
| }, |
| { |
| "start": 465, |
| "end": 478, |
| "text": "(Figures 9,10", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Query Expansion", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "We will determine if these results hold over more extensive data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Query Expansion", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "These results are similar to those obtained for document information retrieval [27] . Since 72% of the first sentences were marked relevant (Table 3) , one area we plan to explore is results using the first sentence in the summary and/or query under specified circumstances, such as our first sentence heuristics (Section 4).", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 83, |
| "text": "[27]", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 149, |
| "text": "(Table 3)", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Query Expansion", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "An important evaluation criteria for summarization is what is the ideal summary output length (compression of the document) and how does it affects the user's task. To begin looking at this issue, we evaluated the performance of our system at different summary lengths as a percentage of the document length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "We used a document compression factor based on the number of characters in the document. If this cutoff fell in the middle of a sentence the rest of the sentence was allowed, thus the output summary ends up being slightly longer than the actually compression factor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
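The character-based cutoff with rounding up to a sentence boundary can be sketched as follows (an illustrative helper, not the authors' code):

```python
def truncate_to_compression(sentences, factor):
    """Keep ranked sentences until the character budget
    (factor * document length) is reached; the sentence straddling
    the cutoff is kept whole, so the summary may run slightly long."""
    doc_len = sum(len(s) for s in sentences)
    budget = factor * doc_len
    out, used = [], 0
    for s in sentences:
        if used >= budget:
            break
        out.append(s)          # round up to the sentence boundary
        used += len(s)
    return out
```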
| { |
| "text": "The data set statistics are shown in Tables 3 and 4 . Note that non-relevant documents (Table 4) still have a high percentage of relevant sentences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 51, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 87, |
| "end": 96, |
| "text": "(Table 4)", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "Ten documents in the 110 set were non-relevant and had no relevant sentences. We also see that the summary length or number of relevant sentences chosen per document varies significantly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "Summaries were compared using the modified interpolated normalized recall-precision curve as previously described (Section 8.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "In Figure 11 , we examine the effect of compression on normalized recall and precision and in Figure 12 , we show a plot of normalized F1. This F1 graph indicates that the normalized F1 score is helped by having the pseudo-relevance feedback and title in the query thereby extracting relevant sentences that would otherwise be missed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 12, |
| "text": "Figure 11", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 94, |
| "end": 103, |
| "text": "Figure 12", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "As the number of sentences that are allowed in the summary grows, the difficulty of finding relevant sentences grows and thus the added prf sentence and title to the query help to find relevant sentences for their particular document. We need to do more studying on the effects of query expansion and compression on summarization, as well as see how our preliminary results hold for additional data sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "If we calculate the normalized F1 score for the first sentence retrieved in the summary, we obtain a score of .80 for 110 Set standard query, .67 for 110 Set short query and .79 for the Model Summaries. This indicates that even for the short query we obtain a relevant sentence two thirds of the time. However, ideally this first sentence retrieval score would be 1.0 and we will explore methods to increase this score as well as select a \"highly relevant\" first retrieved sentence for the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compression", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "We have shown that MMR ranking provides a useful and beneficial manner of providing information to the user by allowing the user to minimize redundancy. This is especially true in the case of query-relevant multi-document summarization in this one data collection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": "10." |
| }, |
| { |
| "text": "We are currently performing studies on how this extends to additional document collections. In the future we will also be investigating how to handle co-reference in our system as well as analyzing the most suitable ~, par/maeters and clustering the output results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Text Summarization is still in the infant stage in terms of evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Many monolingual document information retrieval results can be applied to text summarization, but as of yet, there has been little evaluation of these techniques. This pilot experiment showed many areas that need to be examined in further detail, including whether the summary selects the most relevant sentences in the document and whether these results generalize to more data sets and other document genres. We also plan to explore further the effects of query expansion using WordNet, as well as the use the first sentence (for news stories) in the query and/or summary. We also plan to run experiments fixing the number of sentences for each document as the number of relevant sentences chosen by the assessors as well as a small number, such as three. We are currently in the process of building a more extensive sentence relevance database for further evaluation. In this database, we are collecting data on the user selected most relevant sentence(s) for each document. We also plan to explore how to join the relevant sections to provide a \"good\", understandable, readable, relevant, non-redundant summary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": "10." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": ".o ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "C~", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Implementation of the SMART", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Buckley, Implementation of the SMART", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Use of MMR, Diversity-Based Reranking for Reordering Dotalments and Producing Summaries", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "G" |
| ], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Goldstein", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "InProceedings of SIGIR 98", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.G. Carbonell, and J. Goldstein, The Use of MMR, Diversity-Based Reranking for Reordering Dotalments and Producing Summaries, InProceedings of SIGIR 98,", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "lingual Interactive Document SummariT~rion, AAAI Intelligent Text Summarization Workshop", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Cowie", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Mahes", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Nirenbug", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zajae", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Minds --Multi", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "131--1328", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Cowie, K. Mahes, S. Nirenbug, R. Zajae, MINDS -- Multi-lingual Interactive Document SummariT~rion, AAAI Intelligent Text Summarization Workshop, p. 131-1328, Stanford, CA March 1998", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A Proposal for Task-Based Evaluation of Text Summarization Systems In ACIIIEACIL", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "F" |
| ], |
| "last": "Hand", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T.F. Hand, A Proposal for Task-Based Evaluation of Text Summarization Systems In ACIIIEACIL,-97", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automated Text Summarization in SUMMARIST", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "ACL/EACL-97 Summarization Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "18--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Hovy and C.Y. Lin, Automated Text Summarization in SUMMARIST, In ACL/EACL-97 Summarization Workshop, 18-24, Madrid, Spain July 1997", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Summarization Evaluation Methods E~iments and Analysis, AAAI Intelligent Text Summarization Workshop", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "Nil", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "60--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Jing, R. Barzilay, K. McKeown, NIl. Elhadad, Summarization Evaluation Methods E~iments and Analysis, AAAI Intelligent Text Summarization Workshop, p. 60-68, Stanford, CA March 1998", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Evaluation Natural Language Processing Systems: an Analysis and Review", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "S" |
| ], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Galliers", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K.S. Jones and J.R. Galliers, Evaluation Natural Language Processing Systems: an Analysis and Review. New York: Springer 1996", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Lexical Semantics in Summarization", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Klavans", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the First Annual Workshop of the IFIP Working Group FOR NLP and KR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.L Klavans and J. Shaw, Lexical Semantics in Summarization, In Proceedings of the First Annual Workshop of the IFIP Working Group FOR NLP and KR, Nantes, France, April 1995.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Information Retrieval Systems: Theory and Implementation", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Kowalski", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Kowalski, Information Retrieval Systems: Theory and Implementation, Kluwer Academic Publishers, 1997.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Proceedings of the 18th Annual Int. ACM/SIGIR Conference on Research and Development in IR", |
| "authors": [ |
| { |
| "first": "Trainable Document", |
| "middle": [], |
| "last": "Summarizer", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "68--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Trainable Document Summarizer, In Proceedings of the 18th Annual Int. ACM/SIGIR Conference on Research and Development in IR, Seattle, WA, July 1995, pp. 68-73.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Language-Oriented Information Retrieval", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Bhandaru", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "International Journal oflntelligent Systems", |
| "volume": "4", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.D. Lewis, B. Croft, B., and N. Bhandaru, \"Language-Oriented Information Retrieval,\" International Journal oflntelligent Systems, Vol 4 (3), Fall 1989.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Automatic Creation of Literature Abstracts", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "P" |
| ], |
| "last": "Luhn", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "IBM Journal", |
| "volume": "", |
| "issue": "", |
| "pages": "159--165", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H.P. Luhn, Automatic Creation of Literature Abstracts, IBM Journal, 1958, pp. 159-165.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Retrieval Performance in FERRET: A Conceptual Conference on Research and Development in Information Retrieval", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mauldin", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 14th International Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.L Mauldin, Retrieval Performance in FERRET: A Conceptual Conference on Research and Development in Information Retrieval, Proceedings of the 14th International Conference on Research and Development in Information Retrieval, October 1991.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Web Agent Related Research at the Center for Machine Translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mauldin", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Leavitt", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of SIGNIDR V", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.L. Mauldin and J.R. Leavitt, Web Agent Related Research at the Center for Machine Translation. In Proceedings of SIGNIDR V, McLean Virginia, August 1994.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Empirically Designing and Evaluating a New Revision-based Model for Summary Generation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Robin", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Kukich", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Information Processing and Management", |
| "volume": "31", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. McKeown, J. Robin, and K. Kukich, Empirically Designing and Evaluating a New Revision-based Model for Summary Generation. In Information Processing and Management, 31 (5) 1995.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Automatic Text Summarization by Paragraph Extraction", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Singhal", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "ACL/EACL-97", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Mitra, A. Singhal and C. Buckley, Automatic Text Summarization by Paragraph Extraction, In ACL/EACL-97", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic Generation of Literature Abstracts -An Approach Based on the Indification of Self-Indicated Phrases", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Paice", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.D. Paice, Automatic Generation of Literature Abstracts -An Approach Based on the Indification of Self- Indicated Phrases, in Information Retrieval Research, R.N.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Butterworths, London", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "172--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Williams, editors, Butterworths, London, 1981, 172-191.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Constructing Literature Abstracts by Computer: Techniques and Prospects", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Paice", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Information Processing and Management", |
| "volume": "26", |
| "issue": "", |
| "pages": "171--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.D. Paice, Constructing Literature Abstracts by Computer: Techniques and Prospects, In Information Processing and Management, Vol. 26, 1990, pp. 171-186.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Improving retrieval performance by relevance feedback", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton G", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of American Society for Information Sciences", |
| "volume": "41", |
| "issue": "", |
| "pages": "288--297", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Salton G and C. Buckley Improving retrieval performance by relevance feedback. Journal of American Society for Information Sciences, 41:288-297, 1990. [20].", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Salton Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer, Addison-Wesley 1989.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Introduction to Modern Information Retrieval", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Saiton", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mcgill", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Saiton and M.J. McGill, Introduction to Modern Information Retrieval, McGraw-Hill, New York, McGraw- Hill Computer Science Series, 1983.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Automatic Text Structuring and Summarization", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Singhal", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Salton, A. Singhal, M. Mitra,. and C. Buckley, Automatic Text Structuring and Summarization,", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A Robust Practical Text Summarization, AAAI Intelligent Text Summarization Workshop", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Strzalkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Wise", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "26--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Strzalkowski, J. Wang, and B. Wise, A Robust Practical Text Summarization, AAAI Intelligent Text Summarization Workshop, p. 26-3, Stanford, CA March 1998.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Automatic Summarizing of English Texts", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "I" |
| ], |
| "last": "Tait", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.I. Tait, Automatic Summarizing of English Texts, PhD dissertation, University of Cambridge, 1983.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "TIPSTER Text Phase II1 18-Month Workshop", |
| "authors": [], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "TIPSTER Text Phase II1 18-Month Workshop, Fairfax, VA 4-6 May, 1988,", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Query expansion using local and global document analysis in", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "19th Ann Int ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '96)", |
| "volume": "", |
| "issue": "", |
| "pages": "4--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Xu and B. Croft. Query expansion using local and global document analysis in 19th Ann Int ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '96), pages 4-11, 1996.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Using Wordnet to disambiguate words senses for text retrieval", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "M" |
| ], |
| "last": "Vorhees", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings ofACM SIGIR Conference ( SIGIR '93)pages", |
| "volume": "", |
| "issue": "", |
| "pages": "171--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E.M. Vorhees. Using Wordnet to disambiguate words senses for text retrieval. In Proceedings ofACM SIGIR Conference ( SIGIR '93)pages, 171-180, 1993.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Generic MMR-generated summary of dissertation.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Topic and Query for Tipster Topic 110", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "~ =l.0 Multi Document Summarization [Rank] Document ID [Sentence Number] Sentence [1] [1] [761] AP880212-0060", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "~ =.3 Multi Document Summarization. [Rank] [Previous Rank in X = 1.0 Version] Document ID [Sentence Number] Sentence <TITLE>Angola Rejects South African Proposal for Peace Talks </TITLE> <TEXT> [1] Angola has rejected a South Afncan proposal for a regional peace conference that would include Angolan rebels, Angola's official ANGOP news agency reported Friday.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "of Relevant Sentences in Document RelSum = Number of Relevant Sentences in Summary SentSum = Number of Sentences in Summary Definitions: Precision P = RelSum / SentSum Recall R = RelSum / Rel F1 = 2P*R / (P + R) NorR = RelSum / rain (Rel, SentSum) NorF1 = 2P*NorR/(P+NorR)", |
| "num": null |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "--..::::::~:-,:~---~,,~ ~ -~....~\",~ first 20% of document --~-.-......... ~',~ \". \"~.~ \"'::\"\":::\" ............. X.. -\"~N \\\\ \"'~ ~4--~~-\"'\" \" ...... .... I::~: : :~:-, \"'\"-X ..................... -X ............. '\"'\"'~\"-. \"\"'-, ~'~ \"~'~. ~ \"19\" ......... Query Expansion Effects -SMART weighting Inn, 20% document length docs, query o ......... ~_ _ 110 Set relevants docs, query+prf+title -4----........... ,~.,.~ -.... ~ 110 Set relevants docs, short query -E]-i:---'.-.:::::~-...~S-.:.,..~ ~.......^ 110 Set relevants docs, short query+pff+title --x ...... ~\"--..:~ ..... \"-.. v ~ Model Summs query --~---.................... .x ...................... x.:.~i-,,~'~i: L;~. \"'-+ ........ -+,~, Model_Summs-q ue ry+pff+title -~-.-Q, \"<,, ....... ~\" -+, \"\u2022% ~'~ ~ \"'\"''X ............... \"\"~. ~ , ......... .......... -.,. [] .......... .~..:~:....... L:~:::E.E.LS.~.LT_:L:L.::~ Query Expansion Effects -110 Set, SMART weighting Inn, 10% document length, relevant documents only short query+prf+title .-x ...... first 1~/,~ of document -~---0.8 .:' ..-..~ ::..-.::..-..~ .T.~..-.::::..~ :.T.7.::_~-r..-. r.c --= ~\"..--~ --+ ........ --w ' .......... -~ ~ ~ .......... ~~c ....... ~;ii '~-\"R~?\" \\k ........ E} ........... E3 -~-'~. ;:-\\-.13].......... ~3 .......... Query Expansion Effects -SMART weighting Inn, 10% document length, relevant documents only 1 Set relevant docs, query o 110 Set relevant docs, query+prf+title -~-......... ~ ......... ~.~--.--....~,....... 110 Set relevant docs, short query -B--F:Z:Z'.'Z':-~7~TL-:L'~'--:::~:C'7~.~.c_:~,~,__ . 1 10 Set relevant docs, short query+prf+title --x ...... t .......... ~ .......... E} .......... B.-\"~'<'--.~_. ....... T-tO, Model Summs query --A-.-~-.................... -x ...................... x ........... ~.~.~......x.~.::\"-'-':.'~ \"~-.......... ~\",C,~, Model_Summs-query+prf+title -~-.-\u2022 .......... ;;\" ........ --'~:~. \"~\".C,~. 'k',, \"\"X ~. \" ............... 
-.... '~\"~''''\"'\"E~ ......... ~ ..................... X ..................... \"'~.~ \"\" \" ~'__-.~... = ~ __. __ ~ ......... ~ .......... \"A ......... ~'L ~ ~-.~-.~.~-.~-.-~.~-.~.~.~.~.", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "SEEN AS VANGUARD FOR CHANGING DEBT STRATEGY FUNARO REJECTS UK SUGGESTION OF IMF BRAZIL PLAN ECONOMIC SPOTLIGHT -BRAZIL DEBT DEADLINES LOOM U.S. URGED TO STRENGTHEN DEBT STRATEGY U.S, URGES BANKS TO DEVELOP NEW 3RD WLD FINANCE FUNARO'S DEPARTURE COULD LEAD TO BRAZIL DEBT DEAL U.S. OFFICIALS SAY BRAZIL SHOULD DEAL WITH BANKS BRAZIL SEEKS TO REASSURE BANKS ON DEBT SUSPENSION BRAZIL SEEKS TO REASSURE BANKS ON DEBT SUSPENSION BRAZIL CRITICISES ADVISORY COMMITTEE STRUCTURE LATIN DEBTORS MAKE NEW PUSH FOR DEBT RELIE BRAZIL DEBT SEEN PARTNER TO HARD SELL TACTICS BRAZIL DEBT POSES THORNY ISSUE FOR U.S. BANKS U.S. URGES BANKS TO WEIGH PHILIPPINE DEBT PLAN U.K. SAYS HAS NO ROLE IN BRAZIL MORATORIUM TALKS", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td>I</td><td>0.7</td><td>0.3</td></tr><tr><td>BRAZIL TALKING POINT/BANK STOCKS</td><td>76 1308 1431 104 50 2149 1713 1388 1403 1291 32 99 54 44 1293 53</td><td>76 1308 1431 2149 104 1388 1293 1713 50 133 1291 99 14 54 32 69</td><td>76 1293 1308 133 14 1388 1762 2149 69 1713 104 1431 99 1291 ,54 44</td></tr><tr><td>CANADA BANKS COULD SEE PRESSURE ON BRAZIL LOANS</td><td>1762</td><td>1762</td><td>32</td></tr><tr><td>TREASURY'S BAKER SAYS BRAZIL NOT tN CRISIS</td><td>133</td><td>44</td><td>5O</td></tr><tr><td>BRAZIL'S DEBT CRISIS BECOMING POLITICAL CRISIS</td><td>14</td><td>1403</td><td>1403</td></tr><tr><td>BAKER AND VOLCKER SAY DEBT STRATEGY WILL WORK</td><td>69</td><td>53</td><td>53</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>: Initial Relevance Ranking (~, = 1) vs. MMR reranking (~L = .7 & X = .3)</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Black Resistance Against South Afdcan Government black majority South Africa overthrow domination white minority government blacks force political change South Africa black challenge apartheid military political economic activities armed personnel African National Congress (ANC) South Africa bordering states African National Congress ANC Nelson Mandela Oliver Tambo Chief Buthelezi Inkatha Zulu terrorist detainee subversive communist Limpopo River Angola Botswana Mozambique Zambia apartheid black township homelands group areas act emergency regulations", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>ANC), either in South</td></tr><tr><td>Africa or in bordering states.</td></tr><tr><td><con> Concept(s):</td></tr><tr><td>1. African National Congress, ANC, Nelson Mandela, Oliver</td></tr><tr><td>Tambo</td></tr><tr><td>2. Chief Buthelezi, Inkatha, Zulu</td></tr><tr><td>3. terrorist, detainee, subversive, communist</td></tr><tr><td>4. Limpopo River, Angola, Botswana, Mozambique, Zambia</td></tr><tr><td>5. apartheid, black township, homelands, group areas act,</td></tr><tr><td>emergency regulations</td></tr><tr><td>Query:</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "text": "2][2] [758] AP880803-0080[25] Three Canadian antiapartheid groups issued a statement urging the government to sever diplomatic and economic links with South Africa and aid the African National Congress, the banned group fighting the white-dominated government in South Africa.", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>[3] [6] [92] WSJ910204-0176 [2] de Klerk's proposal to</td></tr><tr><td>repeal the major pillars of apartheid drew a generally</td></tr><tr><td>positive response from black leaders, but African National</td></tr><tr><td>Congress leader Nelson Mandela called on the intemational</td></tr><tr><td>community to continue economic sanctions against South</td></tr><tr><td>Afdca until the government takes further steps.</td></tr><tr><td>[4] [8] [375] WSJ890908-0159 [24] For everywhere he</td></tr><tr><td>turns, he hears the same mantra of demands --release, lift</td></tr><tr><td>bans, dismantle, negotiate --be it from local anti-apartheid</td></tr><tr><td>activists or from foreign governments: release political</td></tr><tr><td>prisoners, like African National Congress leader Nelson</td></tr><tr><td>Mandela; lift bans on all political organizations, such as the</td></tr><tr><td>ANC, the Pan Afncanist Congress and the United</td></tr><tr><td>Democratic Front; dismantle all apartheid legislation; and</td></tr><tr><td>finally, begin negotiations with leaders of all races.</td></tr><tr><td>[5] [4] [790] AP880802-0165 [27] South Africa says the</td></tr><tr><td>ANC, the main black group fighting to overthrow South</td></tr><tr><td>Africa's white government, has seven major military bases in</td></tr><tr><td>Angola, and the Pretoria government wants those bases</td></tr><tr><td>closed down.</td></tr><tr><td>[6] [11] [334] AP890703-0114 [14] The white delegation</td></tr><tr><td>chief, Mike Olivier, said the ANC members, including</td></tr><tr><td>President Oliver Tambo and South African Communist Party</td></tr><tr><td>leader Joe Slovo, said some white anti-apartheid members</td></tr><tr><td>of Parliament could make a difference, although the</td></tr><tr><td>organization believes Parliament as a whole is not</td></tr><tr><td>representative of South Africans.</td></tr><tr><td>[7] [14] [788] WSJ880323-0129</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "text": "Precision Scores", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td></td><td>Model Summaries</td><td>110 Set</td></tr><tr><td>task</td><td>Q&A</td><td>indicative summaries</td></tr><tr><td>number of documents</td><td>48</td><td>50</td></tr><tr><td>source</td><td>provided by Tipster</td><td>3 people marked each sentence</td></tr><tr><td>relevant documents</td><td>all</td><td>15</td></tr><tr><td>average sentences per document</td><td>22.6</td><td>25.1</td></tr><tr><td>median sentences per document</td><td>19</td><td>23</td></tr><tr><td>maximum sentences per document</td><td>51</td><td>50</td></tr><tr><td>minimum sentences per document</td><td>11</td><td>11</td></tr><tr><td>query formation</td><td>provided questions</td><td>topic</td></tr><tr><td>statistics</td><td>all documents</td><td>40 documents</td></tr><tr><td>percent of document length</td><td>19.4%</td><td>24.9%</td></tr><tr><td>summary includes first sentence</td><td>72%</td><td>47%, 73% (only relevant docs)</td></tr><tr><td>average summary size (sentences)</td><td>4.3</td><td>6.1</td></tr><tr><td>median summary size (sentences)</td><td>4</td><td>5</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "text": "Data Set Comparison", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td></td><td>Relevant Documents</td><td>Non-Relevant Documents</td></tr><tr><td>number of documents</td><td>15</td><td>25</td></tr><tr><td>average sentences per document</td><td>27.5</td><td>23.8</td></tr><tr><td>median sentences per document</td><td>23</td><td>23</td></tr><tr><td>maximum sentences per document</td><td>51</td><td>44</td></tr><tr><td>minimum sentences per document</td><td>15</td><td>11</td></tr><tr><td>percent of document length</td><td>36.2%</td><td>17.7%</td></tr><tr><td>summary includes first sentence</td><td>73%</td><td>32%</td></tr><tr><td>average summary size (sentences)</td><td>10.1</td><td>3.7</td></tr><tr><td>median summary size (sentences)</td><td>7</td><td>4</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "text": "110 Set - Relevant vs. Non-Relevant Documents with relevant sentences.", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |