| { |
| "paper_id": "W00-0404", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T05:35:11.669353Z" |
| }, |
| "title": "Extracting Key Paragraph based on Topic and Event Detection --Towards Multi-Document Summarization", |
| "authors": [ |
| { |
| "first": "Fumiyo", |
| "middle": [], |
| "last": "Fukumoto", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yamanashi University", |
| "location": { |
| "postCode": "4-3-11, 400-8511", |
| "settlement": "Takeda, Kofu", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yoshimi", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yamanashi University", |
| "location": { |
| "postCode": "4-3-11, 400-8511", |
| "settlement": "Takeda, Kofu", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "This paper proposes a method for extracting key paragraphs for multi-document summarization based on the distinction between a topic and an event. A topic and an event are identified using a simple criterion called domain dependency of words. The method was tested on the TDT1 corpus, which has been developed by the TDT Pilot Study, and the result can be regarded as promising: the idea of domain dependency of words is effectively employed.",
| "pdf_parse": { |
| "paper_id": "W00-0404", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "This paper proposes a method for extracting key paragraphs for multi-document summarization based on the distinction between a topic and an event. A topic and an event are identified using a simple criterion called domain dependency of words. The method was tested on the TDT1 corpus, which has been developed by the TDT Pilot Study, and the result can be regarded as promising: the idea of domain dependency of words is effectively employed.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "As the volume of online documents has drastically increased, summarization techniques have become very important in IR and NLP studies. Most of the summarization work has focused on a single document. This paper focuses on multi-document summarization: broadcast news documents about the same topic. One of the major problems in the multi-document summarization task is how to identify differences and similarities across documents. This can be interpreted as a question of how to make a clear distinction between an event and a topic in documents. Here, an event is the subject of a document itself, i.e. what a writer wants to express, in other words, the notions of who, what, where, when, why and how in a document. On the other hand, a topic in this paper is some unique thing that happens at some specific time and place, along with its unavoidable consequences.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "It becomes background among documents. For example, in the documents of 'Kobe Japan quake', the event includes early reports of damage, location and nature of the quake, rescue efforts, consequences of the quake, and on-site reports, while the topic is the Kobe Japan quake. The well-known past experience from IR that notions of who, what, where, when, why and how may not make a great contribution to the topic detection and tracking task (Allan and Papka, 1998) causes this fact, i.e. a topic and an event are different from each other 1 .",
| "cite_spans": [ |
| { |
| "start": 437, |
| "end": 460, |
| "text": "(Allan and Papka, 1998)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "1 Some topic words can also be an event. For instance, in the document shown in Figure 1 , 'Japan' and 'quake' are topic words and also event words in the document. However, we regarded these words as a topic, i.e. not as an event.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 88, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we propose a method for extracting key paragraphs for multi-document summarization based on the distinction between a topic and an event. We use a simple criterion called domain dependency of words as a solution and present how the idea of domain dependency of words can be utilized effectively to identify a topic and an event, and thus allow multi-document summarization.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The basic idea of our approach is that whether a word appearing in a document is a topic (an event) or not depends on the domain to which the document belongs. Let us take a look at the following document from the TDT1 corpus. Figure 1 is the document whose topic is 'Kobe Japan quake', and the subject of the document (event words) is 'Two Americans known dead in Japan quake'. Underlined words denote a topic, and the words marked with '[ ]' are events. '1,...,7' in Figure 1 are paragraph ids. Like Luhn's technique of keyword extraction, our method assumes that an event associated with a document appears throughout paragraphs (Luhn, 1958) , but a topic does not. This is because an event is the subject of a document itself, while a topic is an event along with all directly related events. In Figure 1 , the event words 'Americans' and 'U.S.', for instance, appear across paragraphs, while a topic word, for example 'Kobe', appears only in the third paragraph. Let us consider further a broad coverage domain which consists of a small number of sample news documents about the same topic, 'Kobe Japan quake'. [Figure 3: '(1-3) Kobe quake leaves questions about medical system', a document reporting that the earthquake that devastated Kobe in January raised serious questions about the efficiency of Japan's emergency medical system, according to a government report on health and welfare.] Underlined words in Figure 2 and 3 show the topic of these documents. In these two documents, 'Kobe', which is a topic, appears in every document, while 'Americans' and 'U.S.', which are events of the document shown in Figure 1 , do not appear. Our technique for making the distinction between a topic and an event explicitly exploits this feature of the domain dependency of words: how strongly a word features a given set of data. The rest of the paper is organized as follows. The next section provides the domain dependency of words, which is used to identify a topic and an event for broadcast news documents. We then present a method for extracting topic and event words, and describe a paragraph-based summarization algorithm using the result of topic and event extraction. Finally, we report some experiments using the TDT1 corpus, which has been developed by the TDT (Topic Detection and Tracking) Pilot Study (Allan and Carbonell, 1998) with a discussion of evaluation.",
| "cite_spans": [ |
| { |
| "start": 629, |
| "end": 641, |
| "text": "(Luhn, 1958)", |
| "ref_id": null |
| }, |
| { |
| "start": 2441, |
| "end": 2468, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 227, |
| "end": 235, |
| "text": "Figure I", |
| "ref_id": null |
| }, |
| { |
| "start": 798, |
| "end": 806, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1548, |
| "end": 1556, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 1744, |
| "end": 1752, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The domain dependency of words, i.e. how strongly a word features a given set of data (documents), contributes to event extraction, as we previously reported (Fukumoto et al.: 1997) . In that study, we hypothesized that the articles from the Wall Street Journal corpus can be structured into three levels, i.e. Domain, Article and Paragraph. If a word is an event in a given article, it satisfies two conditions: (1) The dispersion value of the word in the Paragraph level is smaller than that in the Article level, since the word appears throughout paragraphs in the Paragraph level rather than articles in the Article level.",
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 179, |
| "text": "(Fukumoto et al.: 1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
"text": "(2) The dispersion value of the word in the Article level is smaller than that in the Domain level, as the word appears across articles rather than domains. However, there are two problems in adapting this method to the multi-document summarization task. The first is that the method extracts only events in the document, because the goal of that study was to summarize a single document, and thus there is no answer to the question of how to identify differences and similarities across documents. The second is that the performance of the method greatly depends on the structure of the given data itself. Like the Wall Street Journal corpus, (i) if the given data can be structured into three levels, Paragraph, Article and Domain, each of which consists of several paragraphs, articles and domains, respectively, and (ii) if the Domain level consists of different subject domains, such as 'aerospace', 'environment' and 'stock market', the method works with satisfactory accuracy. However, there is no guarantee that such an appropriate structure can be made from a given set of documents in the multi-document summarization task.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
"text": "The purpose of this paper is to define the domain dependency of words for a number of sample documents about the same topic, and thus for the multi-document summarization task. Figure 4 illustrates the structure of broadcast news documents which have been developed by the TDT (Topic Detection and Tracking) Pilot Study (Allan and Carbonell, 1998) . It consists of two levels, Paragraph and Document. In the Document level, there is a small number of sample news documents about the same topic. These documents are arranged in chronological order, such as '(1-1) Quake collapses buildings in central Japan ( Figure 2 )', '(1-2) Two Americans known dead in Japan quake ( Figure 1 )' and '(1-3) Kobe quake leaves questions about medical system (Figure 3) '. Given the structure shown in Figure 4 , how can we identify every word in document (1-2) as an event, a topic or a general word? Our method assumes that an event associated with a document appears across paragraphs, but a topic word does not. Then, we use the domain dependency of words to extract event and topic words in document (1-2). The domain dependency of words is a measure showing how greatly each word features a given set of data. In Figure 4 , let 'O', 'A' and 'x' denote a topic, an event and a general word in document (1-2), respectively. We recall the example shown in Figure 1 . 'A', for instance 'U.S.', appears across paragraphs. However, in the Document level, 'A' frequently appears in document (1-2) itself. On the basis of this example, we hypothesize that if word i is an event, it satisfies the following condition:",
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 338, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 176, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 596, |
| "end": 604, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 658, |
| "end": 666, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 730, |
| "end": 740, |
| "text": "(Figure 3)", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 773, |
| "end": 781, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 1185, |
| "end": 1193, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 1327, |
| "end": 1335, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
"text": "[Figure 4: Structure of the domain. The Document level consists of documents i=1,...,m; each document expands into a Paragraph level, with topic ('O'), event ('A') and general ('x') words marked.]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
| "text": "[1] Word i greatly depends on a particular document in the Document level rather than a particular paragraph in the Paragraph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
"text": "Next, we turn to identifying the remaining words as a topic or a general word. In Figure 5 , a topic of documents (1-1) ~ (1-3), for instance 'Kobe', appears in a particular paragraph in each level of Paragraph1, Paragraph2 and Paragraph3. Here, (1-1), (1-2) and (1-3) correspond to Paragraph1, Paragraph2 and Paragraph3, respectively. On the other hand, in the Document level, a topic frequently appears across documents. Then, we hypothesize that if word i is a",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 91, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Domain Dependency of Words", |
| "sec_num": "2" |
| }, |
| { |
"text": "topic, it satisfies the following condition:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(H)", |
| "sec_num": "33" |
| }, |
| { |
"text": "[Figure 5: The same structure viewed for a topic: Paragraph1, Paragraph2 and Paragraph3 levels (for documents (1-1), (1-2) and (1-3)) under the Document level, with topic, event and general words marked.]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(H)", |
| "sec_num": "33" |
| }, |
| { |
| "text": "[2] Word i greatly depends on a particular paragraph in each Paragraph level rather than a particular document in Document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(H)", |
| "sec_num": "33" |
| }, |
| { |
"text": "We hypothesized that the domain dependency of words is a key clue for making a distinction between a topic and an event. This can be broken down into two observations: (i) whether a word appears across paragraphs (documents), (ii) whether or not a word appears frequently. We represented the former by using a dispersion value, and the latter by a deviation value. Topic and event words are extracted by using these values. The first step to extract topic and event words is to assign a weight to each individual word in a document. We applied TF*IDF to each level of the Document and Paragraph, i.e. Paragraph1, Paragraph2 and Paragraph3.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "Wdit = TFdit * log(N / Ndt) (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "Wdit in formula (1) is the TF*IDF of term t in the i-th document. In a similar way, Wpit denotes the TF*IDF of term t in the i-th paragraph. TFdit in (1) denotes the term frequency of t in the i-th document. N is the number of documents and Ndt is the number of documents where t occurs. The second step is to calculate the domain dependency of words. We defined it by using formulas (2) and (3).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "DispDt = sqrt((1/m) * sum_{i=1}^{m} (Wdit - meant)^2) (2)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "Devdit = ((Wdit - meant) / DispDt) * 10 + 50",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Topic and Event Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "Formula (2) is the dispersion value of term t in the Document level, which consists of m documents, and denotes how frequently t appears across documents.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DispDt", |
| "sec_num": null |
| }, |
| { |
"text": "In a similar way, DispPt denotes the dispersion of term t in the Paragraph level. Formula (3) is the deviation value of t in the i-th document and denotes how frequently it appears in a particular document, the i-th document. Devpit is the deviation of term t in the i-th paragraph. In (2) and (3), meant is the mean of the total TF*IDF values of term t in the Document level. The last step is to extract a topic and an event using formulas (2) and (3). We recall that if t is an event, it satisfies [1] described in section 2. This is shown by using formulas (4) and (5).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DispDt", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "DispPt < DispDt (4); for all pj in di: Devpjt < Devdit",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "DispDt", |
| "sec_num": null |
| }, |
| { |
"text": "Formula (4) shows that t frequently appears across paragraphs rather than documents. In formula (5), di is the i-th document and consists of n paragraphs (see Figure 4 ). pj is an element of di. (5) shows that t frequently appears in the i-th document di rather than in the paragraphs pj (1 <= j <= n). On the other hand, if t satisfies formulas (6) and (7), then we propose t as a topic.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 171, |
| "end": 179, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "DispDt", |
| "sec_num": null |
| }, |
| { |
"text": "DispPt > DispDt (6); for all di in D, there exists pj such that Devpjt >= Devdit (7). In formula (7), D consists of m documents (see Figure 5 ). (7) denotes that t frequently appears in the particular paragraph pj rather than in the document di which includes pj.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 145, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "DispDt", |
| "sec_num": null |
| }, |
| { |
"text": "The summarization task in this paper is paragraph-based extraction. Basically, paragraphs which include not only event words but also topic words are considered to be significant paragraphs. The basic algorithm works as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
"text": "1. For each document, extract topic and event words.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "2. Determine the paragraph weights for all paragraphs in the documents:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(a) Compute the sum of topic weights over the total number of topic words for each paragraph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(b) Compute the sum of event weights over the total number of event words for each paragraph. A topic and an event weights are calculated by using Devdlt in formula (3). Here, t is a topic or an evcnt and i is the i-th document in the documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(c) Compute the sum of (a) and (b) for each paragraph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "3. Sort the paragraphs t~ccording to their weights and extract the N highest weighted paragrai~hs in documents in order to yield summarization of the documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
"text": "4. When their weights are the same, compute the sum of all the topic and event word weights and select the paragraph whose weight is higher than the others.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "4" |
| }, |
| { |
"text": "Evaluation of extracting key paragraphs based on multiple documents is difficult. First, we have not found an existing collection of summaries of multiple documents. Second, the manual effort needed to judge system output is far more extensive than for single-document summarization. Consequently, we focused on the TDT1 corpus. This is because (i) events have been defined to support the TDT study effort, and (ii) it was completely annotated with respect to these events (Allan and Carbonell, 1997) . Therefore, we do not need the manual effort to collect documents which discuss the target event. We report the results of three experiments. The first experiment, Event Extraction, is concerned with the event extraction technique. In the second experiment, Tracking Task, we applied the extracted topics to the tracking task (Allan and Carbonell, 1998) . The third experiment, Key Paragraph Extraction, is conducted to evaluate how the extracted topic and event words can be used effectively to extract key paragraphs.",
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 491, |
| "text": "(Allan and Carbonell, 1997)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 817, |
| "end": 844, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "The TDT1 corpus comprises a set of documents (15,863) that includes both newswire (Reuters, 7,965 documents) and a manual transcription of the broadcast news speech (CNN, 7,898 documents). A set of 25 target events were defined 2 . All documents were tagged by the tagger (Brill, 1992 ",
| "cite_spans": [ |
| { |
| "start": 260, |
| "end": 272, |
| "text": "(Brill, 1992", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": null |
| }, |
| { |
"text": "We collected 300 documents from the TDT1 corpus, each of which is annotated with respect to one of the 25 events. The result is shown in Table 1 . In Table 1 , 'Event type' illustrates the target events defined by the TDT Pilot Study. 'Doc' denotes the number of documents. 'Rec' (Recall) is the number of correct events divided by the total number of events which are selected by a human, and 'Prec' (Precision) stands for the number of correct events divided by the number of events which are selected by our method. The denominator of 'Rec' is made by a human judge. 'Accuracy' in Table 1 is the total average ratio.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 140, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 146, |
| "end": 153, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 576, |
| "end": 583, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": null |
| }, |
| { |
"text": "In Table 1 , recall and precision values range from 55.0/47.0 to 83.3/84.2, the average being 71.0/72.2. The worst result of recall and precision was when the event type was 'Serbs violate Bihac' (55.0/59.3). We currently hypothesize that this drop of accuracy is due to the fact that some documents are against our assumption of an event. Examining the documents whose event type is 'Serbs violate Bihac', 3 out of 16 documents (one from CNN and two from Reuters) discussed the same event, i.e. 'Bosnian Muslim enclave hit by heavy shelling'. As a result, the event appears across these three documents. Future research will shed more light on that.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": null |
| }, |
| { |
"text": "The tracking task in the TDT project starts from a few sample documents and finds all subsequent documents that discuss the same event (Allan and Carbonell, 1998) , (Carbonell et al., 1999) . The corpus is divided into two parts: a training set and a test set. Each of the documents is flagged as to whether it discusses the target event, and these flags ('YES', 'NO') are the only information used for training the system to correctly classify the target event. We applied the extracted topic to the tracking task under these conditions. The basic algorithm used in the experiment is as follows: Let S1, ..., Sm be all the other training documents (where m is the number of training documents which do not belong to the target event) and Sx be a test document which should be classified as to whether or not it discusses the target event. S1, ..., Sm and Sx are represented by term vectors as follows: Sx is judged to be a document that discusses the target event.",
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 165, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 168, |
| "end": 192, |
| "text": "(Carbonell et al., 1999)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracking Task", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "Si = (li1, li2, ..., lin), s.t. lij = f(tij) if tij appears in Si and is not a topic of Stp; lij = 0 otherwise (1 <= i <= m)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracking Task", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "We used the standard TDT evaluation measure. Table 2 illustrates the result.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tracking Task", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "The result is shown in Table 3 . 'Event' denotes event words in the first document in chronological order, and the title of the document is 'Emergency Work Continues After Earthquake in Japan'. Table 3 clearly demonstrates that the criterion, domain dependency of words, is effectively employed. 'Miss' means the miss rate, which is the ratio of the documents that were judged as YES but were not evaluated as YES for the run in question.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 189, |
| "end": 196, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tracking Task", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "'F/A' shows the false alarm rate and 'F1' is a measure that balances recall and precision. 'Rec' denotes the ratio of the documents judged YES that were also evaluated as YES, and 'Prec' is the percentage of the documents that were evaluated as YES which correspond to documents actually judged as YES. Table 2 shows that more training data helps the performance, as the best result was when we used Nt = 16. Table 3 illustrates the extracted topic and event words in a sample document. The topic is 'Kobe Japan quake' and the number of positive training documents is 4. 'Devpit', 'Devdit', 'DispPt' and 'DispDt' denote values calculated by using formulas",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 296, |
| "end": 303, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 403, |
| "end": 410, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Event type", |
| "sec_num": null |
| }, |
| { |
"text": "(2) and (3). Overall, the curves also show that more training helps the performance, while there is no significant difference among Nt = 2, 4 and 8.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event type", |
| "sec_num": null |
| }, |
| { |
"text": "We used 4 different sets as test data. Each set consists of 2, 4, 8 and 16 documents. For each set, we",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Key Paragraph Extraction",
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We collected 300 docmnents from the TDT1 corpus, each of which is annotated with respect to one of 25 events.' The result is shown in Table 1 . In Table 1 .. 'Event type' illustrates the target events defined by the TDT Pilot Study. ~Doc' denotes the number of documents. 'Rec' (Recall) is the nmnbet of correct events divided by the total number of events which are selected by a humaa, and :Pree ~ (Precision) stands for the number of correct-events divided by the number of events which are selected by our method. The denominator 'Rec: is made by a human judge. 'Accuracy' in Table 1 is the total average ratio.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 134, |
| "end": 141, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 147, |
| "end": 154, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 580, |
| "end": 587, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In Table 1 , recall and precision values range, from 55.0/47.0 to 83.3/84.2, the average being 71.0/72.2. The worst result of recall and precision was when event type was 'Serbs violate Bihac' (55.0/59.3). We currently hypothesize that this drop of accuracy is due to the fact that some documents are against our assumption of an event. Examining the ctocuments whose event type is 'Serbs violate Bihac', 3 ( one from CNN and two from Reuters) out of 16 documents has discussed the same evefit, i.e. 'Bosnian Muslim enclave hit by heavy shelling'. As a result, the event appears across these three documents. Future research will shed more light on that.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 1", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Tracking task in the TDT project is starting from a few sample documents and finding all subsequent documents that discuss the same event (Allan and Carbonell, 1998) , (Carbonell et al., 1999) . The corpus is divided into two parts: training set and test ~et. Each of the documents is flagged as to whether it discusses the target event, and these flags ('YES', 'NO') are the only information used tbr training the system to correctly classiC\" the target event. We applied the extracted topic to the tracking task under these conditions. The basic algorithm used in the \u2022 experiment is as follows: ", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 165, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 168, |
| "end": 192, |
| "text": "(Carbonell et al., 1999)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracking Task", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Let $1, ---, S,,, be all the other training documents (where m is the number of training documents which does not belong to the target event) and Sx be a test document which should be classified as to whether or not it discusses the target event. $1, \"--, Sm and Sx are represented \" by term vectors as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "~ = '\" { s.t. llj = f(t~j) ift 0 (1 < i <m)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "appears in S~ and not be a topic of ,5\"tp ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Si \u2022 S= Sim(Si, S~) = I Si II S~ I (S)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The greater the value of Sim (Si,S,) is, the more similar 5\"/ and Sz are. If the similarity value between the test document S, and the document Stp is largest among all the other pairs of documents, i,e. ($1, Sx), \" \", (Sin, S=),", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 36, |
| "text": "(Si,S,)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "S= is judged to be a document that discusses the target event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
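The tracking decision above (term-frequency vectors with the topic words of S_tp excluded, cosine similarity, and a nearest-neighbor YES/NO judgment) can be sketched as follows. This is a minimal sketch under the assumption that f(t_ij) is raw term frequency; the function and variable names are mine, not the paper's.

```python
import math
from collections import Counter

def vectorize(tokens, topic_words):
    """Term-frequency vector over tokens, excluding the topic words of S_tp."""
    return Counter(t for t in tokens if t not in topic_words)

def cosine(u, v):
    """Cosine similarity Sim(S_i, S_x) = (S_i . S_x) / (|S_i| |S_x|)."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def tracks_event(test_doc, event_doc, other_docs, topic_words):
    """YES iff the test document S_x is most similar to the target-event
    document S_tp among all training documents."""
    sx = vectorize(test_doc, topic_words)
    sim_event = cosine(vectorize(event_doc, topic_words), sx)
    return all(sim_event >= cosine(vectorize(d, topic_words), sx)
               for d in other_docs)
```

For example, a test document sharing event words with the target-event document is judged YES even when both share the excluded topic word.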
| { |
| "text": "We used the standard TDT evaluation measure 3 Table 2 illustrates the result\u2022 Table 3 , 'Event' denotes event words in the first rio of the documents that were, judged as YES but document in chronological order from .,X~ = 4, and not evaluated as YES for the run in question, the title of the document is 'Emergency Work Con-i were 'F/A' shows false Mann rate mad 'FI' is a measure tinues After Earthquake in Japan'. Table 3 clearly i that balances recall and precision. 'Rec' denotes the demonstrate~ that the criterion, domain dependency ratio of the documents judged YES that were also of words effectively employed, i evaluated as YES, and Tree' is the percent of the Figure 6 illustrates the DET (Detection Evalua-| documents that were evaluated as YES which corre-tion Tradeoff) curves for a sample event (event type spond to documents actually judged as YES.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 46, |
| "end": 53, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 78, |
| "end": 85, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 417, |
| "end": 424, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 672, |
| "end": 680, |
| "text": "Figure 6", |
| "ref_id": "FIGREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "is 'Comet into Jupiter') runs at several values of Art.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
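The evaluation measures described above ('Rec', 'Prec', false-alarm rate 'F/A', and the balanced 'F1') can be sketched as below. This is an illustrative reading of the definitions, not the official TDT scorer; `gold_yes` and `system_yes` are hypothetical names for the human-judged and system-evaluated YES sets.

```python
def tracking_scores(gold_yes, system_yes, n_total):
    """gold_yes, system_yes: sets of document ids; n_total: corpus size."""
    tp = len(gold_yes & system_yes)          # judged YES and evaluated YES
    fp = len(system_yes - gold_yes)          # evaluated YES but judged NO
    rec = tp / len(gold_yes) if gold_yes else 0.0
    prec = tp / len(system_yes) if system_yes else 0.0
    n_neg = n_total - len(gold_yes)
    fa = fp / n_neg if n_neg else 0.0        # false-alarm rate 'F/A'
    f1 = 2 * rec * prec / (rec + prec) if rec + prec else 0.0
    return rec, prec, fa, f1
```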
| { |
| "text": "i Table 2 shows that more training data helps the performance, as the best result was when we used 9o ,., 'Overall, the curves also show that more trailfing helps tile performance, while there is no significant difference anaong :Yt = 2, 4 and 8. il", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 9, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Represent other training and test documents as term vectors", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We used 4 different sets as a test data. Each set con-I sists of 2, 4, 8 and 16 documents. For each set, we", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Key Paragraph Extraction", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "extracted 10% and 20% of the full-documents para-\"graph length (Jing et al., 1998) . Table 4 illustrates the result. In Table 4 , 'Num ~ denotes the number of documents in a set. 10 and 20\u00b0~ indicate the extraction ratio. 'Para' denotes the number of par~]graphs exr.racted by a humaa~ judge, and 'Correct' shows the accuracy ot\" the method.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 82, |
| "text": "(Jing et al., 1998)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 85, |
| "end": 92, |
| "text": "Table 4", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 120, |
| "end": 127, |
| "text": "Table 4", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
| { |
| "text": "The best result was 77.7% (the extraction ratio is 20% and the number of documents is 2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
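The extraction step above (selecting 10% or 20% of the paragraphs) can be sketched as follows. This is a hypothetical sketch, not the authors' algorithm: the scoring that combines topic and event words (here, a simple weighted count with an assumed 2:1 weighting) is my own illustration.

```python
def extract_key_paragraphs(paragraphs, topic_words, event_words, ratio=0.2):
    """paragraphs: list of token lists; returns indices of selected paragraphs.

    Scores each paragraph by its topic and event words (topic words weighted
    2x as an illustrative assumption), then keeps the top `ratio` fraction.
    """
    def score(p):
        return sum(2 if t in topic_words else 1
                   for t in p if t in topic_words or t in event_words)

    k = max(1, round(len(paragraphs) * ratio))
    ranked = sorted(range(len(paragraphs)),
                    key=lambda i: score(paragraphs[i]), reverse=True)
    return sorted(ranked[:k])

paras = [["a"], ["quake", "kobe"], ["x"], ["quake"], ["y"]]
print(extract_key_paragraphs(paras, {"quake"}, {"kobe"}, ratio=0.2))  # [1]
```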
| { |
| "text": "Wc now turn our attention to the main question: how was the contribution of making the distinction between a topic and an event for summarization task? Figure 7 illustrates the results of the methods which used (i) the extracted topic artd event words, i.e. our method, and (ii) only the extracted event\" words. In Figure 7 , '(10%): and '(20%)' denote the extracted paragraph ratio. 'Event' is the result when we used only the extracted event words. Figure 7 shows that our method consistently outperforms the method which used only the extra,.ted events. To summarize the evaluation:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 152, |
| "end": 160, |
| "text": "Figure 7", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 315, |
| "end": 323, |
| "text": "Figure 7", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 451, |
| "end": 459, |
| "text": "Figure 7", |
| "ref_id": "FIGREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
| { |
| "text": "][: Event extraction effectively employed when each document discusses different subject about the same topic. This shows that the method will be applicable to other genres of corpora which consist of different subjects.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
| { |
| "text": "2. The result of tracking task (79.0% average recall and 86.6% average precision) is comparable to the existing tracking techniques which tested on the TDT1 corpus (Allan and Carbonell, 1998) .", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 191, |
| "text": "(Allan and Carbonell, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Distinction between a topic and an event improved the results of key paragraph extraction, as our method consistently outperforms the method which used only the extracted event words (see Figure 7) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 191, |
| "end": 200, |
| "text": "Figure 7)", |
| "ref_id": "FIGREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "II I I", |
| "sec_num": null |
| }, |
| { |
| "text": "6 R e l a t e d W o r k", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "The majority of techniques for summarization fall within two broad categories: Those that rely on template instantiation and those that rely on passage extraction. Work in the former approach is the DARPAsponsored TIPSTER program and, in particular, the message understanding conferences hag provided fertile groined for such work, by placing the emphasis of docunmnt analysis to the identification and extraction of certain core entities and facts in a document, while work on template-driven, knowledge. based summarization to date is hardly domain or genre-independent (Boguraev and Kennedy. 1997) .", |
| "cite_spans": [ |
| { |
| "start": 572, |
| "end": 600, |
| "text": "(Boguraev and Kennedy. 1997)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "The alternative approach largely escapes this constraint, by viewing the task as one of identi~,ing certain passages(typically sentences) which, by some metric, are deemed to be the most representative, of the document's content. A variety of approaches exist for determining the salient sentences in the text: statistical techniques based oll word distribution (Kupiec et al., 1995) , (Zechner, 1996) , (Salton et al., 1991) , (Teufell and Moens, 1997) , symbolic techniques based on discourse structure (Marcu, 1997) and semantic relations between words (Barzil~v and Elhadad, 1997). All of their results demonstrate that passage extraction techniques are a useful first step in document summarization, although most of them have focused on a single document.", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 383, |
| "text": "(Kupiec et al., 1995)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 386, |
| "end": 401, |
| "text": "(Zechner, 1996)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 404, |
| "end": 425, |
| "text": "(Salton et al., 1991)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 428, |
| "end": 453, |
| "text": "(Teufell and Moens, 1997)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 505, |
| "end": 518, |
| "text": "(Marcu, 1997)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "Some researchers have started to apply a single-document summarization technique to multidocument. Stein et. al. proposed a method for summarizing multi-document using single-document summarizer (Stralkowsik et al., 1998) , (Stralkowski et al.. 1999) . Their method first summarizes each document of multi-document, then groups the summaries in clusters and finally, orders these summaries in a logical way . Their technique seems sensible. However, as she admits, (i) the order the information should not only depend on topic covered, (ii) background information that helps clari~\" related information should be placed first. More seriously, as Barzilay and Mani claim, summarization of multiple documents requires information about similarities and differences a c r o s s documents. Therefore it is difficult to identi~\" these information using a single-document summarizer technique (Mani and Bloedorn, 1997) , (Barzilay et al., 1999) .", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 221, |
| "text": "(Stralkowsik et al., 1998)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 224, |
| "end": 250, |
| "text": "(Stralkowski et al.. 1999)", |
| "ref_id": null |
| }, |
| { |
| "start": 887, |
| "end": 912, |
| "text": "(Mani and Bloedorn, 1997)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 915, |
| "end": 938, |
| "text": "(Barzilay et al., 1999)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "A method proposed by Mani et. al. deal with the problem, i.e. they tried to detect the similarities and differences in information c o n t e n t among documents (Mani and Bloedorn, 1997) . They used a spreading activation algorithm and graph matching in order to identify similarities and differences across documents. The output is presented as a set of paragraphs with similar and unique words highlighted. However, if the same information is men-Nun: \"tioned several times in different documents, much of the summary will be redundant.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 186, |
| "text": "(Mani and Bloedorn, 1997)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "Allan et. al. also address the problem aald proposed a method for event tracking using common words and surprising features by supplementing the corpus statistics (Allan and Papka, 1998) (Papka et al., 1999) . One of the purpose of this study is to make a distinction between an event aald an event class using surprising features. Here event class features are broad news areas such as politics, death, destruction and ~,'~fare. The idea is considered to be necessary to obtain higti accuracy, while Allan claims that the surprising words do not provide a broad enough coverage to capture all documents on the event.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 186, |
| "text": "(Allan and Papka, 1998)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 187, |
| "end": 207, |
| "text": "(Papka et al., 1999)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "A more recent approach dealing with this problem is Barzilav et. al's approach (Barzilay et al., 1999) . They used paraphrasing rules which are maaaually derived from the result of syntactic analysis to identify theme intersection and used language generation to reformulate them as a coherent, summary. While promising to obtain high accuracy: the result of summarization task has not been reported.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 102, |
| "text": "(Barzilay et al., 1999)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "Like Mani and Barzil~,'s techniques, our approach focuses on the problem that how to identi~\" differences and similarities across documents, rather than the problem that how to form the actual summar:,, (Sparck, 1993) , (McKeown and Radev, 1995) , (Radev and McKeown, 1998) . However, while Barzilav's approach used paraphrasing rules to eliminate redmadancy in a summary, we proposed domain dependency of words to address robustness of the technique.", |
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 217, |
| "text": "(Sparck, 1993)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 220, |
| "end": 245, |
| "text": "(McKeown and Radev, 1995)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 248, |
| "end": 273, |
| "text": "(Radev and McKeown, 1998)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "37", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we proposed a method for extracting key paragraph for summarization based on distinction between a topic and an event. The results showed that the average accuracy was 68.1~ when we used the TDT1 corpus. TIPSTER Text Summarization Evaluation (SUMMAC) proposed various methods for evaluating document summariza-tion and tasks (Mani et al., 1999) . Of these, participants submitted two summaries: a fixed-length summary limited to 10% of tile length of the source, and a summary which was not limited in length. Future work includes quantitative and qualitative evaluation. In addition, our method used single words rather thaaa phrases. These phrases, however, would be helpful to resolve ambiguity and reduce a lot of noise, i.e. yield much better accuracy. We plaal to apply our method to phrase-based topic and event extraction, then turn to focus on the problem that how to form the actual summary.. ", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 359, |
| "text": "(Mani et al., 1999)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the reviewers for their valuable comments. This work was supported ~' the Grant-in-aid for the Japan Society for the Promotion of Science (JSPS, No.11780258) and Tateisi Science and Technology Foundation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The tdt pilot study corpus documentation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "TDT.Study. Carpus, V1.3.doc", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Allan and J. Carbonell. 1997. The tdt pilot study corpus documentation. In TDT.Study. Carpus, V1.3.doc.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Topic detection and tracking pilot study: Final report", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of the DARPA Broadcast News Transcription and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Allan and J. Carbonell. 1998. Topic detection and tracking pilot study: Final report.. In Proc. of the DARPA Broadcast News Transcription and Understanding Workshop.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "On-line new event detection and tracking", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Papka", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of 21st Annual b~-ternational A CM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "37--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Allan and R. Papka. 1998. On-line new event de- tection and tracking. In Proc. of 21st Annual b~- ternational A CM SIGIR Conference on Research and Development in Information Retrieval, pages 37-45.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Using lexical chains for text summarization", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzila", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of ACL Workshop on b~telligent Scalable Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "10--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Barzila.v and M. Elhadad. 1997. Using lexical chains for text summarization. In Proc. of ACL Workshop on b~telligent Scalable Text Summa- rization, pages 10-17.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Information fusion in the context of multidocument summarization", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of 87th Annual Meet.ing of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "550--557", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Barzilay, K. R. McKeown, and M. Elhadad. 1999. Information fusion in the context of multi- document summarization. In Proc. of 87th An- nual Meet.ing of Association for Computational Linguistics, pages 550-557.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Saiience-based content characterization of text documents", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Boguraev", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Kennedy", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Ixi Proc. of A CL Workshop on b,telligent Scalable Tezt Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "2--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Boguraev mad C. Kennedy. 1997. Saiience-based content characterization of text documents. Ixi Proc. of A CL Workshop on b,telligent Scalable Tezt Summarization: p~ges 2-9.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A simple rule-based part of speech tagger", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proc. of the 3rd Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "152--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Brill. 1992. A simple rule-based part of speech tagger. In Proc. of the 3rd Conference on Applied Natural Language Processing, pages 152-155.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "CMU report on TDT-2: Segmentation, detection and tracking", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. o/the DARPA Broadcast News Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Carbonell, Y. Yang, mad J. Lafferty. 1999. CMU report on TDT-2: Segmentation, detection and tracking. In Proc. o/the DARPA Broadcast News Workshop.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An automatic extraction of key paragraphs based oil context dependency", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Fukumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Fukumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of the 5th Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "291--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Fukumoto, Y. Suzuki, and J. Fukumoto. 1997. An automatic extraction of key paragraphs based oil context dependency. In Proc. of the 5th Con- ference on Applied Natural Language Processing, pages 291-298.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Summarization evaluation methods: Experiments and analysis, intelligent text sum-/ marization", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzil~", |
| "suffix": "" |
| }, |
| { |
| "first": "'", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "E1-Hadad", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. o/1998 American Association/or Artificial h~telligence Sprin 9 Symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "51--59", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Jing, R. Barzil~', K. R. McKeown, and M. E1- hadad. 1998. Summarization evaluation methods: Experiments and analysis, intelligent text sum-/ marization. In Proc. o/1998 American Associa- tion/or Artificial h~telligence Sprin 9 Symposium, pages 51-59.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A trainable document summarizer", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Kupiec ; Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. of the 18th Annual International ACM SIGIR Conference on Research and Development in h~formation Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "68--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Kupiec, 3. Pedersen, and F..Chen. 1995. A trainable document summarizer. In Proc. of the 18th Annual International ACM SIGIR Confer- ence on Research and Development in h~formation Retrieval, pages 68-73.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The automatic creation of literature abstracts", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "P" |
| ], |
| "last": "Lutm", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "IBM journal", |
| "volume": "2", |
| "issue": "1", |
| "pages": "159--165", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. P. Lutm. 1958. The automatic creation of litera- ture abstracts. IBM journal, 2(1):159-165.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Multi-document summarization by graph search and matching", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Mani", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Bloedorn", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. o/the 15th National Conference on Artificial h~telligence", |
| "volume": "", |
| "issue": "", |
| "pages": "622--628", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Mani and E. Bloedorn. 1997. Multi-document summarization by graph search and matching. In Proc. o/the 15th National Conference on Artifi- cial h~telligence , pages 622-628.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The TIPSTER SUMMAC text summarization evaluation", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Mani", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Firmin", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. o/Ninth Conference o/the European Chapter", |
| "volume": "", |
| "issue": "", |
| "pages": "77--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Mani, T. Firmin, and B. Sundheim. 1999. The TIPSTER SUMMAC text summarization evalu- ation. In Proc. o/Ninth Conference o/the Eu- ropean Chapter o/the Association/or Computa- tional Linguistics, pages 77-85.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "From discourse structures to text summaries", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of A CI, Workshop on Intelligent Scalable Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "82--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Marcu. 1997. From discourse structures to text summaries. In Proc. of A CI, Workshop on Intel- ligent Scalable Text Summarization, pages 82-88.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Generating summaries of multiple news articles", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "R" |
| ], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. of the 18th Annual h~ternational A CM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "74--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. R. McKeown and D. R. Radev. 1995. Generating summaries of multiple news articles. In Proc. of the 18th Annual h~ternational A CM SIGIR Con- ference on Research and Development in Informa- tion Retrieval, pages 74-82.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "UMASS approaches to detection and tracking at TDT2", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Papka", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Lavrenko", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of the DARPA Broadcast News Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Papka, J. Allan, and V. Lavrenko. 1999. UMASS approaches to detection and tracking at TDT2. In Proc. of the DARPA Broadcast News Workshop.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Generating natural language summaries from multipie on-line sources", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "R" |
| ], |
| "last": "Radev", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "3", |
| "pages": "469--500", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. R. Radev and K. R. McKeown. 1998. Gen- erating natural language summaries from multi- pie on-line sources. Computational Linguistics. 24(3):469-500.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic aaaalysis, theme generation, and summarization of machine-readable texts", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckle", |
| "suffix": "" |
| }, |
| { |
| "first": ");", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Singhal", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Science", |
| "volume": "164", |
| "issue": "", |
| "pages": "1421--1426", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Salton, J. Allan, C. Buckle); and A. Singhal. 1991. Automatic aaaalysis, theme generation, and summarization of machine-readable texts. Sci- ence, 164:1421-1426.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "What might be in a summary?", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "J" |
| ], |
| "last": "Sparck", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proc. of h~forraation Retrieval98", |
| "volume": "", |
| "issue": "", |
| "pages": "9--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Sparck Jones. 1993. What might be in a summary? In Proc. of Information Retrieval 93, pages 9-26.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Summarizing multiple documents using text extraction and interactive clustering", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "C" |
| ], |
| "last": "Stein", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Strzalkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "B" |
| ], |
| "last": "Wise", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of the Pacific Association for Computational Linguistics 1999", |
| "volume": "", |
| "issue": "", |
| "pages": "200--208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. C. Stein, T. Strzalkowski, and G. B. Wise. 1999. Summarizing multiple documents using text extraction and interactive clustering. In Proc. of the Pacific Association for Computational Linguistics 1999, pages 200-208.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A text-extraction based summarizer", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Strzalkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "C" |
| ], |
| "last": "Stein", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "B" |
| ], |
| "last": "Wise", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of Tipster Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Strzalkowski, G. C. Stein, and G. B. Wise. 1998. A text-extraction based summarizer. In Proc. of Tipster Workshop.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Getracker: A robust, lightweight topic tracking system", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "C" |
| ], |
| "last": "Stein", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "B" |
| ], |
| "last": "Wise", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of the DARPA Broadcast News Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. C. Stein and G. B. Wise. 1999. Getracker: A robust, lightweight topic tracking system. In Proc. of the DARPA Broadcast News Workshop.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Sentence extraction as a classification task", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Teufel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of ACL Workshop on Intelligent Scalable Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "58--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Teufel and M. Moens. 1997. Sentence extraction as a classification task. In Proc. of ACL Workshop on Intelligent Scalable Text Summarization, pages 58-65.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Fast generation of abstracts from general domain text corpora by extracting relevant sentences", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Zechner", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. of the 16th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "986--989", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Zechner. 1996. Fast generation of abstracts from general domain text corpora by extracting relevant sentences. In Proc. of the 16th International Conference on Computational Linguistics, pages 986-989.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "The document titled 'Two Americans known dead in Japan quake'", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "1. At least two people died and dozens were injured when a powerful earthquake rolled through central Japan Tuesday morning, collapsing buildings and setting off fires in the cities of Kobe and Osaka. 2. The Japan Meteorological Agency said the earthquake, which measured 7.2 on the open-ended Richter scale, rumbled across Honshu Island from the Pacific Ocean to the Japan Sea.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "The document titled 'Quake collapses buildings in central Japan'", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "The document titled 'Kobe quake leaves questions about medical system'", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "The structure of broadcast news documents (event extraction)", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "text": "The structure of broadcast news documents (topic extraction)", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "text": "Create a single document Stp and represent it as a term vector. For the results of topic extraction, all the documents that belong to the same topic are bundled into a single document Stp and represent it by a term vector as follows:", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "text": "Illustrates the DET (Detection Evaluation Tradeoff) curves for a sample event (event type is 'Comet into Jupiter') runs at several values of Nt.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF9": { |
| "uris": null, |
| "text": "Create a single document Stp and represent it as a term vector. For the results of topic extraction, all the documents that belong to the same topic are bundled into a single document Stp and represent it by a term vector as follows (each element denotes the term frequency of word w).", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF10": { |
| "uris": null, |
| "text": "The extracted topic and event words in a sample document. The topic is 'Kobe Japan quake'. DET curve for a sample tracking runs.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF11": { |
| "uris": null, |
| "text": "Accuracy with each method", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "We call it a general word).", |
| "type_str": "table", |
| "content": "<table><tr><td>A particular document consists of several paragraphs. We call it Paragraph level. Let words within a document be an event, a topic, or among others.</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "). We used nouns in the documents. http://morph.ldc.upenn.edu/TDT", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "text": "The results of tracking task", |
| "type_str": "table", |
| "content": "<table><tr><td>Nt</td><td>%Miss</td><td>%F/A</td><td colspan=\"3\">F1 %Rec %Prec</td></tr><tr><td>1</td><td>32.5</td><td>0.16</td><td>0.68</td><td>67.5</td><td>70.0</td></tr><tr><td>2</td><td>23.7</td><td colspan=\"2\">0.06 0.80</td><td>76.3</td><td>87.8</td></tr><tr><td>4</td><td>23.1</td><td colspan=\"2\">0.05 0.81</td><td>76.9</td><td>90.1</td></tr><tr><td>8</td><td>12.0</td><td colspan=\"2\">0.08 0.87</td><td>88.0</td><td>91.4</td></tr><tr><td>16</td><td>13.7</td><td colspan=\"2\">0.06 0.89</td><td>86.3</td><td>93.6</td></tr><tr><td>Avg</td><td>21.0</td><td colspan=\"2\">0.08 0.76</td><td>79.0</td><td>86.6</td></tr><tr><td colspan=\"6\">In Table 2, 'Nt' denotes the number of positive training documents where Nt takes on values 1, 2, 4, 8</td></tr><tr><td colspan=\"4\">3 http://www.nist.gov/speech/tdt98.htm</td><td/><td/></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"5\">Topic and event words in 'Kobe Japan quake'</td></tr><tr><td>Topic word</td><td>Devplt</td><td>Devdzt</td><td>DispPt</td><td>DispDt</td></tr><tr><td>earthquake</td><td>53.5</td><td>50.0</td><td>12.3</td><td>10.3</td></tr><tr><td>Japan</td><td>69.8</td><td>50.0</td><td>13.3</td><td>9.8</td></tr><tr><td>Kobe</td><td>56.6</td><td>50.0</td><td>8.6</td><td>6.4</td></tr><tr><td>fire</td><td>57.0</td><td>46.4</td><td>2.3</td><td>1.5</td></tr><tr><td>Event word</td><td>Devplt</td><td>Devdzt</td><td>DispPt</td><td>DispDt</td></tr><tr><td>emergency</td><td>50.0</td><td>74.7</td><td>0.9</td><td>1.5</td></tr><tr><td>area</td><td>40.6</td><td>50.0</td><td>0.6</td><td>1.0</td></tr><tr><td>worker</td><td>50.0</td><td>66.1</td><td>0.4</td><td>1.0</td></tr><tr><td>rescue</td><td>43.3</td><td>50.0</td><td>2.3</td><td>3.4</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF9": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"4\">The results of tracking task</td></tr><tr><td colspan=\"3\">Nt %Miss %F/A</td><td colspan=\"3\">F1 %Rec %Prec</td></tr><tr><td>1</td><td>32.5</td><td colspan=\"2\">0.16 0.68</td><td>67.5</td><td>70.0</td></tr><tr><td>2</td><td>23.7</td><td colspan=\"2\">0.06 0.80</td><td>76.3</td><td>87.8</td></tr><tr><td>4</td><td>23.1</td><td colspan=\"2\">0.05 0.81</td><td>76.9</td><td>90.1</td></tr><tr><td>8</td><td>12.0</td><td colspan=\"2\">0.08 0.87</td><td>88.0</td><td>91.4</td></tr><tr><td>16</td><td>13.7</td><td colspan=\"2\">0.06 0.89</td><td>86.3</td><td>93.6</td></tr><tr><td>Avg</td><td>21.0</td><td colspan=\"2\">0.08 0.76</td><td>79.0</td><td>86.6</td></tr><tr><td colspan=\"6\">In Table 2, 'Nt' denotes the number of positive training documents where Nt takes on values 1, 2, 4, 8</td></tr><tr><td colspan=\"4\">3 http://www.nist.gov/speech/tdt98.htm</td><td/></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF10": { |
| "text": "The results of event words extraction", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF11": { |
| "text": "The results of Key Paragraph Extraction", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td colspan=\"2\">Accuracy</td><td/><td/></tr><tr><td/><td/><td>%10</td><td/><td>%20</td><td/><td>Total</td></tr><tr><td/><td colspan=\"2\">Para Correct(%)</td><td>Para</td><td>Correct(%)</td><td>Para</td><td>Correct(%)</td></tr><tr><td>2</td><td>58</td><td>44(75.8)</td><td>117</td><td>91(77.7)</td><td>175</td><td>135(77.1)</td></tr><tr><td>4</td><td>107</td><td>80(74.7)</td><td>214</td><td>160(74.7)</td><td>321</td><td>240(74.7)</td></tr><tr><td>8</td><td>202</td><td>138(68.3)</td><td>404</td><td>278(68.8)</td><td>606</td><td>416(68.6)</td></tr><tr><td>16</td><td>281</td><td>175(62.3)</td><td>563</td><td>361(64.1)</td><td>844</td><td>536(63.5)</td></tr><tr><td>Total</td><td>648</td><td colspan=\"2\">437(67.4) 1,298</td><td colspan=\"2\">890(68.5) 1,946</td><td>1,327(68.1)</td></tr></table>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |