{
"paper_id": "H05-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:24.137816Z"
},
"title": "Improving LSA-based Summarization with Anaphora Resolution",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of West Bohemia",
"location": {
"addrLine": "Univerzitni 22",
"postCode": "30614",
"settlement": "Pilsen",
"country": "Czech Republic"
}
},
"email": ""
},
{
"first": "Mijail",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Essex",
"location": {
"addrLine": "Wivenhoe Park",
"postCode": "CO4 3SQ",
"settlement": "Colchester",
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Essex",
"location": {
"addrLine": "Wivenhoe Park",
"postCode": "CO4 3SQ",
"settlement": "Colchester",
"country": "United Kingdom"
}
},
"email": "poesio@essex.ac.uk"
},
{
"first": "Olivia",
"middle": [],
"last": "Sanchez-Graillet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Essex",
"location": {
"addrLine": "Wivenhoe Park",
"postCode": "CO4 3SQ",
"settlement": "Colchester",
"country": "United Kingdom"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose an approach to summarization exploiting both lexical information and the output of an automatic anaphoric resolver, and using Singular Value Decomposition (SVD) to identify the main terms. We demonstrate that adding anaphoric information results in significant performance improvements over a previously developed system, in which only lexical terms are used as the input to SVD. However, we also show that how anaphoric information is used is crucial: whereas using this information to add new terms does result in improved performance, simple substitution makes the performance worse.",
"pdf_parse": {
"paper_id": "H05-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose an approach to summarization exploiting both lexical information and the output of an automatic anaphoric resolver, and using Singular Value Decomposition (SVD) to identify the main terms. We demonstrate that adding anaphoric information results in significant performance improvements over a previously developed system, in which only lexical terms are used as the input to SVD. However, we also show that how anaphoric information is used is crucial: whereas using this information to add new terms does result in improved performance, simple substitution makes the performance worse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many approaches to summarization can be very broadly characterized as TERM-BASED: they attempt to identify the main 'topics,' which generally are TERMS, and then to extract from the document the most important information about these terms (Hovy and Lin, 1997). These approaches can be divided, again very broadly, into 'lexical' approaches, among which we would include LSA-based approaches, and 'coreference-based' approaches. Lexical approaches to term-based summarization use lexical relations to identify central terms (Barzilay and Elhadad, 1997; Gong and Liu, 2002); coreference- (or anaphora-) based approaches (Baldwin and Morton, 1998; Boguraev and Kennedy, 1999; Azzam et al., 1999; Bergler et al., 2003; Stuckardt, 2003) identify these terms by running a coreference or anaphora resolver over the text. 1 We are not aware, however, of any attempt to use both lexical and anaphoric information to identify the main terms. In addition, to our knowledge no authors have convincingly demonstrated that feeding anaphoric information to a summarizer significantly improves its performance under a standard evaluation procedure (a reference corpus and baseline, and widely accepted evaluation measures).",
"cite_spans": [
{
"start": 240,
"end": 260,
"text": "(Hovy and Lin, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 522,
"end": 550,
"text": "(Barzilay and Elhadad, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 551,
"end": 570,
"text": "Gong and Liu, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 617,
"end": 643,
"text": "(Baldwin and Morton, 1998;",
"ref_id": "BIBREF1"
},
{
"start": 644,
"end": 671,
"text": "Boguraev and Kennedy, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 672,
"end": 691,
"text": "Azzam et al., 1999;",
"ref_id": "BIBREF0"
},
{
"start": 692,
"end": 713,
"text": "Bergler et al., 2003;",
"ref_id": "BIBREF3"
},
{
"start": 714,
"end": 730,
"text": "Stuckardt, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 814,
"end": 815,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we compare two sentence extraction-based summarizers. Both use Latent Semantic Analysis (LSA) (Landauer, 1997) to identify the main terms of a text for summarization; however, the first system (Steinberger and Jezek, 2004), discussed in Section 2, only uses lexical information to identify the main topics, whereas the second system exploits both lexical and anaphoric information. This second system uses an existing anaphora resolution system, GUITAR (Poesio and Kabadjov, 2004), to resolve anaphoric expressions; but, crucially, two different ways of using this information for summarization were tested (Section 3). Both summarizers were tested over the CAST corpus, as discussed in Section 4, and significant improvements were observed over both the baseline CAST system and our previous LSA-based summarizer.",
"cite_spans": [
{
"start": 107,
"end": 123,
"text": "(Landauer, 1997)",
"ref_id": "BIBREF13"
},
{
"start": 206,
"end": 235,
"text": "(Steinberger and Jezek, 2004)",
"ref_id": "BIBREF22"
},
{
"start": 501,
"end": 528,
"text": "(Poesio and Kabadjov, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "LSA (Landauer, 1997) is a technique for extracting the 'hidden' dimensions of the semantic representation of terms, sentences, or documents, on the basis of their contextual use. It is a very powerful technique already used for NLP applications such as information retrieval (Berry et al., 1995) and text segmentation (Choi et al., 2001) and, more recently, multi- and single-document summarization. The approach to using LSA in text summarization we followed in this paper was proposed in (Gong and Liu, 2002). Gong and Liu propose to start by creating a term-by-sentences matrix A = [A_1, A_2, ..., A_n], where each column vector A_i represents the weighted term-frequency vector of sentence i in the document under consideration. If there are a total of m terms and n sentences in the document, then we will have an m \u00d7 n matrix A for the document. The next step is to apply Singular Value Decomposition (SVD) to matrix A. Given an m \u00d7 n matrix A, the SVD of A is defined as:",
"cite_spans": [
{
"start": 4,
"end": 20,
"text": "(Landauer, 1997)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 295,
"text": "(Berry et al., 1995)",
"ref_id": "BIBREF4"
},
{
"start": 318,
"end": 336,
"text": "(Choi et al., 2001",
"ref_id": "BIBREF7"
},
{
"start": 490,
"end": 510,
"text": "(Gong and Liu, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "(1) A = U \u03a3 V^T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "where U = [u_ij] is an m \u00d7 n column-orthonormal matrix whose columns are called left singular vectors, \u03a3 = diag(\u03c3_1, \u03c3_2, ..., \u03c3_n) is an n \u00d7 n diagonal matrix whose diagonal elements are non-negative singular values sorted in descending order, and V = [v_ij] is an n \u00d7 n orthonormal matrix whose columns are called right singular vectors. From a mathematical point of view, applying SVD to a matrix derives a mapping between the m-dimensional space spanned by the weighted term-frequency vectors and the r-dimensional singular vector space. From an NLP perspective, what the SVD does is derive the latent semantic structure of the document represented by matrix A: a breakdown of the original document into r linearly independent base vectors ('topics'). Each term and sentence from the document is jointly indexed by these 'topics'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "A unique feature of SVD is that it is capable of capturing and modelling interrelationships among terms, so that it can semantically cluster terms and sentences. Furthermore, as demonstrated in (Berry et al., 1995), if a word combination pattern is salient and recurring in a document, this pattern will be captured and represented by one of the singular vectors. The magnitude of the corresponding singular value indicates the degree of importance of this pattern within the document. Any sentence containing this word combination pattern will be projected along this singular vector, and the sentence that best represents this pattern will have the largest index value with this vector. As each particular word combination pattern describes a certain topic in the document, each singular vector can be viewed as representing a salient topic of the document, and the magnitude of its corresponding singular value represents the degree of importance of that salient topic.",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Berry et al., 1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "The summarization method proposed by Gong and Liu (2002) should now be easy to understand. The matrix V^T describes the degree of importance of each 'implicit topic' in each sentence: the summarization process simply chooses the most informative sentence for each topic. In other words, the k-th sentence chosen is the one with the largest index value in the k-th right singular vector in matrix",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "Gong and Liu (2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "V^T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "The summarization method proposed by Gong and Liu has some disadvantages as well, the main one being that it is necessary to use the same number of dimensions as the number of sentences we want to choose for the summary. However, the higher the number of dimensions of the reduced space, the less significant the topics we take into the summary become. In order to remedy this problem, we (Steinberger and Jezek, 2004) proposed the following modifications to Gong and Liu's summarization method. After computing the SVD of a term-by-sentences matrix, we compute the length of each sentence vector in matrix V. This favours the index values in matrix V that correspond to the highest singular values (the most significant topics). Formally:",
"cite_spans": [
{
"start": 377,
"end": 406,
"text": "(Steinberger and Jezek, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "(2) s_k = sqrt( \u2211_{i=1}^{r} v_{k,i}^2 \u2022 \u03c3_i^2 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "where s_k is the length of the vector of the k-th sentence in the modified latent vector space, which is also its significance score for summarization. The level of dimensionality reduction (r) is essentially learned from the data. Finally, we put into the summary the sentences with the highest values in vector s. We showed in previous work (Steinberger and Jezek, 2004) that this modification results in a significant improvement over Gong and Liu's method.",
"cite_spans": [
{
"start": 333,
"end": 361,
"text": "(Steinberger and Jezek, 2004",
"ref_id": "BIBREF22"
},
{
"start": 429,
"end": 451,
"text": "Gong and Liu's method.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An LSA-based Summarizer Using Lexical Information Only",
"sec_num": "2"
},
{
"text": "Words are the most basic type of 'term' that can be used to characterize the content of a document. However, being able to identify the most important objects mentioned in the document would clearly lead to an improved analysis of what is important in a text, as shown by the following news article cited by Boguraev and Kennedy (1999):",
"cite_spans": [
{
"start": 308,
"end": 335,
"text": "Boguraev and Kennedy (1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The case for anaphora resolution",
"sec_num": "3.1"
},
{
"text": "(3) PRIEST IS CHARGED WITH POPE ATTACK As Boguraev and Kennedy point out, the title of the article is an excellent summary of the content: an entity (the priest) did something to another entity (the pope). Intuitively, understanding that Fernandez and the pope are the central characters is crucial to providing a good summary of texts like these. 2 Among the clues that help us to identify such 'main characters', the fact that an entity is repeatedly mentioned is clearly important. Purely lexical methods, including the LSA-based methods discussed in the previous section, can only capture part of the information about which entities are frequently repeated in the text. As example (3) shows, stylistic conventions forbid verbatim repetition, hence the six mentions of Fernandez in the text above contain only one lexical repetition, 'Fernandez'. The main problem is pronouns, which tend to share the least lexical similarity with the form used to express the antecedent (and are in any case usually removed by stopword lists, and therefore do not get included in the SVD matrix). The form of definite descriptions (the Spaniard) doesn't always overlap with that of their antecedent either, especially when the antecedent was expressed with a proper name. The form of mention that most often overlaps, at least to a degree, with previous mentions is the proper noun, and even then at least some way of dealing with acronyms is necessary (cf. European Union / E.U.). The motivation for anaphora resolution is that it should tell us which entities are repeatedly mentioned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The case for anaphora resolution",
"sec_num": "3.1"
},
{
"text": "In this work, we tested a mixed approach to integrate anaphoric and word information: using the output of the anaphoric resolver GUITAR to modify the SVD matrix used to determine the sentences to extract. In the rest of this section we first briefly introduce GUITAR, then discuss the two methods we tested to use its output to help summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The case for anaphora resolution",
"sec_num": "3.1"
},
{
"text": "The system we used in these experiments, GUITAR (Poesio and Kabadjov, 2004), is an anaphora resolution system designed to be high-precision, modular, and usable as an off-the-shelf component of an NL processing pipeline. The current version of the system includes an implementation of the MARS pronoun resolution algorithm (Mitkov, 1998) and a partial implementation of the algorithm for resolving definite descriptions proposed by Vieira and Poesio (2000). The current version of GUITAR does not include methods for resolving proper nouns.",
"cite_spans": [
{
"start": 48,
"end": 75,
"text": "(Poesio and Kabadjov, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 323,
"end": 336,
"text": "(Mitkov, 1998",
"ref_id": "BIBREF14"
},
{
"start": 433,
"end": 457,
"text": "Vieira and Poesio (2000)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GUITAR: A General-Purpose Anaphoric Resolver",
"sec_num": "3.2"
},
{
"text": "Mitkov (1998) developed a robust approach to pronoun resolution which only requires the input text to be part-of-speech tagged and noun phrases to be identified. Mitkov's algorithm operates on the basis of antecedent-tracking preferences (referred to hereafter as \"antecedent indicators\"). The approach works as follows: the system identifies the noun phrases which precede the anaphor within a distance of 2 sentences, checks them for gender and number agreement with the anaphor, and then applies genre-specific antecedent indicators to the remaining candidates (Mitkov, 1998). The noun phrase with the highest aggregate score is proposed as the antecedent.",
"cite_spans": [
{
"start": 559,
"end": 573,
"text": "(Mitkov, 1998)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun Resolution",
"sec_num": "3.2.1"
},
{
"text": "The Vieira / Poesio algorithm (Vieira and Poesio, 2000) attempts to classify each definite description as either a direct anaphora, a discourse-new description, or a bridging description. The first class includes definite descriptions whose head is identical to that of their antecedent, as in a house . . . the house. Discourse-new descriptions are definite descriptions that refer to objects not already mentioned in the text and not related to any such object. Bridging descriptions are all definite descriptions whose resolution depends on knowledge of relations between objects, such as definite descriptions that refer to an object related to an entity already introduced in the discourse by a relation other than identity, as in the flat . . . the living room. The Vieira / Poesio algorithm also attempts to identify the antecedents of anaphoric descriptions and the anchors of bridging ones. The current version of GUITAR incorporates an algorithm for resolving direct anaphora derived quite directly from Vieira / Poesio, as well as a statistical version of the methods for detecting discourse-new descriptions.",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "(Vieira and Poesio, 2000)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definite Description Resolution",
"sec_num": "3.2.2"
},
{
"text": "SVD can be used to identify the 'implicit topics' or main terms of a document not only on the basis of words, but also on the basis of coreference chains, or a mixture of both. We tested two ways of combining these two types of information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVD over Lexical and Anaphoric Terms",
"sec_num": "3.3"
},
{
"text": "The simplest way of integrating anaphoric information with the methods used in our earlier work is to use anaphora resolution simply as a preprocessing stage of the SVD input matrix creation. First, the anaphoric resolver identifies all anaphoric relations and builds anaphoric chains. Then a second document is produced, in which all anaphoric nominal expressions are replaced by the first element of their anaphoric chain. For example, suppose we have the text in (4). (4) S1: Australia's new conservative government on Wednesday began selling its tough deficit-slashing budget, which sparked violent protests by Aborigines, unions, students and welfare groups even before it was announced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Substitution Method",
"sec_num": "3.3.1"
},
{
"text": "Costello.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2: Two days of anti-budget street protests preceded spending cuts officially unveiled by Treasurer Peter",
"sec_num": null
},
{
"text": "S3: \"If we don't do it now, Australia is going to be in deficit and debt into the next century.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2: Two days of anti-budget street protests preceded spending cuts officially unveiled by Treasurer Peter",
"sec_num": null
},
{
"text": "As the protesters had feared, Costello revealed a cut to the government's Aboriginal welfare commission among the hundreds of measures implemented to claw back the deficit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S4:",
"sec_num": null
},
{
"text": "An ideal resolver would find 8 anaphoric chains. By replacing each element of the 8 chains above in the text in (4) with the first element of the chain, we get the text in (5). (5) S1: Australia's new conservative government on Wednesday began selling Australia's tough deficit-slashing budget, which sparked violent protests by Aborigines, unions, students and welfare groups even before Australia's tough deficit-slashing budget was announced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S4:",
"sec_num": null
},
{
"text": "Two days of violent protests by Aborigines, unions, students and welfare groups preceded spending cuts officially unveiled by Treasurer Peter Costello. S3: \"If Australia doesn't do spending cuts now, Australia is going to be in deficit and debt into the next century.\" S4: As Aborigines, unions, students and welfare groups had feared, Treasurer Peter Costello revealed a cut to Australia's new conservative government's Aboriginal welfare commission among the spending cuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2:",
"sec_num": null
},
{
"text": "This text is then used to create the SVD input matrix, as done in the first system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2:",
"sec_num": null
},
{
"text": "An alternative approach is to use SVD to identify 'topics' on the basis of two types of 'terms': terms in the lexical sense (i.e., words) and terms in the sense of objects, which can be represented by anaphoric chains. In other words, our representation of sentences would specify not only whether they contain a certain word, but also whether they contain a mention of a discourse entity (see Figure 1). This matrix would then be used as input to SVD. The chain 'terms' tie together sentences that contain the same anaphoric chain. If the terms are lexically the same (direct anaphora, like deficit and the deficit), the basic summarizer already works sufficiently well. However, Gong and Liu showed that the best weighting scheme is boolean (i.e., all terms have the same weight); our own previous results confirmed this. The advantage of the addition method is the opportunity to give higher weights to anaphors.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 393,
"text": "Figure 1.",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Addition Method",
"sec_num": "3.3.2"
},
{
"text": "To evaluate our system, we used the corpus of manually produced summaries created by the CAST project. 3 The CAST corpus contains news articles taken from the Reuters Corpus and a few popular science texts from the British National Corpus. It contains information about the importance of the sentences: sentences are marked as essential or important. The corpus also contains annotations for linked sentences, which are not significant enough to be marked as important/essential, but which have to be considered because they contain information essential for the understanding of the content of other sentences marked as essential/important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CAST Corpus",
"sec_num": "4.1"
},
{
"text": "Four annotators were used for the annotation, three graduate students and one postgraduate. Three of the annotators were native English speakers, and the fourth had advanced knowledge of English. Unfortunately, not all of the documents were annotated by all of the annotators. To maximize the reliability of the summaries used for evaluation, we chose the documents annotated by the greatest number of the annotators; in total, our evaluation corpus contained 37 documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CAST Corpus",
"sec_num": "4.1"
},
{
"text": "To acquire manual summaries at specified lengths and to obtain the sentence scores (for relative utility evaluation), we assigned a score of 3 to the sentences marked as essential, a score of 2 to important sentences, and a score of 1 to linked sentences. The sentences with the highest scores are then selected for the ideal summary (at the specified length).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CAST Corpus",
"sec_num": "4.1"
},
{
"text": "Evaluating summarization is a notoriously hard problem, for which standard measures like Precision and Recall are not very appropriate. The main problem with P&R is that human judges often disagree about which are the top n% most important sentences in a document. Using P&R creates the possibility that two equally good extracts are judged very differently. Suppose that a manual summary contains sentences [1 2] from a document. Suppose also that two systems, A and B, produce summaries consisting of sentences [1 2] and [1 3], respectively. Using P&R, system A will be ranked much higher than system B. It is quite possible that sentences 2 and 3 are equally important, in which case the two systems should get the same score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "To address the problem with precision and recall we used a combination of evaluation measures. The first of these, relative utility (RU) (Radev et al., 2000), allows model summaries to consist of sentences with variable ranking. With RU, the model summary represents all sentences of the input document with confidence values for their inclusion in the summary. For example, a document with five sentences [1 2 3 4 5] is represented as [1/5 2/4 3/4 4/1 5/2]. (Table 2: Evaluation of the manual annotation improvement; summarization ratio: 30%.) The second number in each pair indicates the degree to which the given sentence should be part of the summary according to a human judge. This number is called the utility of the sentence. Utility depends on the input document, the summary length, and the judge. In the example, the system that selects sentences [1 2] will not get a higher score than a system that chooses sentences [1 3], given that both summaries [1 2] and [1 3] carry the same number of utility points (5+4). Given that no other combination of two sentences carries a higher utility, both systems [1 2] and [1 3] produce optimal extracts. To compute relative utility, a number of judges (N \u2265 1) are asked to assign utility scores to all n sentences in a document. The top e sentences according to utility score 4 are then called a sentence extract of size e. We can then define the following system performance metric:",
"cite_spans": [
{
"start": 137,
"end": 157,
"text": "(Radev et al., 2000)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 448,
"end": 455,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "(6) RU = (\u2211_{j=1}^{n} \u03b4_j \u2211_{i=1}^{N} u_{ij}) / (\u2211_{j=1}^{n} \u01eb_j \u2211_{i=1}^{N} u_{ij}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "where u_{ij} is the utility score of sentence j from annotator i, \u01eb_j is 1 for the top e sentences according to the sum of utility scores from all judges, and \u03b4_j is equal to 1 for the top e sentences extracted by the system. For details see (Radev et al., 2000). The second measure we used is Cosine Similarity, computed according to the standard formula: (Footnote 4: In the case of ties, some arbitrary but consistent mechanism is used to decide which sentences should be included in the summary.)",
"cite_spans": [
{
"start": 237,
"end": 257,
"text": "(Radev et al., 2000)",
"ref_id": null
},
{
"start": 344,
"end": 345,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "(7) cos(X, Y) = (\u2211_i x_i \u2022 y_i) / (sqrt(\u2211_i x_i^2) \u2022 sqrt(\u2211_i y_i^2)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "where X and Y are representations of a system summary and its reference summary based on the vector space model. The third measure is Main Topic Similarity. This is a content-based evaluation method based on measuring the cosine of the angle between the first left singular vectors of the SVDs of a system summary and its reference summary. (For details see (Steinberger and Jezek, 2004).) Finally, we measured ROUGE scores, with the same settings as in the Document Understanding Conference (DUC) 2004.",
"cite_spans": [
{
"start": 351,
"end": 380,
"text": "(Steinberger and Jezek, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "We annotated all the anaphoric relations in the 37 documents in our evaluation corpus by hand, using the annotation tool MMAX (Mueller and Strube, 2003). 5 Apart from measuring the performance of GUITAR over the corpus, this allowed us to establish the upper bound on the performance improvements that could be obtained by adding an anaphoric resolver to our summarizer. We tested both methods of adding the anaphoric knowledge to the summarizer discussed above. Results for the 15% and 30% ratios 6 are presented in Tables 1 and 2. The baseline is our own previously developed LSA-based summarizer without anaphoric knowledge. The result is that the substitution method did not lead to a significant improvement, but the addition method did: (Table 4: Evaluation of the GUITAR improvement; summarization ratio: 30%.)",
"cite_spans": [
{
"start": 125,
"end": 151,
"text": "(Mueller and Strube, 2003)",
"ref_id": null
},
{
"start": 154,
"end": 155,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 741,
"end": 748,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "How Much May Anaphora Resolution Help? An Upper Bound",
"sec_num": "4.3"
},
{
"text": "addition could lead to an improvement in Relative Utility score from .595 to .662 for the 15% ratio, and from .645 to .688 for the 30% ratio. Both of these improvements were significant by t-test at 95% confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How Much May Anaphora Resolution Help? An Upper Bound",
"sec_num": "4.3"
},
{
"text": "To use GUITAR, we first parsed the texts using Charniak's parser (Charniak, 2000). The output of the parser was then converted into the MAS-XML format expected by GUITAR by one of the preprocessors that come with the system. (This step includes heuristic methods for guessing agreement features.) Finally, GUITAR was run to add anaphoric information to the files. The resulting files were then processed by the summarizer. GUITAR achieved a precision of 56% and a recall of 51% over the 37 documents. For definite description resolution, we found a precision of 69% and a recall of 53%; for possessive pronoun resolution, the precision was 53% and the recall was 53%; for personal pronouns, the precision was 44% and the recall was 46%.",
"cite_spans": [
{
"start": 65,
"end": 81,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results with GUITAR",
"sec_num": "4.4"
},
{
"text": "The results with the summarizer are presented in Tables 3 and 4 (relative utility, f-score, cosine, and main topic). The contribution of the different anaphora resolution components is addressed in . All versions of our summarizer (the baseline version without anaphora resolution and those using substitution and addition) outperformed the CAST summarizer, but we have to emphasize that CAST did not aim at producing a high-performance generic summarizer, only a system that could be easily used for didactic purposes. However, our tables also show that using GUITAR and the addition method led to significant improvements over our baseline LSA summarizer. The improvement in the Relative Utility measure was significant by t-test at 95% confidence. Using the ROUGE measure we obtained an improvement, but not a significant one. On the other hand, the substitution method did not lead to significant improvements, as was to be expected given that no improvement was obtained with 'perfect' anaphora resolution (see the previous section).",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 63,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results with GUITAR",
"sec_num": "4.4"
},
{
"text": "Our main result in this paper is to show that using anaphora resolution in summarization can lead to significant improvements, not only when 'perfect' anaphora information is available, but also when an automatic resolver is used, provided that the anaphoric resolver has reasonable performance. As far as we are aware, this is the first time that such a result has been obtained using standard evaluation measures over a reference corpus. We also showed, however, that the way in which anaphoric information is used matters: with our set of documents at least, substitution would not result in significant improvements even with perfect anaphoric knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Research",
"sec_num": "5"
},
{
"text": "Further work will include, in addition to extending the set of documents and testing the system with other collections, evaluating the improvement to be achieved by adding a proper noun resolution algorithm to GUITAR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Research",
"sec_num": "5"
},
{
"text": "The terms 'anaphora resolution' and 'coreference resolution' have been variously defined (Stuckardt, 2003), but the latter term is generally used to refer to the coreference task as defined in MUC and ACE. We use the term 'anaphora resolution' to refer to the task of identifying successive mentions of the same discourse entity, realized via any type of noun phrase (proper noun, definite description, or pronoun), and whether such discourse entities 'refer' to objects in the world or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It should be noted that for many newspaper articles, indeed many non-educational texts, only an 'entity-centered' structure can be clearly identified, as opposed to a 'relation-centered' structure of the type hypothesized in Rhetorical Structure Theory (Knott et al., 2001).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The goal of this project was to investigate to what extent Computer-Aided Summarization can help humans to produce high quality summaries with less effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We annotated personal pronouns, possessive pronouns, definite descriptions, and also proper nouns, which will be handled by a future version of GUITAR. We used the same summarization ratios as in CAST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using coreference chains for text summarization",
"authors": [
{
"first": "S",
"middle": [],
"last": "Azzam",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the ACL Workshop on Coreference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Azzam, K. Humphreys and R. Gaizauskas. 1999. Using coreference chains for text summarization. In Proceedings of the ACL Workshop on Coreference. Maryland.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dynamic coreference-based summarization",
"authors": [
{
"first": "B",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "T",
"middle": [
"S"
],
"last": "Morton",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Baldwin and T. S. Morton. 1998. Dynamic coreference-based summarization. In Proceedings of EMNLP. Granada, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using lexical chains for text summarization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Barzilay and M. Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization. Madrid, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using Knowledge-poor Coreference Resolution for Text Summarization",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bergler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Witte",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Khalife",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of DUC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bergler, R. Witte, M. Khalife, Z. Li, and F. Rudzicz. 2003. Using Knowledge-poor Coreference Resolution for Text Summarization. In Proceedings of DUC. Edmonton.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using Linear Algebra for Intelligent IR",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Berry",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "O'Brien",
"suffix": ""
}
],
"year": 1995,
"venue": "SIAM Review",
"volume": "37",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. W. Berry, S. T. Dumais and G. W. O'Brien. 1995. Using Linear Algebra for Intelligent IR. In SIAM Review, 37(4).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Salience-based content characterization of text documents",
"authors": [
{
"first": "B",
"middle": [],
"last": "Boguraev",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kennedy",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Automatic Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Boguraev and C. Kennedy. 1999. Salience-based content characterization of text documents. In I. Mani and M. T. Maybury (eds), Advances in Automatic Text Summarization, MIT Press. Cambridge, MA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL. Philadelphia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL. Philadelphia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Latent Semantic Analysis for Text Segmentation",
"authors": [
{
"first": "F",
"middle": [
"Y Y"
],
"last": "Choi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Wiemer-Hastings",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Y. Y. Choi, P. Wiemer-Hastings and J. D. Moore. 2001. Latent Semantic Analysis for Text Segmentation. In Proceedings of EMNLP. Pittsburgh.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generic Text Summarization Using Relevance Measure and Latent Semantic Analysis",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACM SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Gong and X. Liu. 2002. Generic Text Summarization Using Relevance Measure and Latent Semantic Analysis. In Proceedings of ACM SIGIR. New Orleans.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building better corpora for summarization",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hasler",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Hasler, C. Orasan and R. Mitkov. 2003. Building better corpora for summarization. In Proceedings of Corpus Linguistics. Lancaster, United Kingdom.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automated text summarization in SUMMARIST",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL/EACL Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy and C. Lin. 1997. Automated text summarization in SUMMARIST. In ACL/EACL Workshop on Intelligent Scalable Text Summarization. Madrid, Spain.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Task-Based Evaluation of Anaphora Resolution: The Case of Summarization",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Steinberger",
"suffix": ""
}
],
"year": 2005,
"venue": "RANLP Workshop \"Crossing Barriers in Text Summarization Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Kabadjov, M. Poesio and J. Steinberger. 2005. Task-Based Evaluation of Anaphora Resolution: The Case of Summarization. In RANLP Workshop \"Crossing Barriers in Text Summarization Research\". Borovets, Bulgaria.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Beyond elaboration: The interaction of relations and focus in coherent text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Oberlander",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "O'Donnell",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2001,
"venue": "Text representation: linguistic and psycholinguistic aspects. John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Knott, J. Oberlander, M. O'Donnell, and C. Mellish. 2001. Beyond elaboration: The interaction of relations and focus in coherent text. In Sanders, T., Schilperoord, J., and Spooren, W. (eds), Text representation: linguistic and psycholinguistic aspects. John Benjamins.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "In Psychological Review",
"volume": "104",
"issue": "",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. K. Landauer and S. T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. In Psychological Review, 104, 211-240.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Robust pronoun resolution with limited knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING. Montreal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of COLING. Montreal.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "MMAX: A Tool for the Annotation of Multi-modal Corpora",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Mueller and M. Strube. 2001. MMAX: A Tool for the Annotation of Multi-modal Corpora. In Proceedings of the IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems. Seattle.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CAST: a Computer-Aided Summarization Tool",
"authors": [
{
"first": "C",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hasler",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EACL. Budapest",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Orasan, R. Mitkov and L. Hasler. 2003. CAST: a Computer-Aided Summarization Tool. In Proceedings of EACL. Budapest, Hungary.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A General-Purpose, off-the-shelf Anaphora Resolution Module: Implementation and Preliminary Evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio and M. A. Kabadjov. 2004. A General-Purpose, off-the-shelf Anaphora Resolution Module: Implementation and Preliminary Evaluation. In Proceedings of LREC. Lisbon, Portugal.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Centering: A parametric theory and its instantiations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "B",
"middle": [
"Di"
],
"last": "Eugenio",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Hitzeman",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio, R. Stevenson, B. Di Eugenio, and J. M. Hitzeman. 2004. Centering: A parametric theory and its instantiations. Computational Linguistics, 30(3).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Do discourse-new detectors help definite description resolution?",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Goulart",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Uryupina",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IWCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio, M. A. Kabadjov, R. Vieira, R. Goulart, and O. Uryupina. 2005. Do discourse-new detectors help definite description resolution? In Proceedings of IWCS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Centroid-based summarization of multiple documents",
"authors": [],
"year": null,
"venue": "ANLP/NAACL Workshop on Automatic Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Centroid-based summarization of multiple documents. In ANLP/NAACL Workshop on Automatic Summarization. Seattle.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Text Summarization and Singular Value Decomposition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jezek",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ADVIS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Steinberger and K. Jezek. 2004. Text Summarization and Singular Value Decomposition. In Proceedings of ADVIS. Izmir, Turkey.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution",
"authors": [
{
"first": "R",
"middle": [],
"last": "Stuckardt",
"suffix": ""
}
],
"year": 2003,
"venue": "International Symposium on Reference Resolution",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Stuckardt. 2003. Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution. In International Symposium on Reference Resolution. Venice, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An empirically-based system for processing definite descriptions",
"authors": [
{
"first": "R",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Vieira and M. Poesio. 2000. An empirically-based system for processing definite descriptions. In Computational Linguistics, 26(4).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Addition method.",
"uris": null
},
"TABREF3": {
"html": null,
"text": "Evaluation of the manual annotation improvement -summarization ratio: 15%.",
"type_str": "table",
"content": "<table><tr><td>Evaluation</td><td>Lexical LSA</td><td>Manual</td><td>Manual</td></tr><tr><td>Method</td><td/><td colspan=\"2\">Substitution Addition</td></tr><tr><td>Relative Utility</td><td>0.645</td><td>0.662</td><td>0.688</td></tr><tr><td>F-score</td><td>0.557</td><td>0.549</td><td>0.583</td></tr><tr><td>Cosine Similarity</td><td>0.863</td><td>0.878</td><td>0.886</td></tr><tr><td>Main Topic Similarity</td><td>0.836</td><td>0.829</td><td>0.866</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"text": "Evaluation of the GUITAR improvement -summarization ratio: 15%.",
"type_str": "table",
"content": "<table><tr><td>Evaluation</td><td colspan=\"2\">Lexical LSA CAST</td><td>GUITAR</td><td>GUITAR</td></tr><tr><td>Method</td><td/><td/><td colspan=\"2\">Substitution Addition</td></tr><tr><td>Relative Utility</td><td>0.645</td><td>0.618</td><td>0.626</td><td>0.678</td></tr><tr><td>F-score</td><td>0.557</td><td>0.522</td><td>0.524</td><td>0.573</td></tr><tr><td>Cosine Similarity</td><td>0.863</td><td>0.855</td><td>0.873</td><td>0.879</td></tr><tr><td>Main Topic Similarity</td><td>0.836</td><td>0.810</td><td>0.818</td><td>0.868</td></tr></table>",
"num": null
}
}
}
}