{
"paper_id": "E17-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:50:30.316201Z"
},
"title": "Enumeration of Extractive Oracle Summaries",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Corporation",
"location": {
"addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "hirao.tsutomu@lab.ntt.co.jp"
},
{
"first": "Masaaki",
"middle": [],
"last": "Nishino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Corporation",
"location": {
"addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "nishino.masaaki@lab.ntt.co.jp"
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Corporation",
"location": {
"addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "suzuki.jun@lab.ntt.co.jp"
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Corporation",
"location": {
"addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "nagata.masaaki@lab.ntt.co.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",
"pdf_parse": {
"paper_id": "E17-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, compressive and abstractive summarization are attracting attention (e.g., Almeida and Martins (2013) , Qian and Liu (2013) , Yao et al. (2015) , Banerjee et al. (2015) , Bing et al. (2015) ). However, extractive summarization remains a primary research topic because the linguistic quality of the resultant summaries is guaranteed, at least at the sentence level, which is a key requirement for practical use (e.g., , Hong et al. (2015) , Yogatama et al. (2015) , Parveen et al. (2015) ).",
"cite_spans": [
{
"start": 84,
"end": 110,
"text": "Almeida and Martins (2013)",
"ref_id": "BIBREF0"
},
{
"start": 113,
"end": 132,
"text": "Qian and Liu (2013)",
"ref_id": "BIBREF26"
},
{
"start": 135,
"end": 152,
"text": "Yao et al. (2015)",
"ref_id": "BIBREF30"
},
{
"start": 155,
"end": 177,
"text": "Banerjee et al. (2015)",
"ref_id": "BIBREF1"
},
{
"start": 180,
"end": 198,
"text": "Bing et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 428,
"end": 446,
"text": "Hong et al. (2015)",
"ref_id": "BIBREF13"
},
{
"start": 449,
"end": 471,
"text": "Yogatama et al. (2015)",
"ref_id": "BIBREF31"
},
{
"start": 474,
"end": 495,
"text": "Parveen et al. (2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The summarization research community is experiencing a paradigm shift from extractive to compressive or abstractive summarization. Currently our question is: \"Is extractive summariza-tion still useful research?\" To answer it, the ultimate limitations of the extractive summarization paradigm must be comprehended; that is, we have to determine its upper bound and compare it with the performance of the state-of-the-art summarization methods. Since ROUGE n is the de-facto automatic evaluation method and is employed in many text summarization studies, an oracle summary is defined as a set of sentences that have a maximum ROUGE n score. If the ROUGE n score of an oracle summary outperforms that of a system that employs another summarization approach, the extractive summarization paradigm is worthwhile to leverage research resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As another benefit, identifying an oracle summary for a set of reference summaries allows us to utilize yet another evaluation measure. Since both oracle and extractive summaries are sets of sentences, it is easy to check whether a system summary contains sentences in the oracle summary. As a result, F-measures, which are available to evaluate a system summary, are useful for evaluating classification-based extractive summarization (Mani and Bloedorn, 1998; Osborne, 2002; Hirao et al., 2002) . Since ROUGE n evaluation does not identify which sentence is important, an Fmeasure conveys useful information in terms of \"important sentence extraction.\" Thus, combining ROUGE n and an F-measure allows us to scrutinize the failure analysis of systems.",
"cite_spans": [
{
"start": 436,
"end": 461,
"text": "(Mani and Bloedorn, 1998;",
"ref_id": "BIBREF19"
},
{
"start": 462,
"end": 476,
"text": "Osborne, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 477,
"end": 496,
"text": "Hirao et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that more than one oracle summary might exist for a set of reference summaries because ROUGE n scores are based on the unweighted counting of n-grams. As a result, an F-measure might not be identical among multiple oracle summaries. Thus, we need to enumerate the oracle summaries for a set of reference summaries and compute the F-measures based on them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we first derive an Integer Linear Programming (ILP) problem to extract an oracle summary from a set of reference summaries and a source document(s). To the best of our knowledge, this is the first ILP formulation that extracts oracle summaries. Second, since it is difficult to enumerate oracle summaries for a set of reference summaries using ILP solvers, we propose an algorithm that efficiently enumerates all oracle summaries by exploiting the branch and bound technique. Our experimental results on the Document Understanding Conference (DUC) corpora showed the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Room still exists for the further improvement of extractive summarization, i.e., where the ROUGE n scores of the oracle summaries are significantly higher than those of the state-ofthe-art summarization systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. The F-measures derived from multiple oracle summaries obtain significantly stronger correlations with human judgment than those derived from single oracle summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first briefly describe ROUGE n . Given set of reference summaries R and system summary S, ROUGE n is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ROUGE n (R, S) = |R| k=1 |U (R k )| j=1 min{N (g n j , R k ), N (g n j , S)} |R| k=1 |U (R k )| j=1 N (g n j , R k ) .",
"eq_num": "(1)"
}
],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "R k denotes the multiple set of n-grams that occur in k-th reference summary R k , and S denotes the multiple set of n-grams that appear in system-generated summary S (a set of sentences). N (g n j , R k ) and N (g n j , S) return the number of occurrences of n-gram g n j in the k-th reference and system summaries, respectively. Function U (\u2022) transforms a multiple set into a normal set. ROUGE n takes values in the range of [0, 1], and when the n-gram occurrences of the system summary agree with those of the reference summary, the value is 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "In this paper, we focus on extractive summarization, employ ROUGE n as an evaluation measure, and define the oracle summaries as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "O = arg max S\u2286D ROUGE n (R, S) s.t. (S) \u2264 L max .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "(2) D is the set of all the sentences contained in the input document(s), and L max is the length limitation of the oracle summary. (S) indicates the number of words in the system summary. Eq. 2is an NP-hard combinatorial optimization problem, and no polynomial time algorithms exist that can attain an optimal solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "3 Related Work Lin and Hovy (2003) utilized a naive exhaustive search method to obtain oracle summaries in terms of ROUGE n and exploited them to understand the limitations of extractive summarization systems. Ceylan et al. 2010proposed another naive exhaustive search method to derive a probability density function from the ROUGE n scores of oracle summaries for the domains to which source documents belong. The computational complexity of naive exhaustive methods is exponential to the size of the sentence set. Thus, it may be possible to apply them to single document summarization tasks involving a dozen sentences, but it is infeasible to apply them to multiple document summarization tasks that involve several hundred sentences.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Lin and Hovy (2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "To describe the difference between the ROUGE n scores of oracle and system summaries in multiple document summarization tasks, Riedhammer et al. (2008) proposed an approximate algorithm with a genetic algorithm (GA) to find oracle summaries. Moen et al. (2014) utilized a greedy algorithm for the same purpose. Although GA or greedy algorithms are widely used to solve NP-hard combinatorial optimization problems, the solutions are not always optimal. Thus, the summary does not always have a maximum ROUGE n score for the set of reference summaries. Both works called the summary found by their methods the oracle, but it differs from the definition in our paper.",
"cite_spans": [
{
"start": 127,
"end": 151,
"text": "Riedhammer et al. (2008)",
"ref_id": "BIBREF27"
},
{
"start": 242,
"end": 260,
"text": "Moen et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "Since summarization systems cannot reproduce human-made reference summaries in most cases, oracle summaries, which can be reproduced by summarization systems, have been used as training data to tune the parameters of summarization systems. For example, Kulesza and Tasker (2011) and Sipos et al. (2012) trained their summarizers with oracle summaries found by a greedy algorithm. Peyrard and Eckle-Kohler (2016) proposed a method to find a summary that approximates a ROUGE score based on the ROUGE scores of individual sentences and exploited the framework to train their summarizer. As mentioned above, such summaries do not always agree with the oracle summaries defined in our paper. Thus, the quality of the training data is suspect. Moreover, since these studies fail to consider that a set of reference summaries has multiple oracle summaries, the score of the loss function defined between their oracle and system summaries is not appropriate in most cases.",
"cite_spans": [
{
"start": 253,
"end": 278,
"text": "Kulesza and Tasker (2011)",
"ref_id": "BIBREF15"
},
{
"start": 283,
"end": 302,
"text": "Sipos et al. (2012)",
"ref_id": "BIBREF28"
},
{
"start": 380,
"end": 411,
"text": "Peyrard and Eckle-Kohler (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "As mentioned above, no known efficient algorithm can extract \"exact\" oracle summaries, as defined in Eq. (2), i.e., because only a naive exhaustive search is available. Thus, such approximate algorithms as a greedy algorithm are mainly employed to obtain them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Extractive Oracle Summaries",
"sec_num": "2"
},
{
"text": "To extract an oracle summary from document(s) and a given set of reference summaries, we start by deriving an Integer Linear Programming (ILP) problem. Since the denominator of Eq. (1) is constant for a given set of reference summaries, we can find an oracle summary by maximizing the numerator of Eq. (1). Thus, the ILP formulation is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "maximize z |R| k=1 |U (R k )| j=1 z kj (3) s.t. |D| i=1 (s i )x i \u2264 L max (4) \u2200j : |D| i=1 N (g n j , s i )x i \u2265 z kj (5) \u2200j : N (g n j , R k ) \u2265 z kj (6) \u2200i : x i \u2208 {0, 1}",
"eq_num": "(7)"
}
],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "\u2200j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z kj \u2208 Z + .",
"eq_num": "(8)"
}
],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "Here, z kj is the count of the j-th n-gram of the k-th reference summary in the oracle summary, i.e., z kj = min{N (g n j , R k ), N (g n j , S)}. (\u2022) returns the number of words in the sentence, x i is a binary indicator, and x i = 1 denotes that the i-th sentence s i is included in Root Figure 1 : Example of a search tree the oracle summary. N (g n j , s i ) returns the number of occurrences of n-gram g n j in the i-th sentence. Constraints 5and (6) ensure",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "that z kj = min{N (g n j , R k ), N (g n j , S)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Summary Extraction as an Integer Linear Programming (ILP) Problem",
"sec_num": "4"
},
{
"text": "Since enumerating oracle summaries with an ILP solver is difficult, we extend the exhaustive search approach by introducing a search and prune technique to enumerate the oracle summaries. The search pruning decision is made by comparing the current upper bound of the ROUGE n score with the maximum ROUGE n score in the search history.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Branch and Bound Technique for Enumerating Oracle Summaries",
"sec_num": "5"
},
{
"text": "The enumeration of oracle summaries can be regarded as a depth-first search on a tree whose nodes represent sentences. Fig. 1 shows an example of a search tree created in a naive exhaustive search. The nodes represent sentences and the path from the root node to an arbitrary node represents a summary. For example, the red path in Fig. 1 from the root node to node s 2 represents a summary consisting of sentences s 1 , s 2 . By utilizing the tree, we can enumerate oracle summaries by exploiting depth-first searches while excluding the summaries that violate length constraints. However, this naive exhaustive search approach is impractical for large data sets because the number of nodes inside the tree is 2 |D| . If we prune the unwarranted subtrees in each step of the depth-first search, we can make the search more efficient. The decision to search or prune is made by comparing the current upper bound of the ROUGE n score with the maximum ROUGE n score in the search history. For instance, in Fig. 1 , we reach node s 2 by following this path: \"Root \u2192 s 1 , \u2192 s 2 \". If we estimate the maximum ROUGE n score (upper bound) obtained by searching for the descendant of s 2 (the subtree in the blue rectangle), we can decide whether the depthfirst search should be continued. When the upper bound of the ROUGE n score exceeds the current maximum ROUGE n in the search history, we have to continue. When the upper bound is smaller than the current maximum ROUGE n score, no summary is optimal that contains s 1 , s 2 , so we can skip subsequent search activity on the subtree and proceed to check the next branch: \"Root \u2192 s 1 \u2192 s 3 \".",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 125,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 332,
"end": 339,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 1005,
"end": 1011,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "To estimate the upper bound of the ROUGE n score, we re-define it for two distinct sets of sentences, V and W , i.e., V \u2229 W = \u03c6, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ROUGE n (R, V \u222aW ) = ROUGE n (R, V ) + ROUGE n (R, V, W ).",
"eq_num": "(9)"
}
],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "Here ROUGE n is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ROUGE n (R, V, W ) = |R| k=1 tn\u2208U (R k ) min{N (t n , R k \\ V), N (t n , W)} |R| k=1 tn\u2208U (R k )) N (t n , R k ) .",
"eq_num": "(10)"
}
],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "V, W are the multiple sets of n-grams found in the sets of sentences V and W , respectively. Theorem 1. Eq. (9) is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "We omit the proof of Theorem 1 due to space limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ROUGE n Score for Two Distinct Sets of Sentences",
"sec_num": "5.1"
},
{
"text": "Let V be the set of sentences on the path from the current node to the root node in the search tree, and let W be the set of sentences that are the descendants of the current node. In Fig. 1 , V ={s 1 , s 2 } and W ={s 3 , s 4 , s 5 , s 6 }. According to Theorem 1, the upper bound of the ROUGE n score is defined as:",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 190,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\overline{\\mathrm{ROUGE}}_{n}(R, V) = \\mathrm{ROUGE}_{n}(R, V) + \\max_{\\Omega \\subseteq W} \\{\\mathrm{ROUGE}_{n}(R, V, \\Omega) : \\ell(\\Omega) \\le L_{\\max} - \\ell(V)\\}.",
"eq_num": "(11)"
}
],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "Algorithm 1 Algorithm to Find Upper Bound of ROUGE n\n1: Function: ROUGE\u0305 n (R, V )\n2: W \u2190 descendant(last(V )), W' \u2190 \u03c6\n3: U \u2190 ROUGEn(R, V )\n4: for each w \u2208 W do\n5: append(W', \u27e8ROUGEn(R, V, {w}) / \u2113(w), w\u27e9)\n6: end for\n7: sort(W', 'descend')\n8: for each w \u2208 W' do\n9: if Lmax \u2212 \u2113({w}) \u2265 0 then\n10: U \u2190 U + ROUGEn(R, V, {w})\n11: Lmax \u2190 Lmax \u2212 \u2113({w})\n12: else\n13: U \u2190 U + ROUGEn(R, V, {w}) / \u2113({w}) \u00d7 Lmax; break\n14: end if\n15: end for\n16: return U\n17: end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "Since the second term on the right side of Eq. (11) is an NP-hard problem, we turn to the following relation by introducing an inequality,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathrm{ROUGE}_{n}(R, V, \\Omega) \\le \\sum_{\\omega \\in \\Omega} \\mathrm{ROUGE}_{n}(R, V, \\{\\omega\\}), \\qquad \\max_{\\Omega \\subseteq W} \\{\\mathrm{ROUGE}_{n}(R, V, \\Omega) : \\ell(\\Omega) \\le L_{\\max} - \\ell(V)\\} \\le \\max_{x} \\{\\sum_{i=1}^{|W|} \\mathrm{ROUGE}_{n}(R, V, \\{w_{i}\\}) x_{i} : \\sum_{i=1}^{|W|} \\ell(\\{w_{i}\\}) x_{i} \\le L_{\\max} - \\ell(V)\\}.",
"eq_num": "(12)"
}
],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "Here, x = (x 1 , . . . , x |W | ) and x i \u2208 {0, 1}. The right side of Eq. (12) is a knapsack problem, i.e., a 0-1 ILP problem. Although we can obtain the optimal solution for it using dynamic programming or ILP solvers, we solve its linear programming relaxation by applying a greedy algorithm for greater computational efficiency. The solution output by the greedy algorithm is optimal for the relaxed problem. Since the optimal solution of the relaxed problem is always at least as large as that of the original problem, the relaxed solution can be utilized as the upper bound. Algorithm 1 shows the pseudocode that attains the upper bound of ROUGE n . In the algorithm, U indicates the upper bound score of ROUGE n . We first set the initial score of upper bound U to ROUGE n (R, V ) (line 3). Then we compute the density of the ROUGE n score, ROUGE n (R, V, {w}) / \u2113(w), for each sentence w in W and sort the sentences in descending order of density (lines 4 to 7). When we have room to add w to the summary, we update U by adding ROUGE n (R, V, {w}) (line 10) and update the length constraint L max (line 11). When we do not have room to add w, we update U by adding the score obtained by multiplying the density of w by the remaining length L max (line 13), and exit the loop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "Algorithm 2 Greedy algorithm to obtain initial score\n1: Function: GREEDY(R, D, Lmax)\n2: L \u2190 0, S \u2190 \u03c6, E \u2190 D\n3: while E \u2260 \u03c6 do\n4: s* \u2190 arg max_{s \u2208 E} (ROUGEn(R, S \u222a {s}) \u2212 ROUGEn(R, S)) / \u2113({s})\n5: L \u2190 L + \u2113({s*})\n6: if L \u2264 Lmax then\n7: S \u2190 S \u222a {s*}\n8: end if\n9: E \u2190 E \\ {s*}\n10: end while\n11: i* \u2190 arg max_{i \u2208 D, \u2113({i}) \u2264 Lmax} ROUGEn(R, {i})\n12: S* \u2190 arg max_{K \u2208 {{i*}, S}} ROUGEn(R, K)\n13: return ROUGEn(R, S*)\n14: end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Upper Bound of ROUGE n",
"sec_num": "5.2"
},
{
"text": "Since the branch and bound technique prunes the search by comparing the best solution found so far with the upper bounds, obtaining a good solution in the early stage is critical for raising search efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Score for Search",
"sec_num": "5.3"
},
{
"text": "Since ROUGE n is a monotone submodular function (Lin and Bilmes, 2011), we can obtain a good approximate solution by a greedy algorithm (Khuller et al., 1999) . It is guaranteed that the score of the obtained approximate solution is larger than 1 2 (1 \u2212 1 e )OPT, where OPT is the score of the optimal solution. We employ the solution as the initial ROUGE n score of the candidate oracle summary.",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Khuller et al., 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Score for Search",
"sec_num": "5.3"
},
{
"text": "Algorithm 2 shows the greedy algorithm. In it, S denotes a summary and D denotes a set of sentences. The algorithm iteratively adds sentence s * that yields the largest gain in the ROUGE n score to current summary S, provided the length of the summary does not violate length constraint L max (line 4). After the while loop, the algorithm compares the ROUGE n score of S with the maximum ROUGE n score of the single sentence and outputs the larger of the two scores (lines 11 to 13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Score for Search",
"sec_num": "5.3"
},
{
"text": "By introducing threshold \u03c4 as the best ROUGE n score in the search history, pruning decisions involve the following three conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enumeration of Oracle summaries",
"sec_num": "5.4"
},
{
"text": "1. ROUGE n (R, V ) \u2265 \u03c4 ;\n2. ROUGE n (R, V ) < \u03c4 and ROUGE\u0305 n (R, V ) < \u03c4 ;\n3. ROUGE n (R, V ) < \u03c4 and ROUGE\u0305 n (R, V ) \u2265 \u03c4 .\nHere, ROUGE\u0305 n (R, V ) denotes the upper bound of the ROUGE n score computed by Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enumeration of Oracle summaries",
"sec_num": "5.4"
},
{
"text": "Algorithm 3 Branch and bound technique to enumerate oracle summaries\n1: Read R, D, Lmax\n2: \u03c4 \u2190 GREEDY(R, D, Lmax), O\u03c4 \u2190 \u03c6\n3: for each s \u2208 D do\n4: append(S, \u27e8ROUGEn(R, {s}), s\u27e9)\n5: end for\n6: sort(S, 'descend')\n7: call FINDORACLE(S, C)\n8: output O\u03c4\n9: Procedure: FINDORACLE(Q, V )\n10: while Q \u2260 \u03c6 do\n11: s \u2190 shift(Q)\n12: append(V, s)\n13: if Lmax \u2212 \u2113(V ) \u2265 0 then\n14: if ROUGEn(R, V ) \u2265 \u03c4 then\n15: \u03c4 \u2190 ROUGEn(R, V )\n16: append(O\u03c4 , V )\n17: call FINDORACLE(Q, V )\n18: else if ROUGE\u0305n(R, V ) \u2265 \u03c4 then\n19: call FINDORACLE(Q, V )\n20: end if\n21: end if\n22: pop(V )\n23: end while\n24: end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enumeration of Oracle summaries",
"sec_num": "5.4"
},
{
"text": "With case 1, we update the oracle summary as V and continue the search. With case 2, because both ROUGE n (R, V ) and ROUGE n (R, V ) are smaller than \u03c4 , the subtree whose root node is the current node (last visited node) is pruned from the search space, and we continue the depthfirst search from the neighbor node. With case 3, we do not update oracle summary as V because ROUGE n (R, V ) is less than \u03c4 . However, we might obtain a better oracle summary by continuing the depth-first search because the upper bound of the ROUGE n score exceeds \u03c4 . Thus, we continue to search for the descendants of the current node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enumeration of Oracle summaries",
"sec_num": "5.4"
},
{
"text": "Algorithm 3 shows the pseudocode that enumerates the oracle summaries. The algorithm reads a set of reference summaries R, length limitation L max , and set of sentences D (line 1) and initializes threshold \u03c4 as the ROUGE n score obtained by the greedy algorithm (Algorithm 2). It also initializes O \u03c4 , which stores oracle summaries whose ROUGE n scores are \u03c4 , and priority queue C, which stores the history of the depth-first search (line 2). Next, the algorithm computes the ROUGE n score for each sentence and stores S after sorting them in descending order. After that, we start a depth-first search by recursively call- ing procedure FINDORACLE. In the procedure, we extract the top sentence from priority queue Q and append it to priority queue V (lines 11 to 12). When the length of V is less than L max , if ROUGE n (R, V ) is larger than threshold \u03c4 (case 1), we update \u03c4 as the score and append current V to O \u03c4 . Then we continue the depth-first search by calling the procedure the FINDORACLE (lines 15 to 17). If ROUGE n (R, V ) is larger than \u03c4 (case 3), we do not update \u03c4 and O \u03c4 but reenter the depthfirst search by calling the procedure again (lines 18 to 19). If neither case 1 nor case 3 is true, we delete the last visited sentence from V and return to the top of the recurrence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enumeration of Oracle summaries",
"sec_num": "5.4"
},
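The pruned depth-first search described above (cases 1 to 3, seeded with the greedy algorithm's score as the initial threshold) can be sketched as follows. This is an illustrative reimplementation, not the authors' code: `rouge_n` is a plain recall-style n-gram overlap against a single reference, sentences are assumed pre-sorted by their individual ROUGE n scores, and `upper_bound` optimistically pretends every remaining candidate sentence fits within the length limit (all names are ours).

```python
from collections import Counter

def ngrams(tokens, n):
    """n-gram multiset of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(ref_counts, summary_sents, n):
    """Recall-style ROUGE-n of a candidate extract against one reference."""
    cand = Counter()
    for s in summary_sents:
        cand.update(ngrams(s, n))
    total = sum(ref_counts.values())
    return sum(min(c, cand[g]) for g, c in ref_counts.items()) / total if total else 0.0

def enumerate_oracles(sents, ref_counts, n, max_len, tau):
    """Enumerate all length-feasible extracts attaining the best ROUGE-n score.
    sents: token lists, pre-sorted by individual ROUGE-n (descending);
    tau: initial threshold, e.g. the greedy algorithm's score."""
    oracles = []

    def upper_bound(chosen, start):
        # Optimistic bound: pretend all remaining sentences fit in the summary.
        return rouge_n(ref_counts, chosen + sents[start:], n)

    def dfs(chosen, length, start):
        nonlocal tau, oracles
        for i in range(start, len(sents)):
            s = sents[i]
            if length + len(s) > max_len:
                continue  # sentence does not fit; try the next one
            cand = chosen + [s]
            score = rouge_n(ref_counts, cand, n)
            if score > tau:                          # case 1: new best score
                tau, oracles = score, [cand]
                dfs(cand, length + len(s), i + 1)
            elif score == tau:                       # tie: another oracle summary
                oracles.append(cand)
                dfs(cand, length + len(s), i + 1)
            elif upper_bound(cand, i + 1) >= tau:    # case 3: may still improve
                dfs(cand, length + len(s), i + 1)
            # case 2: score and upper bound both below tau -> prune this subtree

    dfs([], 0, 0)
    return tau, oracles
```

Pruning on the optimistic upper bound is what keeps the enumeration far below the number of feasible solutions, as the search-efficiency comparison later shows.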
{
"text": "We conducted experiments on the corpora developed for a multiple document summarization task in DUC 2001 to 2007. Table 1 show the statistics of the data. In particular, the DUC-2005 to -2007 data sets not only have very large numbers of sentences and words but also a long target length (the reference summary length) of 250 words. All the words in the documents were stemmed by Porter's stemmer (Porter, 1980) . We computed ROUGE 1 scores, excluding stopwords, and computed ROUGE 2 scores, keeping them. Owczarzak et al. (2012) suggested using ROUGE 1 and keeping stopwords. However, as Takamura et al. argued (Takamura and Okumura, 2009) , the summaries optimized with non-content words failed to consider the actual quality. Thus, we excluded stopwords for computing the ROUGE 1 scores.",
"cite_spans": [
{
"start": 397,
"end": 411,
"text": "(Porter, 1980)",
"ref_id": "BIBREF25"
},
{
"start": 612,
"end": 640,
"text": "(Takamura and Okumura, 2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
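The stopword choice above directly changes what ROUGE 1 measures. A minimal sketch of recall-style ROUGE 1 with optional stopword removal follows; the toy stopword list and the lowercasing are our simplifications, and the Porter stemming the paper applies is omitted here.

```python
from collections import Counter

# Toy stopword list for illustration; the paper uses a standard list plus
# Porter stemming (Porter, 1980), both omitted in this sketch.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to"}

def preprocess(tokens, drop_stopwords):
    tokens = [t.lower() for t in tokens]
    return [t for t in tokens if t not in STOPWORDS] if drop_stopwords else tokens

def rouge_1_recall(ref, cand, drop_stopwords=True):
    """Unigram recall of cand against ref, optionally excluding stopwords."""
    r = Counter(preprocess(ref, drop_stopwords))
    c = Counter(preprocess(cand, drop_stopwords))
    total = sum(r.values())
    return sum(min(k, c[w]) for w, k in r.items()) / total if total else 0.0
```

With stopwords excluded, a candidate that only repeats function words scores zero, which is exactly the failure mode Takamura and Okumura warn about.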
{
"text": "We enumerated the following two types of oracle summaries: those for a set of references for a given topic and those for each reference in the set of references. Table 2 shows the average ROUGE 1,2 scores of the oracle summaries obtained from both a set of references and each reference in the set (\"multi\" and \"single\"), those of the best conventional system (Peer), and those obtained from summaries produced by a greedy algorithm (Algorithm 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "Oracle (single) obtained better ROUGE 1,2 scores than Oracle (multi). The results imply that it is easier to optimize a reference summary than a set of reference summaries. On the other hand, the ROUGE 1,2 scores of these oracle summaries are significantly higher than those of the best systems. The best systems obtained ROUGE 1 scores from 60% to 70% in \"multi\" and from 50% to 60% in \"single\" as well as ROUGE 2 scores from 40% to 55% in \"multi\" and from 30% to 40% in \"single\" for their oracle summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "Since the systems in Table 2 were developed over many years, we compared the ROUGE n scores of the oracle summaries with those of the current state-of-the-art systems using the DUC-2004 corpus and obtained summaries generated by different systems from a public repository 1 . The repository includes summaries produced by the following seven state-of-the-art summarization systems: CLASSY04 (Conroy et al., 2004) , CLASSY11 (Conroy et al., 2011) , Submodular (Lin and Bilmes, 2012) , DPP (Kulesza and Tasker, 2011) , RegSum , OCCAMS V (Davie et al., 2012; Conroy et al., 2013) , and ICSISumm . Table 3 shows the results.",
"cite_spans": [
{
"start": 391,
"end": 412,
"text": "(Conroy et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 424,
"end": 445,
"text": "(Conroy et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 459,
"end": 481,
"text": "(Lin and Bilmes, 2012)",
"ref_id": "BIBREF17"
},
{
"start": 488,
"end": 514,
"text": "(Kulesza and Tasker, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 526,
"end": 555,
"text": "OCCAMS V (Davie et al., 2012;",
"ref_id": null
},
{
"start": 556,
"end": 576,
"text": "Conroy et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 594,
"end": 601,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "Based on the results, RegSum achieved the best ROUGE 1 =0.331 result, while ICSISumm ) (a compressive summarizer) achieved the best result with ROUGE 2 =0.098. These systems outperformed the best systems (Peers 65 and 67 in Table 2 ), but the differences in the ROUGE n scores between the systems and the oracle summaries are still large. More recently, Hong et al. (2015) 400 .164 .452 .186 .434 .185 .427 .162 .445 .177 .491 .211 .506 .236 Oracle (single) . 500 .226 .515 .225 .525 .258 .519 .228 .574 .279 .607 .303 .622 .330 Greedy .387 .161 .438 .184 .424 .182 .412 .157 .430 .173 .473 .206 .495 .234 Peer .251 .080 .269 .080 .295 .094 .305 .092 .262 .073 .305 .095 .363 .0980 Table 3 : ROUGE 1,2 scores for state-of-the-art summarization systems on DUC-2004 corpus their summaries. In short, the ROUGE n scores of the oracle summaries are significantly higher than those of the current state-of-the-art summarization systems, both extractive and compressive summarization. These results imply that further improvement of the performance of extractive summarization is possible.",
"cite_spans": [
{
"start": 354,
"end": 372,
"text": "Hong et al. (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 682,
"end": 689,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "On the other hand, the ROUGE n scores of the oracle summaries are far from ROUGE n = 1. We believe that the results are related to the summary's compression rate. The data set's compression rate was only 1 to 2%. Thus, under tight length constraints, extractive summarization basically fails to cover large numbers of n-grams in the reference summary. This reveals the limitation of the extractive summarization paradigm and suggests that we need another direction, compressive or abstractive summarization, to overcome the limitation. Table 4 : Jaccard Index between both oracle and greedy summaries scores of the oracle summaries and greedy summaries, those obtained from the greedy summaries achieved near optimal scores, i.e., approximation ratio of them are close to 0.9. These results are surprising since the algorithm's theoretical lower bound is 1 2 (1 \u2212 1 e )( 0.32)OPT. On the other hand, the results do not support that the differences between them are small at the sentence-level. Table 4 shows the average Jaccard Index between the oracle summaries and the corresponding greedy summaries for the DUC-2004 corpus. The results demonstrate that the oracle summaries are much less similar to the greedy summaries at the sentence-level. Thus, it might not be appropriate to use greedy summaries as training data for learning-based extractive summarization systems. Table 5 shows the median number of oracle summaries and the rates of the reference summaries that have multiple oracle summaries for each data set. Over 80% of the reference summaries and about 60% to 90% of the topics have multiple oracle summaries. Since the ROUGE n scores are based on the unweighted counting of n-grams, when many sentences have similar meanings, i.e., many redundant sentences, the number of oracle summaries that have the same ROUGE n scores increases. 
The source documents of multiple document summarization tasks are prone to have many such redundant sentences, and the amount of oracle summaries is large. Table 5 : Median number of oracle summaries and rates of reference summaries and topics with multiple oracle summaries for each data set",
"cite_spans": [],
"ref_spans": [
{
"start": 536,
"end": 543,
"text": "Table 4",
"ref_id": null
},
{
"start": 994,
"end": 1001,
"text": "Table 4",
"ref_id": null
},
{
"start": 1374,
"end": 1381,
"text": "Table 5",
"ref_id": null
},
{
"start": 2006,
"end": 2013,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
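The sentence-level comparison reported in Table 4 uses the Jaccard Index between two extracts. A minimal sketch (the function name is ours), with extracts represented by their sentence indices:

```python
def jaccard(a, b):
    """Jaccard Index between two extracts given as collections of sentence ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0
```

For example, extracts {s1, s2, s5, s6} and {s1, s2, s3} share two of five distinct sentences, giving an index of 0.4.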
{
"text": "The oracle summaries offer significant benefit with respect to evaluating the extracted sentences. Since both the oracle and system summaries are sets of sentences, it is easy to check whether each sentence in the system summary is contained in one of the oracle summaries. Thus, we can exploit the F-measures, which are useful for evaluating classification-based extractive summarization (Mani and Bloedorn, 1998; Osborne, 2002; Hirao et al., 2002) . Here, we have to consider that the oracle summaries, obtained from a reference summary or a set of reference summaries, are not identical at the sentence-level (e.g., the average Jaccard Index between the oracle summaries for the DUC-2004 corpus is around 0.5). The F-measures are varied with the oracle summaries that are used for such computation. For example, assume that we have system summary S={s 1 , s 2 , s 3 , s 4 } and oracle summaries O 1 ={s 1 , s 2 , s 5 , s 6 } and O 2 ={s 1 , s 2 , s 3 }. The precision for O 1 is 0.5, while that for O 2 is 0.75; the recall for O 1 is 0.5, while that for O 2 is 1; the F-measure for O 1 is 0.5, while that for O 2 is 0.86.",
"cite_spans": [
{
"start": 389,
"end": 414,
"text": "(Mani and Bloedorn, 1998;",
"ref_id": "BIBREF19"
},
{
"start": 415,
"end": 429,
"text": "Osborne, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 430,
"end": 449,
"text": "Hirao et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Enumeration",
"sec_num": "6.2.3"
},
{
"text": "Thus, we employ the scores gained by averaging all of the oracle summaries as evaluation measures. Precision, recall, and F-measure are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Enumeration",
"sec_num": "6.2.3"
},
{
"text": "P ={ O\u2208O all |O \u2229 S|/|S|}/|O all |, R={ O\u2208O all |O \u2229 S|/|O|}/|O all |, F-measure=2P R/(P + R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Enumeration",
"sec_num": "6.2.3"
},
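These definitions can be checked against the earlier example (S={s1, s2, s3, s4}, O1={s1, s2, s5, s6}, O2={s1, s2, s3}). The sketch below averages precision and recall over the enumerated oracles and then combines them into F; it is an illustration of the formulas, not the authors' code, and represents extracts by sentence ids.

```python
def averaged_prf(system, oracles):
    """Precision/recall averaged over all enumerated oracle summaries,
    combined into an F-measure (extracts given as sentence ids)."""
    s = set(system)
    p = sum(len(set(o) & s) / len(s) for o in oracles) / len(oracles)
    r = sum(len(set(o) & s) / len(o) for o in oracles) / len(oracles)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

On the example above this yields P = 0.625 and R = 0.75 (the averages of 0.5/0.75 and 0.5/1), so F \u2248 0.682; note this averages P and R first and then combines them, rather than averaging the per-oracle F-measures.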
{
"text": "To demonstrate F-measure's effectiveness, we investigated the correlation between an F-measure and human judgment based on the evaluation results obtained from the DUC-2004 corpus. The results include summaries generated by 17 systems, each of which has a mean coverage score assigned by a human subject. We computed the correla-tion coefficients between the average F-measure and the average mean coverage score for 50 topics. Table 6 shows Pearson's r and Spearman's \u03c1. In the table, \"F-measure (R 1 )\" and \"F-measure (R 2 )\" indicate the F-measures calculated using oracle summaries optimized to ROUGE 1 and ROUGE 2 , respectively. \"M\" indicates the F-measure calculated using multiple oracle summaries, and \"S\" indicates F-measures calculated using randomly selected oracle summaries. \"multi\" indicates oracle summaries obtained from a set of references, and \"single\" indicates oracle summaries obtained from a reference summary in the set. For \"S,\" we randomly selected a single oracle summary and calculated the F-measure 100 times and took the average value with the 95% confidence interval of the F-measures by bootstrap resampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of Enumeration",
"sec_num": "6.2.3"
},
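The two correlation coefficients reported in Table 6 can be computed as below. This is a minimal reimplementation for illustration (no tie correction in the Spearman ranks), not necessarily the computation the authors used.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r over the ranks.
    Ties are broken arbitrarily here for simplicity."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))
```

Pearson's r is sensitive to the magnitudes of the scores, while Spearman's \u03c1 only reflects how well the metric ranks the 17 systems, which is why both are reported.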
{
"text": "The results demonstrate that the F-measures are strongly correlated with human judgment. Their values are comparable with those of ROUGE 1,2 . In particular, F-measure (R 1 ) (single-M) achieved the best Spearman's \u03c1 result. When comparing \"single\" with \"multi,\" Pearson's r of \"multi\" was slightly lower than that of \"single,\" and the Spearman's r of \"multi\" was almost the same as those of \"single.\" \"M\" has significantly better performance than \"S.\" These results imply that F-measures based on oracle summaries are a good evaluation measure and that oracle summaries have the potential to be an alternative to human-made reference summaries in terms of automatic evaluation. Moreover, the enumeration of the oracle summaries for a given reference summary or a set of reference summaries is essential for automatic evaluation. Table 6 : Correlation coefficients between automatic evaluations and human judgments on DUC-2004 corpus",
"cite_spans": [],
"ref_spans": [
{
"start": 830,
"end": 837,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of Enumeration",
"sec_num": "6.2.3"
},
{
"text": "To demonstrate the efficiency of our search algorithm against the naive exhaustive search method, we compared the number of feasible solutions (sets of sentences that satisfy the length constraint) with the number of summaries that were checked in our search algorithm. Table 7 shows the median number of feasible solutions and checked summaries yielded by our method for each data set (in the case of \"single\"). The differences in the number of feasible solutions between ROUGE 1 and ROUGE 2 are very large. Input set (|D|) of ROUGE 1 is much larger than ROUGE 1 . On the other hand, the differences between ROUGE 1 and ROUGE 2 in our method are of the order of 10 to 10 2 . When comparing our method with naive exhaustive searches, its search space is significantly smaller. The differences are of the order of 10 7 to 10 30 with ROUGE 1 and 10 4 to 10 17 with ROUGE 2 . These results demonstrate the efficiency of our branch and bound technique.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search Efficiency",
"sec_num": "6.2.4"
},
{
"text": "In addition, we show an example of the processing time for extracting one oracle summary and enumerating all of the oracle summaries for the reference summaries in the DUC-2004 corpus with a Linux machine (CPU: Intel R Xeon R X5675 (3.07GHz)) with 192 GB of RAM. We utilized CPLEX 12.1 to solve the ILP problem. Our algorithm was implemented in C++ and complied with GCC version 4.4.7. The results show that we needed 0.026 and 0.021 sec. to extract one oracle summary per reference summary and 0.047 and 0.031 sec. to extract one oracle summary per set of reference summaries for ROUGE 1 and ROUGE 2 , respectively. We needed 11.90 and 1.40 sec. to enumerate the oracle summaries per reference summary and 102.94 and 3.65 sec. per set of reference summaries for ROUGE 1 and ROUGE 1 ROUGE 2 Naive Proposed Naive Proposed 01 3.66\u00d710 13 5.75\u00d710 3 3.32\u00d710 7 1.00\u00d710 3 02 1.12\u00d710 12 4.64\u00d710 3 1.34\u00d710 7 8.87\u00d710 2 03 1.62\u00d710 11 3.65\u00d710 3 6.37\u00d710 6 8.19\u00d710 2 04 9.65\u00d710 10 4.47\u00d710 3 6.90\u00d710 6 9.83\u00d710 2 05 5.48\u00d710 36 2.32\u00d710 6 3.48\u00d710 21 7.03\u00d710 4 06 1.94\u00d710 32 1.97\u00d710 6 2.11\u00d710 20 5.08\u00d710 4 07 4.14\u00d710 28 1.40\u00d710 6 1.81\u00d710 19 2.60\u00d710 4 Table 7 : Median number of summaries checked by each search method ROUGE 2 , respectively. The extraction of one oracle summary for a reference summary can be achieved with the ILP solver in practical time and the enumeration of oracle summaries is also efficient. However, to enumerate oracle summaries, we needed several weeks for some topics in DUCs 2005 to 2007 since they hold a huge number of source sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 1132,
"end": 1139,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search Efficiency",
"sec_num": "6.2.4"
},
{
"text": "To analyze the limitations and the future direction of extractive summarization, this paper proposed (1) Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n scores and (2) an algorithm that enumerates all oracle summaries to exploit F-measures that evaluate the sentences extracted by systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The evaluation results obtained from the corpora of DUCs 2001 to 2007 identified the following: (1) room still exists to improve the ROUGE n scores of extractive summarization systems even though the ROUGE n scores of the oracle summaries fell below the theoretical upper bound ROUGE n =1. (2) Over 80% of the reference summaries and from 60% to 90% of the sets of reference summaries have multiple oracle summaries, and the F-measures computed by utilizing the enumerated oracle summaries showed stronger correlation with human judgment than those computed from single oracle summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://www.cis.upenn.edu/\u02dcnlp/ corpora/sumrepo.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank three anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fast and robust compressive summarization with dual decomposition and multi-task learning",
"authors": [
{
"first": "Miguel",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "196--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel B. Almeida and Andr\u00e9 F.T. Martins. 2013. Fast and robust compressive summarization with dual de- composition and multi-task learning. In Proc. of the 51st Annual Meeting of the Association for Compu- tational Linguistics, pages 196-206.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-document abstractive summarization using ILP based multi-sentence compression",
"authors": [
{
"first": "Soddhartha",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Kazunari",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015)",
"volume": "",
"issue": "",
"pages": "1208--1214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2015. Multi-document abstractive sum- marization using ILP based multi-sentence compres- sion. In Proc. of the 24th International Joint Confer- ence on Artificial Intelligence (IJCAI 2015), pages 1208-1214.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Abstractive multi-document summarization via phrase selection and merging",
"authors": [
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Piji",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1587--1597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca J. Passonneau. 2015. Abstractive multi-document summarization via phrase selection and merging. In Proc. of the 53rd Annual Meet- ing of the Association for Computational Linguis- tics, pages 1587-1597.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quantifying the limits and success of extractive summarization systems across domains",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Hakan Ceylan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Umut\u00f6zertem",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "903--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakan Ceylan, Rada Mihalcea, Umut\u00d6zertem, Elena Lloret, and Manuel Palomar. 2010. Quantifying the limits and success of extractive summarization sys- tems across domains. In Proc. of the Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 903-911.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Left-brain/rightbrain multi-document summarization",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Schlesinger",
"suffix": ""
},
{
"first": "Dianne",
"middle": [
"P"
],
"last": "O'leary",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the Document Understanding Conference (DUC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Conroy, Jade Goldstein, Judith D. Schlesinger, and Dianne P. O'Leary. 2004. Left-brain/right- brain multi-document summarization. In Proc. of the Document Understanding Conference (DUC).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Classy 2011 at TAC: Guided and multi-lingual summaries and evaluation metrics",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Schlesinger",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Kubina",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"A"
],
"last": "Rankel",
"suffix": ""
},
{
"first": "Dianne",
"middle": [
"P"
],
"last": "O'leary",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Conroy, Judith D. Schlesinger, Jeff Kubina, Peter A. Rankel, and Dianne P. O'Leary. 2011. Classy 2011 at TAC: Guided and multi-lingual sum- maries and evaluation metrics. In Proc. of the Text Analysis Conference (TAC).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multilingual summarization: Dimensionality reduction and a step towards optimal term coverage",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sashka",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Yi-Kai",
"middle": [],
"last": "Kubina",
"suffix": ""
},
{
"first": "Dianne",
"middle": [
"P"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "O'leary",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schlesinger",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the MultiLing 2013 Workshop on Multilingual Multi-document Summarization",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Conroy, Sashka T. Davis, Jeff Kubina, Yi-Kai Liu, Dianne P. O'Leary, and Judith D Schlesinger. 2013. Multilingual summarization: Dimensionality reduction and a step towards optimal term coverage. In Proc. of the MultiLing 2013 Workshop on Mul- tilingual Multi-document Summarization, pages 55- 63.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "OCCAMS -an optimal combinatorial covering algorithm for multi-document summarization",
"authors": [
{
"first": "T",
"middle": [],
"last": "Sashka",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Davie",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schlesinger",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the 12th IEEE International Conference on Data Mining Workshops, ICDM Workshops",
"volume": "",
"issue": "",
"pages": "454--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sashka T. Davie, John M. Conroy, and Judith D. Schlesinger. 2012. OCCAMS -an optimal com- binatorial covering algorithm for multi-document summarization. In Proc. of the 12th IEEE Inter- national Conference on Data Mining Workshops, ICDM Workshops, pages 454-463.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A scalable global model for summarization",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of the Workshop on Integer Linear Programming for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proc. of the Workshop on Integer Linear Programming for Natural Lan- guage Processing, pages 10-18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The ICSI/UTD summarization system at TAC",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
},
{
"first": "Berndt",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Shasha",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of the Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Benoit Favre, Dilek Hakkani-Tur, Berndt Bohnet, Yang Liu, and Shasha Xie. 2009. The ICSI/UTD summarization system at TAC 2009. In Proc. of the Text Analysis Conference (TAC).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extracting import sentences with support vector machines",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Eisaku",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 19th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "342--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsutomu Hirao, Hideki Isozaki, Eisaku Maeda, and Yuji Matsumoto. 2002. Extracting import sen- tences with support vector machines. In Proc. of the 19th International Conference on Computational Linguistics (COLING), pages 342-348.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving the estimation of word importance for news multidocument summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "712--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi- document summarization. In Proc. of the 14th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 712-721.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A repository of state of the art and competitive baseline summaries for generic news summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Conroy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "1608--1616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong, John Conroy, Benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repository of state of the art and competitive baseline sum- maries for generic news summarization. In Proc. of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1608- 1616.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "System combination for multi-document summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong, Mitchell Marcus, and Ani Nenkova. 2015. System combination for multi-document summa- rization. In Proc. of the 2015 Conference on Em- pirical Methods in Natural Language Processing (EMNLP), pages 107-117.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The budgeted maximum coverage problem. Information Processing Letters",
"authors": [
{
"first": "Samir",
"middle": [],
"last": "Khuller",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Moss",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Naor",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "70",
"issue": "",
"pages": "39--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samir Khuller, Anna Moss, and Joseph Naor. 1999. The budgeted maximum coverage problem. Infor- mation Processing Letters, 70(1):39-45.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning determinantal point process",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Tasker",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the 27th Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Kulesza and Ben Tasker. 2011. Learning deter- minantal point process. In Proc. of the 27th Confer- ence on Uncertainty in Artificial Intelligence.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A class of submodular functions for document summarization",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the 49th Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "510--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proc. of the 49th Association for Computational Linguistics: Human Language Technologies, pages 510-520.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning mixtures of submodular shells with application to document summarization",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the 28th Conference on Uncertainty in Artificial Intelligence (UAI2012)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Lin and Jeff Bilmes. 2012. Learning mixtures of submodular shells with application to document summarization. In Proc. of the 28th Conference on Uncertainty in Artificial Intelligence (UAI2012).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The potential and limitations of automatic sentence extraction for summarization",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the HLT-NAACL 03 Text Summarization Workshop",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. The potential and limitations of automatic sentence extraction for summarization. In Proc. of the HLT-NAACL 03 Text Summarization Workshop, pages 73-80.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Machine learning of generic and user-focused summarization",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bloedorn",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "820--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani and Eric Bloedorn. 1998. Machine learning of generic and user-focused summarization. In Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, pages 820-826.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On evaluation of automatically generated clinical discharge summaries",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Moen",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Heimonen",
"suffix": ""
},
{
"first": "Laura-Maria",
"middle": [],
"last": "Murtola",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Airola",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Pahikkala",
"suffix": ""
},
{
"first": "Virpi",
"middle": [],
"last": "Ter\u00e4v\u00e4",
"suffix": ""
},
{
"first": "Riitta",
"middle": [],
"last": "Danielsson-Ojala",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Sanna",
"middle": [],
"last": "Salanter\u00e4",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of the 2nd European Workshop on Practical Aspects of Health Informatics",
"volume": "",
"issue": "",
"pages": "101--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Moen, Juho Heimonen, Laura-Maria Murtola, Antti Airola, Tapio Pahikkala, Virpi Ter\u00e4v\u00e4, Riitta Danielsson-Ojala, Tapio Salakoski, and Sanna Salanter\u00e4. 2014. On evaluation of automatically generated clinical discharge summaries. In Proc. of the 2nd European Workshop on Practical Aspects of Health Informatics, pages 101-114.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using maximum entropy for sentence extraction",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Automatic Summarization",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miles Osborne. 2002. Using maximum entropy for sentence extraction. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 1-8.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An assessment of the accuracy of automatic evaluation in summarization",
"authors": [
{
"first": "Karolina",
"middle": [],
"last": "Owczarzak",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An assessment of the accuracy of automatic evaluation in summarization. In Proc. of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1-9, June.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Topical coherence for graph-based extractive summarization",
"authors": [
{
"first": "Daraksha",
"middle": [],
"last": "Parveen",
"suffix": ""
},
{
"first": "Hans-Martin",
"middle": [],
"last": "Ramsl",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1949--1954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daraksha Parveen, Hans-Martin Ramsl, and Michael Strube. 2015. Topical coherence for graph-based extractive summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1949-1954, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Optimizing an approximation of ROUGE - a problem-reduction approach to extractive multi-document summarization",
"authors": [
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Eckle-Kohler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1825--1836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxime Peyrard and Judith Eckle-Kohler. 2016. Optimizing an approximation of ROUGE - a problem-reduction approach to extractive multi-document summarization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1825-1836, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "Martin",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "14",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fast joint compression and summarization via graph cuts",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1492--1502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian and Yang Liu. 2013. Fast joint compression and summarization via graph cuts. In Proc. of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1492-1502.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Packing the meeting summarization knapsack",
"authors": [
{
"first": "Korbinian",
"middle": [],
"last": "Riedhammer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of the 9th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "2434--2437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Korbinian Riedhammer, Dan Gillick, Benoit Favre, and Dilek Hakkani-T\u00fcr. 2008. Packing the meeting summarization knapsack. In Proc. of the 9th Annual Conference of the International Speech Communication Association, pages 2434-2437.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Large-margin learning of submodular summarization models",
"authors": [
{
"first": "Ruben",
"middle": [],
"last": "Sipos",
"suffix": ""
},
{
"first": "Pannaga",
"middle": [],
"last": "Shivaswamy",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "224--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruben Sipos, Pannaga Shivaswamy, and Thorsten Joachims. 2012. Large-margin learning of submodular summarization models. In Proc. of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 224-233.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Text summarization model based on maximum coverage problem and its variant",
"authors": [
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "781--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroya Takamura and Manabu Okumura. 2009. Text summarization model based on maximum coverage problem and its variant. In Proc. of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 781-789.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Compressive document summarization via sparse optimization",
"authors": [
{
"first": "Jin-Ge",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015)",
"volume": "",
"issue": "",
"pages": "1376--1382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Compressive document summarization via sparse optimization. In Proc. of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), pages 1376-1382.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Extractive summarization by maximizing semantic volume",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1961--1966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Fei Liu, and Noah A. Smith. 2015. Extractive summarization by maximizing semantic volume. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1961-1966, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"content": "<table><tr><td>6.2 Results and Discussion</td></tr><tr><td>6.2.1</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Impact of Oracle ROUGE n scores",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>System</td><td colspan=\"2\">ROUGE 1 ROUGE 2</td></tr><tr><td colspan=\"2\">Oracle (multi) .427</td><td>.162</td></tr><tr><td colspan=\"2\">Oracle (single) .519</td><td>.228</td></tr><tr><td>CLASSY04</td><td>.305</td><td>.0897</td></tr><tr><td>CLASSY11</td><td>.286</td><td>.0919</td></tr><tr><td>Submodular</td><td>.300</td><td>.0933</td></tr><tr><td>DPP</td><td>.309</td><td>.0960</td></tr><tr><td>RegSum</td><td>.331</td><td>.0974</td></tr><tr><td>OCCAMS V</td><td>.300</td><td>.0974</td></tr><tr><td>ICSISumm</td><td>.310</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "ROUGE 1,2 scores of oracle summaries, greedy summaries, and system summaries for each data set",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>also shows the ROUGE 1,2 scores of the</td></tr><tr><td>summaries obtained from the greedy algorithm</td></tr><tr><td>(greedy summaries). Although there are statisti-</td></tr><tr><td>cally significant differences between the ROUGE</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF7": {
"content": "<table><tr><td/><td/><td>Median</td><td/><td/><td/><td>Rate</td><td/><td/></tr><tr><td/><td>single</td><td/><td>multi</td><td/><td>single</td><td/><td>multi</td><td/></tr><tr><td colspan=\"2\">ROUGE 1 01 8</td><td>9</td><td>4</td><td>5</td><td>.854</td><td>.787</td><td>.833</td><td>.733</td></tr><tr><td>02</td><td>7.5</td><td>5.5</td><td>4</td><td>4</td><td>.897</td><td>.836</td><td>.814</td><td>.780</td></tr><tr><td>03</td><td>8</td><td>10.5</td><td>3.5</td><td>4</td><td>.833</td><td>.858</td><td>.800</td><td>.900</td></tr><tr><td>04</td><td>8</td><td>8</td><td>3.5</td><td>3</td><td>.865</td><td>.865</td><td>.780</td><td>.760</td></tr><tr><td>05</td><td>35</td><td>35.5</td><td>2</td><td>3</td><td>.916</td><td>.907</td><td>.580</td><td>.660</td></tr><tr><td>06</td><td>28</td><td>22</td><td>2.5</td><td>3</td><td>.877</td><td>.880</td><td>.700</td><td>.720</td></tr><tr><td>07</td><td>23</td><td>16</td><td>4</td><td>2</td><td>.910</td><td>.878</td><td>.733</td><td>.711</td></tr></table>",
"type_str": "table",
"html": null,
"text": "ROUGE 2 ROUGE 1 ROUGE 2 ROUGE 1 ROUGE 2 ROUGE 1 ROUGE 2",
"num": null
}
}
}
}