{
"paper_id": "C16-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:00:29.888863Z"
},
"title": "Exploring Text Links for Coherent Multi-Document Summarization",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Labortories Kyoto",
"location": {
"postCode": "619-0237",
"country": "Japan"
}
},
"email": "wang.xun@lab.ntt.co.jp"
},
{
"first": "Masaaki",
"middle": [],
"last": "Nishino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Labortories Kyoto",
"location": {
"postCode": "619-0237",
"country": "Japan"
}
},
"email": "nishino.masaaki@lab.ntt.co.jp"
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Labortories Kyoto",
"location": {
"postCode": "619-0237",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Labortories Kyoto",
"location": {
"postCode": "619-0237",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Labortories Kyoto",
"location": {
"postCode": "619-0237",
"country": "Japan"
}
},
"email": "nagata.masaaki@lab.ntt.co.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.",
"pdf_parse": {
"paper_id": "C16-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic summarization is extremely useful in this age of information overload. It provides readers with easier access to information without the labour of reading the source text. According to the number of documents dealt with, summarization falls into two categories: single document summarization and multi-document summarization. While they both aim to represent the source text using a shorten passage, the latter deals with a set of documents sharing the same topic. Based on the method adopted, existing approaches to summarization can be divided into two kinds: abstraction based or extraction based. The difference lies in the sentences they use to generate summaries: the former selects sentences (clauses, or other text units, hereafter we refer to all of them as sentences.) from source documents and the latter generates new sentences. Most existing summarization systems are extraction-based because abstractionbased methods require the use of natural language generation technology, which is still a growing field. This paper, without exception, also employs extraction-based methods and we focus on multi-documents summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Currently the extraction-based methods face some major challenges. One is informativeness, which means we need to maintain the important information of source documents in summaries. This is the focus of almost all research on summarization. Another challenge is presentation, namely that the extracted text should be well presented, i.e., it should contain little redundancy and be coherent so as to be readily understandable. Previous work has addressed the problem of redundancy, and some successful solutions like Maximum Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) have been proposed and widely adopted (e.g., (Li and Li, 2013) ), but very few try to deal with coherence. Therefore the generated summaries generally suffer as regards readability and are very difficult to use for practical applications. In the report of the TAC 2011 summarization task (Owczarzak and Dang, 2011) , it is stated that \"in general, automatic summaries are better than baselines 1 , except Readability.\" Such a statement suggests, as for summarization, coherence should be treated with the same as salience and redundancy.",
"cite_spans": [
{
"start": 551,
"end": 582,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 628,
"end": 645,
"text": "(Li and Li, 2013)",
"ref_id": "BIBREF21"
},
{
"start": 871,
"end": 897,
"text": "(Owczarzak and Dang, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing work addresses coherence in summarization from different aspects. One kind of method employs reordering after selecting sentences, and the drawback is evident: coherence is considered after sentence selection. Another kind of widely adopted method takes discourse relations into consideration when selecting sentences, as discourse relations are believed to be essential for maintaining textual coherence. Hirao et al. (2013) formulated single document summarization as to extract a sub tree from the complete discourse tree and thus preserve the relations between extracted document units to form a readable text. Wang et al. (2015) extended it to multi-document summarization by regarding a document set as one document and developed a model which combined discourse parsing and summarization together. Christensen et al. (2013) proposed a graph-based model to bypass the tree constraints. They employed rich textual features to build a discourse relation graph for source documents with the aim of representing the relations between sentences (both inter and intra-document relations). Christensen et al. (2013) reported ROUGE scores lower than some baselines. This is because that, they claim, ROUGE is salience-focused and fails to notice the improvement in coherence. In a further human evaluation, they reported improvements in readability.",
"cite_spans": [
{
"start": 415,
"end": 434,
"text": "Hirao et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 624,
"end": 642,
"text": "Wang et al. (2015)",
"ref_id": "BIBREF39"
},
{
"start": 814,
"end": 839,
"text": "Christensen et al. (2013)",
"ref_id": "BIBREF6"
},
{
"start": 1098,
"end": 1123,
"text": "Christensen et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These discourse-based methods without exception have discourse analysis as a prerequisite. As we all know, discourse analysis is still under development thus preventing the expected improvement. Furthermore, languages other than English do not enjoy plenty of ready-to-use discourse analysis tools. This also limits the usage of these discourse-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Is it possible to consider coherence in summarization without discourse analysis? Before answering this question, we need to find out what is the key to coherence in text. According to the centering theory (Grosz et al., 1995; Walker et al., 1998) , the coherence of text is to a large extent maintained by entities and the relations between them. This indicates that discourse analysis is not a must to preserve coherence; we can directly take advantages of entities and their relations to generate coherence text.",
"cite_spans": [
{
"start": 206,
"end": 226,
"text": "(Grosz et al., 1995;",
"ref_id": "BIBREF15"
},
{
"start": 227,
"end": 247,
"text": "Walker et al., 1998)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on this point, we design a novel graph-based model for multi-document summarization that eliminates the effort of conducting discourse relation analysis (inter or intra document) and generates informative and readable summaries. We formulate the document set as a graph whose nodes corresponds to sentences. These nodes are connected with each other according to the entities they contains and the relations between their containing entities. Each path in the graph represents a piece of text and is evaluated using a novel scoring function that considers informativeness and coherence. To extract a summary is to find a path in the graph with the highest score. This is a weighted longest path problem. We further present a variant of the proposed model based on local coherence and explore decoding algorithms for both of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments are conducted on the Document Understanding Conference (DUC) 2004 multi-document summarization task data set. As ROUGE cannot fully capture our improvement in coherence which is one of the key contributions of this work, we also conduct a human evaluation. Results show that we obtain summaries comparative with state-of-the-art systems in terms of ROUGE metrics and get improvements in readability in human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work provides a method of generating high quality summaries without the effort of discourse analysis. The proposed method can be easily extended to other languages without much efforts. It also provides inspiration as regards other tasks that require computers to generate coherent text. The rest of the papers is organized as follows: Section 2 presents the centering theory and a coherence model based on entities. Section 3 presents our model. Section 4 describes the experiments and results. Section 5 presents some previous work and Section 6 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The centering theory (Grosz et al., 1995) as a popular theory on discourse analysis, serves as the basis of some coherence evaluation methods (Barzilay and Lapata, 2008; Burstein et al., 2010; Li and Jurafsky, 2016) and enables us to measure the coherence score of any given text without discourse parsing solely based on the reappearance of entities. Entities here refer to noun/pronoun word/phrases 2 .",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Grosz et al., 1995)",
"ref_id": "BIBREF15"
},
{
"start": 142,
"end": 169,
"text": "(Barzilay and Lapata, 2008;",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 192,
"text": "Burstein et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 193,
"end": 215,
"text": "Li and Jurafsky, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "According to the centering theory, we have the following assumptions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "1. Text that contains successive mentions of the same entities would be more coherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "2. The main entities that are focused on tend to play an important grammatical role, such as the subject or object of the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "Therefore the key to the coherence of a text lies in what entities it contains and how their roles change. The coherence of a generated text can be evaluated accordingly. Barzilay and Lapata (2008) presented such a model. The key is to represent text as an entity grid. Assume text T contains n sentences {S 1 , S 2 ..., S n } and m entities. r k i represents the grammatical role of Entity e k in Sentence S i . Four kinds of roles are used, i.e., \"subj\", \"obj\", \"others\" and \"absent\". \"Others\" indicates that the entity is present, but is neither the subject nor the object. Then the grammatical roles of e k in text T can be expressed as a sequence: {r k 1 , r k 2 , ..., r k n }. For each entity in T , such a chain showing how the entity's grammatical roles change in T is extracted. Thus text T can be represented as an n * m matrix M (T ) where n is the number of sentences and m is the number of entities in T , and M (T ) ij corresponds to the grammatical roles of Entity j in Sentence i. M (T ) is referred to as the Entity Grid of T (Barzilay and Lapata, 2008) .",
"cite_spans": [
{
"start": 171,
"end": 197,
"text": "Barzilay and Lapata (2008)",
"ref_id": "BIBREF0"
},
{
"start": 1044,
"end": 1071,
"text": "(Barzilay and Lapata, 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "To calculate the coherence score of T , Barzilay and Lapata (2008) used M (T ) as a feature vector. They calculated the transition probability for |{s(subj), o(bj), x(others), \u2212(absent)} 2 | = 16 transition patterns from M (T ) without distinguishing between entities, to form a vector f (T ) for T , and a weight vector w was then learnt from training data so that w * f (T ) can be used as the coherence score for T .",
"cite_spans": [
{
"start": 40,
"end": 66,
"text": "Barzilay and Lapata (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "This kind of method has been adopted in many studies (Filippova and Strube, 2007; Barzilay and Lapata, 2008; Burstein et al., 2010) . In particular, Filatova and Hatzivassiloglou (2004) extends entity grids to model semantical relations between entities, which provides a possible further improvement for our models.",
"cite_spans": [
{
"start": 53,
"end": 81,
"text": "(Filippova and Strube, 2007;",
"ref_id": "BIBREF10"
},
{
"start": 82,
"end": 108,
"text": "Barzilay and Lapata, 2008;",
"ref_id": "BIBREF0"
},
{
"start": 109,
"end": 131,
"text": "Burstein et al., 2010)",
"ref_id": "BIBREF4"
},
{
"start": 149,
"end": 185,
"text": "Filatova and Hatzivassiloglou (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory and Coherence Modelling",
"sec_num": "2"
},
{
"text": "The above model can only be used to measure coherence but summarization is much complex as it involves not only coherence bust also informativeness and redundancy. We design a much more sophisticated models leveraging entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Summarization",
"sec_num": "3"
},
{
"text": "Two models are presented below. Both of them are based on entities and consider coherence as well as informativeness. The first one is based on global coherence and the second one local coherence. The global coherence consider the full sequence when evaluating coherence and the local coherence is calculated based on relations between adjacent sentences. Intuitively, global coherence is better than local coherence, but considering the full sequence increases the time complexity. The model based on local coherence, on the other hand, reduces the time complexity and enables us to obtain an exact solution efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Summarization",
"sec_num": "3"
},
{
"text": "Assume we have K documents with n sentences in total. Note that we are dealing with multi-document summarization, and we do not distinguish between inter-document and intra-document relations. We construct a graph with n nodes, each of which corresponds to one sentence. Weighted directed edges are used to connect these nodes together. To each node, we assign a cost score, which is the number of words the corresponding sentence contains. To each path in the directed graph, we assign a gain score. The gain score is a comprehensive evaluation of the informativeness and coherence of the sequence of sentences represented by the path. The problem of extracting a good summary becomes the problem of extracting the best path. Note that it is an asymmetric graph. Gain scores for A \u2192 B \u2192 C and C \u2192 B \u2192 A are different. The direction determines the positions of corresponding sentences in the generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Set-up",
"sec_num": "3.1"
},
{
"text": "One more thing to consider is the redundancy. Instead of formulating redundancy explicitly, we remove edges connecting similar sentences to turn the complete graph into an incomplete graph. This ensures that similar sentences do not occupy adjacent positions in the generated summaries and thus reduce redundancy. The similarities of sentence pairs are based on word overlaps and we keep d% of all the edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Set-up",
"sec_num": "3.1"
},
{
"text": "Note that for temporal text removing edges can also help us maintain the temporal relations between sentences, though we do not explore this point here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Set-up",
"sec_num": "3.1"
},
{
"text": "To extract a summary is to find such a sequence of sentences Seq that maximizes Score(Seq).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": "Score(Seq) = m \u2211 k=1 a k F k F k = \u220f i p e k (r k i r k i+1 ), S i , S i+1 \u2208 Seq s.t. \u2211 S i \u2208Seq length(S i ) \u2264 threshold (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": "a k is the weight of Entity e k . r k i is the state of Entity e k in Sentence S i . Here we use four states: \"s\", \"o\", \"x\", \"-\", which represent \"subj\", \"obj\", \"present\" and \"absent\" respectively. It is also possible to use more or fewer states. p e k ( * * ) is the transition probability between two states for e k . For each document set, the transition probabilities for each entity is estimated using p e k (ab) = #e k (a)e k (b) n\u2212K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": ". #e k (a)e k (b) marks the times that Entity e k presents as grammatical role a in the preceding sentence and as grammatical role b in the following one. n \u2212 K denotes the total number of adjacent sentence pairs in a document set with K documents and n sentences. F k is the coherence score contributed by e k in the extracted sequence Seq. F k is based on the transitions of e k between adjacent sentences in Seq. We use Score(Seq) which considers salience, coherence and redundancy as an index as to how suitable the extracted sentence sequence Seq is as a summary. This model is a weighted longest path problem with a fixed length. This is an NP-hard problem. Due to the time cost, we adopt the simple randomized algorithm as shown in Algorithm 1 to obtain an approximated solution. Other decoding algorithms like greedy algorithms Algorithm 1 A randomized algorithm for the weighted longest path problem Initialization: Set U \u2190\u2212 all the sentences in the current doc set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": "Set S \u2190\u2212 EmptySet Queue Q \u2190\u2212 EmptySet repeat randomly select sentence s \u2208 U &s / \u2208 Q; if length(s) + \u2211 i length(s i ) <= threshold, s i \u2208 Q then push s to the rear of Q else push Q into S, Queue Q \u2190\u2212 EmptySet end if until 10K times return argmax Q F (T ), Q \u2208 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": "can also be employed. But none of them are capable of obtaining an exact solution. Below we present another model considering local coherence. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Global 3 Coherence",
"sec_num": "3.2"
},
{
"text": "The above model considers global coherence which is calculated according to the whole text. The model presented below is directly based on local coherence and enables us to obtain an exact solution. We want to maximize Score(Seq):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score(Seq) = \u2211 S i \u2208Seq (\u03b1 \u2211 e k \u2208S i a k + (1 \u2212 \u03b1)gain i,(i+1) ) s.t. \u2211 S i \u2208Seq length(S i ) \u2264 threshold",
"eq_num": "(2)"
}
],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "This formulation contains two parts. \u2211 e k \u2208S i a k implies the weight of Sentence S i , which is the sum of its containing entities' weights. gain i,(i+1) is the gain score for Edge(S i , S i+1 ). \u03b1 manipulates the impacts of the two parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "gain(S i , S i+1 ) = \u2211 e k \u2208S i \u222a S i+1 p e k (r k i r k i+1 )",
"eq_num": "(3)"
}
],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "As is stated, r k i is the state of Entity e k in Sentence S i . For the convenience of decoding, we turn the above model to an integer linear programming (ILP) problem. We add two dummy nodes, called Start and End Node. All paths start from Start and end with End. The costs of both Start and End are 0. The gains of edges connected with Start or End are 0. Note that although here we present a full connected graph for simplicity, in reality we deleted several edges to reduce redundancy. Following such a setting, an arbitrary path in the old graph (the one without dummy Start and End nodes) can be represented as a path from Start to End. We write the Start node as Node 0 and the End node as Node t. Then we formulate the problem of the weighted longest path as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "maximize\u03b1 \u2211 i ( \u2211 e k \u2208S i a k )x i + (1 \u2212 \u03b1) \u2211 i,j gain i,j y ij subject to \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1) \u2211 i cost i x i \u2264 threshold 2) \u2211 i y 0i = 1 3) \u2211 i y it = 1 4) \u2211 i y ij + y 0j \u2212 ( \u2211 i y ji + y jt ) = 0, \u2200j 5) \u2211 i y ij + y 0j \u2212 x j = 0, \u2200j 6)x i \u2208 {0, 1}, \u2200i 7)y ij \u2208 {0, 1}, \u2200i, j",
"eq_num": "(4)"
}
],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "Equations 2 and 3 are used to ensure we have only one start and one end node. Equation 4 ensures that the in degree equals the out degree for all nodes. Equation 5 ensures that the in degree is either 0 or 1 and equals x a for all nodes. x i = 1 indicates that S i is selected for the summary. x i = 0 means S i is not contained in the summary. y ij = 1 means S i and S j are selected and placed as adjacent sentences in the summary. cost i is the number of words in S i (length of S i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "We resolve this ILP problem using the dual simplex method provided by IBM CPLEX optimizer 4 which is a powerful optimization software package. CPLEX provides both a primal simplex method and a dual simplex method for ILP problems. Here we adopt the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Considering Local Coherence",
"sec_num": "3.3"
},
{
"text": "Experiments are conducted on the data set of the DUC2004 Summarization Task, which is a multidocument summarization task. 50 document clusters, each of which consists of 10 documents, are given. One summary is to be generated for each cluster. The target length is up to 100 words. Weights of entities are learnt by logistic regression as is adopted by Takamura and Okumura (2009) 5 . For entities that are not contained in DUC2003, we assign tf-based weights to them as Barzilay and Lapata (2008) did.",
"cite_spans": [
{
"start": 353,
"end": 380,
"text": "Takamura and Okumura (2009)",
"ref_id": "BIBREF35"
},
{
"start": 381,
"end": 382,
"text": "5",
"ref_id": null
},
{
"start": 471,
"end": 497,
"text": "Barzilay and Lapata (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "For the evaluation we firstly use the generally acknowledged metric for summarization: ROUGE metric. It essentially calculates n-gram overlaps between automatically generated summaries and human written (the gold standard) summaries. A high level of overlap indicates a high level of shared information between the two summaries. Among others, we focus on ROUGE-1 in the discussion of the result, because ROUGE-1 has proved to have a strong correlation with human annotation (Lin, 2004) .",
"cite_spans": [
{
"start": 475,
"end": 486,
"text": "(Lin, 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "Some necessary preprocessing includes stemming, removing stop-words and simple simplification. In previous work, there is usually no co-reference resolution and different words are regarded as different entities. Here we use Stanford CoreNLP toolkit (Manning et al., 2014) to deal with the co-reference problem. The Stanford CoreNLP toolkit contains a ready-to-use entity identification tool and a coreference resolution tool. The co-reference resolution is not a must, though preferred if reliable tools are available.",
"cite_spans": [
{
"start": 250,
"end": 272,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "After the co-reference resolution, different forms of the same entities are replaced by their unified forms. For each document set, we need to estimate the transition probabilities for each entity according to the documents contained in the cluster as stated above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "Parameters are tuned using the DUC2003 dataset. d is the threshold of redundancy. We keep d percent of all edges and d varies from 10 to 100 with an interval of 10. We tune the parameter using the randomized algorithm and evaluate the results using ROUGE-1 Recall. In the following experiments, we set d = 80, which means we keep 80% of the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "As for the model presented in Section 3.3, we need to tune \u03b1. Using the same data, we try \u03b1 from 0 to 1 with an interval of 0.1 and eventually choose \u03b1 = 0.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "We compare our models with state-of-the-art multi-document summarization systems using ROUGE and human evaluation. The former aims to evaluate informativeness and the latter targets readability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
{
"text": "ROUGE Evaluation MCKP is the maximum coverage methods proposed by Takamura and Okumura (2009) . Lin is a model that uses a class of submodular functions (Lin and Bilmes, 2011) . Christ is a graph based model proposed by Christensen et al. (2013) . DPP is the determinantal point processes model Borodin (2009) and ICSI is another model based on maximum coverage Gillick et al. (2008) . The results of DPP and ICSI comes from the repository presented in Hong et al. (2014) . M1 is our model described in Section 3.1. M2 is the model described in Section 3.3, which is resolved using an ILP method. MEAD Radev et al. (2004a) is a baseline that employs ranking algorithms to generate multi-document summaries.",
"cite_spans": [
{
"start": 66,
"end": 93,
"text": "Takamura and Okumura (2009)",
"ref_id": "BIBREF35"
},
{
"start": 153,
"end": 175,
"text": "(Lin and Bilmes, 2011)",
"ref_id": "BIBREF23"
},
{
"start": 220,
"end": 245,
"text": "Christensen et al. (2013)",
"ref_id": "BIBREF6"
},
{
"start": 295,
"end": 309,
"text": "Borodin (2009)",
"ref_id": null
},
{
"start": 362,
"end": 383,
"text": "Gillick et al. (2008)",
"ref_id": "BIBREF13"
},
{
"start": 453,
"end": 471,
"text": "Hong et al. (2014)",
"ref_id": "BIBREF17"
},
{
"start": 597,
"end": 622,
"text": "MEAD Radev et al. (2004a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
{
"text": "The results are shown in Table 1 . As we can see, our system (M1 and M2) produces comparable results to the state-of-the-art systems. With the MCKP method, all content words are used as concepts. But in our systems, only nouns and pronouns are regarded as entities. There are fewer nouns and pronouns than content words. This has a negative impact on the evaluation of information coverage. But according to the experiment results, our approach still obtain satisfying results based on these entities. It proves that even with much simpler feature settings of just nouns and pronouns, the proposed model generates summaries with good coverage of the important information in source documents. We have addressed that ROUGE is merely an index of informativeness and cannot evaluate our improvements in readability as has been proved by Christ, another coherence-focused model (Christensen et al., 2013) . So we also conduct a human evaluation.",
"cite_spans": [
{
"start": 874,
"end": 900,
"text": "(Christensen et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
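As a rough illustration of the automatic metric used in this evaluation, a minimal ROUGE-1 recall computation might look as follows. This is a simplified sketch only, without the stemming, stopword, and bootstrap options of the official ROUGE toolkit (Lin, 2004):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered by the
    candidate summary, with clipped counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

score = rouge1_recall("the cat sat on the mat", "the cat was on the mat")
```

Here five of the six reference unigrams are covered, so the recall is 5/6.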
{
"text": "Human Evaluation As some of the systems mentioned in Table 1 are not accessible, in this work we compare summaries produced by three representative systems: M2 (the best proposed system as evaluated by ROUGE), MCKP (one of the state-of-the-art salience-focused methods), and humans (the gold standard).",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
{
"text": "We asked four professional annotators (who are not the authors of this paper, have rich experience in annotating various NLP tasks, and are fluent in English) to assign a score to each summary regarding its readability. We randomly selected 48 summaries (16+16+16) from the three systems and asked the annotators to assign a readability score to each summary without reading the source documents (a summary should be readable on its own, since the point of summarization is that readers need not consult the sources). The score is an integer between 1 (very poor) and 5 (very good).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
{
"text": "The average scores for the 3 systems are Human = 4.3; M2 = 3.5; MCKP = 3.1. Significance testing (significance level \u03b1 = 0.05) shows that the summaries generated by the proposed method improve readability compared with previous salience-focused work. In our model, we assume that the states of entities can be formulated as Markov chains. Although more sophisticated models could be employed, this assumption simplifies the model and proves useful. We can also use more or fewer grammatical roles for entities: when we tried using just two roles, presence and absence, the performance we obtained was unsatisfying.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation & Discussion",
"sec_num": "4.2"
},
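The Markov-chain view of entity states described above can be sketched in the style of the entity grid of Barzilay and Lapata (2008). The toy grid, the role labels (S/O/X/-), and the maximum-likelihood estimator below are illustrative assumptions, not the paper's exact model:

```python
from collections import Counter

# Toy entity grid: one row per entity, one column per sentence.
# Assumed role labels: "S" subject, "O" object, "-" absent.
grid = {
    "microsoft": ["S", "S", "-", "O"],
    "suit":      ["O", "-", "S", "-"],
}

def transition_probs(grid):
    """MLE of first-order role-transition probabilities,
    pooled over all entity rows of the grid."""
    counts = Counter()
    for roles in grid.values():
        for a, b in zip(roles, roles[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

probs = transition_probs(grid)
```

A coherent text tends to concentrate probability mass on transitions that keep salient entities in prominent roles (e.g., S followed by S).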
{
"text": "A summary is much shorter than the original documents but still needs to provide readers with sufficient information. Hence summarization systems need to identify important information and keep as much of it as possible. Most existing research follows this guideline and takes salience as its sole focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Salience-focused systems cannot guarantee the readability of the generated text because they fail to take coherence into consideration. Sentence reordering has begun to develop as a post-processing task. However, it cannot make up for the flaws of salience-focused systems, because it is simply a reorganization of sentences. It also faces problems when dealing with temporal text (Yan et al., 2011; Ge et al., 2015) . A better solution is to consider coherence when selecting sentences. Such comprehensive models have been proposed, but most of them are discourse driven and sacrifice informativeness for coherence. In this sense, our model is novel in dealing with coherence without discourse analysis.",
"cite_spans": [
{
"start": 388,
"end": 406,
"text": "(Yan et al., 2011;",
"ref_id": "BIBREF40"
},
{
"start": 407,
"end": 423,
"text": "Ge et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "As stated, summarization systems need to identify the important information and keep as much of it in the generated summaries as possible. One straightforward method is Maximum Marginal Relevance (Carbonell and Goldstein, 1998) (MMR) . MMR is a greedy method that selects the sentences that are most relevant but not too similar to those already selected, trying to keep a balance between relevance and redundancy; it is also widely employed to avoid redundancy in summarization systems. Among existing research, one popular family is the ranking methods (e.g., Textrank (Mihalcea and Tarau, 2004) , Lexrank (Erkan and Radev, 2004) and its variants (Wan et al., 2007; Wang et al., 2012) ), which construct a graph over text units and use ranking algorithms to select the top sentences to build summaries. Another family is the optimization methods, to which our work belongs. They formulate summarization as finding a subset of sentences that optimizes certain objective functions without violating certain constraints. Finding such an optimal subset is a combinatorial optimization problem, which is NP-hard and hence intractable in general (McDonald, 2007) .",
"cite_spans": [
{
"start": 200,
"end": 237,
"text": "(Carbonell and Goldstein, 1998) (MMR)",
"ref_id": null
},
{
"start": 587,
"end": 613,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF28"
},
{
"start": 624,
"end": 647,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 665,
"end": 683,
"text": "(Wan et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 684,
"end": 702,
"text": "Wang et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 1163,
"end": 1179,
"text": "(McDonald, 2007)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salience-Focused Method",
"sec_num": "5.1"
},
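The MMR criterion described above can be sketched in a few lines. The relevance scores, pairwise similarities, and the lambda value in this example are illustrative assumptions, not values from the cited paper:

```python
def mmr_select(candidates, query_sim, pair_sim, k, lam=0.7):
    """Greedy MMR: repeatedly pick the item that best balances query
    relevance against redundancy with the already selected items."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            redundancy = max((pair_sim(c, s) for s in selected), default=0.0)
            return lam * query_sim(c) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: s3 is similar to s1, so MMR prefers s2 as the second pick.
relevance = {"s1": 0.9, "s2": 0.85, "s3": 0.2}
similarity = {frozenset(p): v for p, v in
              {("s1", "s2"): 0.1, ("s1", "s3"): 0.9, ("s2", "s3"): 0.2}.items()}
picked = mmr_select(list(relevance), relevance.get,
                    lambda a, b: similarity[frozenset((a, b))], k=2)
```

With lam = 1.0 this degenerates to pure relevance ranking; lowering lam penalizes redundancy more heavily.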
{
"text": "Recently, maximum coverage methods have been proposed and yield good results (Takamura and Okumura, 2009) . Maximum coverage methods formulate summarization as a maximum coverage problem with a knapsack constraint (MCKP). In MCKP methods, the meaning of a sentence is believed to be made up of concepts, which usually refer to content words, and summarization involves extracting a subset of sentences that covers as many important concepts as possible without violating the length constraint. It is usually formulated as an integer linear program, and algorithms have been proposed for obtaining approximate solutions (Takamura and Okumura, 2009) . Lin and Bilmes (2011) design a class of submodular functions for document summarization. The functions they use combine two parts: one encourages the summary to be representative of the corpus, and the other rewards diversity. Other methods that have been applied to summarization include centroid-based methods (Radev et al., 2004b; Saggion and Gaizauskas, 2004) and minimum dominating set methods (Shen and Li, 2010) . All these methods suffer in terms of coherence.",
"cite_spans": [
{
"start": 77,
"end": 104,
"text": "Takamura and Okumura, 2009)",
"ref_id": "BIBREF35"
},
{
"start": 592,
"end": 620,
"text": "(Takamura and Okumura, 2009;",
"ref_id": "BIBREF35"
},
{
"start": 935,
"end": 956,
"text": "(Radev et al., 2004b;",
"ref_id": "BIBREF32"
},
{
"start": 957,
"end": 986,
"text": "Saggion and Gaizauskas, 2004)",
"ref_id": "BIBREF33"
},
{
"start": 1024,
"end": 1043,
"text": "(Shen and Li, 2010)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salience-Focused Method",
"sec_num": "5.1"
},
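The concept-coverage formulation above is often approximated greedily rather than solved exactly as an ILP. The sketch below is a minimal, hypothetical illustration of budgeted maximum coverage; the toy sentences, concept weights, and budget are assumptions for the example only:

```python
def greedy_max_coverage(sentences, weights, budget):
    """Greedy approximation to budgeted maximum coverage: repeatedly add
    the sentence with the best ratio of newly covered concept weight to
    its length. sentences: list of (length, set_of_concepts)."""
    covered, summary, used = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for i, (length, concepts) in enumerate(sentences):
            if i in summary or used + length > budget:
                continue
            gain = sum(weights[c] for c in concepts - covered) / length
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        summary.append(best)
        used += sentences[best][0]
        covered |= sentences[best][1]
    return summary

weights = {"a": 3.0, "b": 2.0, "c": 2.0, "d": 1.0}
sentences = [(5, {"a", "b"}), (4, {"b", "c"}), (3, {"d"})]
chosen = greedy_max_coverage(sentences, weights, budget=8)
```

Because each concept's weight counts only once, adding a sentence whose concepts are already covered yields no gain, which is what discourages redundancy in this family of methods.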
{
"text": "Sentence reordering methods were developed to remedy salience-focused models. Sentence reordering tries to generate a more coherent text by reordering its contents. Rich semantic and syntactic features are used to find a better permutation of the input sentences (Barzilay et al., 2001; Bollegala et al., 2010; Okazaki et al., 2004) .",
"cite_spans": [
{
"start": 264,
"end": 287,
"text": "(Barzilay et al., 2001;",
"ref_id": "BIBREF1"
},
{
"start": 288,
"end": 311,
"text": "Bollegala et al., 2010;",
"ref_id": "BIBREF2"
},
{
"start": 312,
"end": 333,
"text": "Okazaki et al., 2004)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-Focused Method",
"sec_num": "5.2"
},
{
"text": "The drawback of sentence reordering is obvious: the preceding sentence selection focuses solely on informativeness and totally neglects coherence, which prevents the improvements expected from permutation. This is confirmed by the fact that the above methods all report limited improvements. Considering coherence during sentence selection leads to new methods, which are mainly discourse-driven models. Some summarization methods encode discourse analysis results in feature representations together with other frequency-based features for sentence selection/compression. The problem is that these discourse-based features usually play secondary roles, because the models all try to improve information coverage, which is evaluated by ROUGE. And ROUGE, as is commonly known, is not sensitive to coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-Focused Method",
"sec_num": "5.2"
},
{
"text": "Others work directly on discourse analysis results, usually trying to derive a passage from a given parse tree. The problem of summarization is regarded as finding a text T such that T = arg max F (T |T r) for a given tree T r, where F is the objective function. Early representative work of this kind includes that of Marcu (1998) and that of Daum\u00e9 III and Marcu (2002) . Recently, Hirao et al. (2013) viewed summarization as a knapsack problem on trees and formulated it as an integer linear program (ILP). A subtree that maximizes some objective function and obeys some given constraints is extracted from the original parse tree as the summary.",
"cite_spans": [
{
"start": 327,
"end": 339,
"text": "Marcu (1998)",
"ref_id": "BIBREF26"
},
{
"start": 352,
"end": 378,
"text": "Daum\u00e9 III and Marcu (2002)",
"ref_id": "BIBREF7"
},
{
"start": 391,
"end": 410,
"text": "Hirao et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-Focused Method",
"sec_num": "5.2"
},
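The tree-knapsack view of Hirao et al. (2013) can be illustrated with a small dynamic program instead of a full ILP solver. The tree shape, node lengths, node scores, and budget below are toy assumptions, and the sketch returns only the best attainable score per budget, not the extracted subtree itself:

```python
def best_rooted_subtree(children, lengths, scores, root, budget):
    """DP sketch of the tree-knapsack formulation: choose a connected
    subtree containing `root` maximizing total score within `budget`.
    Returns a table t where t[b] is the best score using length <= b
    (float('-inf') means infeasible). Assumes positive node scores."""
    NEG = float("-inf")

    def solve(node):
        # Base case: any subtree at `node` must include `node` itself.
        base = [NEG] * (budget + 1)
        for b in range(lengths[node], budget + 1):
            base[b] = scores[node]
        # Knapsack-merge each child: optionally attach its best subtree.
        for child in children.get(node, []):
            sub = solve(child)
            merged = base[:]
            for b in range(budget + 1):
                if base[b] == NEG:
                    continue
                for cb in range(budget - b + 1):
                    if sub[cb] > 0 and base[b] + sub[cb] > merged[b + cb]:
                        merged[b + cb] = base[b] + sub[cb]
            base = merged
        return base

    return solve(root)

# Toy tree: root 0 with children 1 and 2.
children = {0: [1, 2]}
lengths = {0: 2, 1: 3, 2: 2}
scores = {0: 5.0, 1: 4.0, 2: 3.0}
table = best_rooted_subtree(children, lengths, scores, root=0, budget=6)
```

Within budget 6 the best subtree keeps the root plus child 1 (length 5, score 9.0); attaching both children would exceed the budget.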
{
"text": "Discourse tree based methods cannot be extended to multi-document summarization. Christensen et al. (2013) propose a graph model that bypasses the tree constraints: they build a graph representing discourse relations between sentences and then extract summaries accordingly.",
"cite_spans": [
{
"start": 81,
"end": 106,
"text": "Christensen et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-Focused Method",
"sec_num": "5.2"
},
{
"text": "Recently, neural network based discourse analysis (Ji and Eisenstein, 2014) has provided an alternative way of conducting discourse analysis without traditional feature engineering. It can be used in our future work on modelling coherence using semantic relations.",
"cite_spans": [
{
"start": 53,
"end": 77,
"text": "Ji and Eisenstein, 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-Focused Method",
"sec_num": "5.2"
},
{
"text": "Previous summarization methods have usually focused on salience and neglected coherence. This work proposed a novel summarization system that combines coherence with salience. By taking entities and the links between them into consideration, our weighted longest path model successfully improves the quality of summaries. The proposed model does not require discourse analysis and hence can be applied to languages that lack ready-to-use discourse analysis tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, only syntactic linkages are used to model coherence. In the future, we can take advantage of the semantic relations between entities to evaluate coherence and further improve our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ 1 The baseline they used is the lead paragraph method, and summaries are evaluated by humans and by ROUGE (Recall-Oriented Understudy for Gisting Evaluation (Lin, 2004)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In some previous work on summarization (Takamura and Okumura, 2009; Hirao et al., 2013), concepts are used to measure informativeness. Concepts can refer to any non-functional words, including adjectives and adverbs. All entities can be regarded as concepts, but some concept words (non-nominal words) are not entities; Entity is thus a subset of Concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"Global\" means the model considers coherence according to the whole text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www-03.ibm.com/software/products/en/ibmilogcpleoptistud/ 5 This method was first proposed by Yih et al. (2007) and then improved by Takamura and Okumura (2009). Here we follow the same steps as Takamura and Okumura (2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modeling local coherence: An entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sentence ordering in multidocument summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 1st International Conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay, Noemie Elhadad, and Kathleen R McKeown. 2001. Sentence ordering in multidocument summarization. In Proceedings of the 1st International Conference on Human Language Technology Research, pages 1-7. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A bottom-up approach to sentence ordering for multi-document summarization",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Information Processing & Management",
"volume": "46",
"issue": "1",
"pages": "89--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Naoaki Okazaki, and Mitsuru Ishizuka. 2010. A bottom-up approach to sentence ordering for multi-document summarization. Information Processing & Management, 46(1):89-109.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using entity-based features to model coherence in student essays",
"authors": [
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Slava",
"middle": [],
"last": "Andreyev",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics ? Human Language Technologies",
"volume": "",
"issue": "",
"pages": "681--684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies, pages 681-684. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "335--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335-336. Association for Computing Machinery.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards coherent multi-document summarization",
"authors": [
{
"first": "Janara",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"Soderland"
],
"last": "Mausam",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1163--1173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janara Christensen, Stephen Soderland Mausam, and Oren Etzioni. 2013. Towards coherent multi-document summarization. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies, pages 1163-1173.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A noisy-channel model for document compression",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "449--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of the 40th Annual Meeting of Computational Linguistics, pages 449-456. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "22",
"issue": "1",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22(1):457-479.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Event-based extractive summarization",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Filatova",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of Computational Linguistics Workshop on Summarization",
"volume": "111",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-based extractive summarization. In Proceedings of the 42nd Annual Meeting of Computational Linguistics Workshop on Summarization, volume 111.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extending the entity-grid coherence model to semantically related entities",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "139--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova and Michael Strube. 2007. Extending the entity-grid coherence model to semantically related entities. In Proceedings of the 11th European Workshop on Natural Language Generation, pages 139-142. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bring you to the past: Automatic generation of topically relevant event chronicles",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Wenzhe",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL2015)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Ge, Wenzhe Pei, Heng Ji, Sujian Li, Baobao Chang, and Zhifang Sui. 2015. Bring you to the past: Automatic generation of topically relevant event chronicles. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL2015).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A scalable global model for summarization",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 10-18. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The icsi summarization system at tac",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Text Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Benoit Favre, and Dilek Hakkani-Tur. 2008. The icsi summarization system at tac 2008. In Proceed- ings of the Text Understanding Conference.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A global optimization framework for meeting summarization",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Korbinian",
"middle": [],
"last": "Riedhammer",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2009,
"venue": "Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4769--4772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A global optimization framework for meeting summarization. In Acoustics, Speech and Signal Processing, 2009. IEEE International Conference on, pages 4769-4772. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Centering: A framework for modeling the local coherence of discourse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barbara",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "Aravind K",
"middle": [],
"last": "Weinstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational linguistics",
"volume": "21",
"issue": "2",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational linguistics, 21(2):203-225.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Singledocument summarization as a tree knapsack problem",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nishino",
"suffix": ""
},
{
"first": "Norihito",
"middle": [],
"last": "Yasuda",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1515--1520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515-1520. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A repository of state of the art and competitive baseline summaries for generic news summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Conroy",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "1608--1616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong, John M Conroy, Benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repository of state of the art and competitive baseline summaries for generic news summarization. In LREC, pages 1608-1616.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representation learning for text-level discourse parsing",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of Computational Linguistics, pages 13-24.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A model of coherence based on distributed sentence representation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2039--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Eduard H Hovy. 2014. A model of coherence based on distributed sentence representation. In EMNLP, pages 2039-2048.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural net models for open-domain discourse coherence",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01545"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2016. Neural net models for open-domain discourse coherence. arXiv preprint arXiv:1606.01545.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Evolutionary hierarchical dirichlet process for timeline summarization",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "556--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Sujian Li. 2013. Evolutionary hierarchical dirichlet process for timeline summarization. In ACL (2), pages 556-560. Citeseer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recursive deep models for discourse parsing",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rumeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2061--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Rumeng Li, and Eduard H Hovy. 2014. Recursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 2061-2069.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A class of submodular functions for document summarization",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics -Human Language Technologies",
"volume": "",
"issue": "",
"pages": "510--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics -Human Language Technologies, pages 510-520. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL'04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL'04 Workshop, pages 74-81.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving summarization through rhetorical parsing tuning",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 1998,
"venue": "The 6th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "206--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 1998. Improving summarization through rhetorical parsing tuning. In The 6th Workshop on Very Large Corpora, pages 206-215.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A study of global inference algorithms in multi-document summarization",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Textrank: Bringing order into texts",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, volume 4, page 275. Barcelona, Spain.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving chronological sentence ordering by precedence relation",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki, Yutaka Matsuo, and Mitsuru Ishizuka. 2004. Improving chronological sentence ordering by precedence relation. In Proceedings of the 20th International Conference on Computational Linguistics, page 750. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Overview of the TAC 2011 summarization track: Guided task and AESOP task",
"authors": [
{
"first": "Karolina",
"middle": [],
"last": "Owczarzak",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karolina Owczarzak and Hoa Trang Dang. 2011. Overview of the tac 2011 summarization track: Guided task and aesop task. In Proceedings of the 2011 Text Analysis Conference.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "MEAD - a platform for multidocument multilingual text summarization",
"authors": [
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Sasha",
"middle": [],
"last": "Blair-Goldensohn",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Arda",
"middle": [],
"last": "Celebi",
"suffix": ""
},
{
"first": "Stanko",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "Elliott",
"middle": [],
"last": "Drabek",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Hakim",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Danyu",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 4th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir Radev, Timothy Allison, Sasha Blair-Goldensohn, John Blitzer, Arda Celebi, Stanko Dimitrov, Elliott Drabek, Ali Hakim, Wai Lam, Danyu Liu, et al. 2004a. Mead-a platform for multidocument multilingual text summarization. In Proceedings of the 4th Language Resources and Evaluation Conference. Language Resources and Evaluation Conference.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Centroid-based summarization of multiple documents",
"authors": [
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Ma\u0142gorzata",
"middle": [],
"last": "Sty\u015b",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Tam",
"suffix": ""
}
],
"year": 2004,
"venue": "Information Processing & Management",
"volume": "40",
"issue": "6",
"pages": "919--938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir R Radev, Hongyan Jing, Ma\u0142gorzata Sty\u015b, and Daniel Tam. 2004b. Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919-938.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multi-document summarization by cluster/profile relevance and redundancy removal",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Document Understanding Conference",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion and Robert Gaizauskas. 2004. Multi-document summarization by cluster/profile relevance and redundancy removal. In Proceedings of the Document Understanding Conference, pages 6-7.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multi-document summarization via the minimum dominating set",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "984--992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Shen and Tao Li. 2010. Multi-document summarization via the minimum dominating set. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 984-992. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Text summarization model based on maximum coverage problem and its variant",
"authors": [
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "781--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroya Takamura and Manabu Okumura. 2009. Text summarization model based on maximum coverage problem and its variant. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 781-789. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Centering theory in discourse",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"Krishna"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ellen",
"middle": [
"Friedman"
],
"last": "Prince",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A Walker, Aravind Krishna Joshi, and Ellen Friedman Prince. 1998. Centering theory in discourse. Oxford University Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianwu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Meeting-Association for Computational Linguistics",
"volume": "45",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In Annual Meeting - Association for Computational Linguistics, volume 45, page 552.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring simultaneous keyword and key sentence extraction: improve graph-based ranking using wikipedia",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2619--2622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Wang, Lei Wang, Jiwei Li, and Sujian Li. 2012. Exploring simultaneous keyword and key sentence extraction: improve graph-based ranking using wikipedia. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 2619-2622. ACM.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Summarization based on task-oriented discourse parsing",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing",
"volume": "23",
"issue": "",
"pages": "1358--1367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Wang, Yasuhisa Yoshida, Tsutomu Hirao, Katsuhito Sudoh, and Masaaki Nagata. 2015. Summarization based on task-oriented discourse parsing. IEEE/ACM Transactions on Audio, Speech and Language Processing, 23:1358-1367.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Evolutionary timeline summarization: a balanced optimization framework via iterative substitution",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jahna",
"middle": [],
"last": "Otterbacher",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "745--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Xiaojun Wan, Jahna Otterbacher, Liang Kong, Xiaoming Li, and Yan Zhang. 2011. Evolutionary timeline summarization: a balanced optimization framework via iterative substitution. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 745-754. ACM.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Multi-document summarization by maximizing informative content-words",
"authors": [
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2007,
"venue": "IJCAI",
"volume": "7",
"issue": "",
"pages": "1776--1782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In IJCAI, volume 7, pages 1776-1782.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A Complete Graph with Dummy Start and End Nodes",
"type_str": "figure",
"num": null,
"uris": null
}
}
}
}