| { |
| "paper_id": "P11-1049", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:46:41.660075Z" |
| }, |
| "title": "Jointly Learning to Extract and Compress", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "dgillick@cs.berkeley.edu" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "klein@cs.berkeley.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a margin-based objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.", |
| "pdf_parse": { |
| "paper_id": "P11-1049", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a margin-based objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Applications of machine learning to automatic summarization have met with limited success, and, as a result, many top-performing systems remain largely ad-hoc. One reason learning may have provided limited gains is that typical models do not learn to optimize end summary quality directly, but rather learn intermediate quantities in isolation. For example, many models learn to score each input sentence independently (Teufel and Moens, 1997; Shen et al., 2007; Schilder and Kondadadi, 2008) , and then assemble extractive summaries from the top-ranked sentences in a way not incorporated into the learning process. This extraction is often done in the presence of a heuristic that limits redundancy. As another example, Yih et al. (2007) learn predictors of individual words' appearance in the references, but in isolation from the sentence selection procedure. Exceptions are Li et al. (2009) who take a max-margin approach to learning sentence values jointly, but still have ad hoc constraints to handle redundancy. One main contribution of the current paper is the direct optimization of summary quality in a single model; we find that our learned systems substantially outperform unlearned counterparts on both automatic and manual metrics.", |
| "cite_spans": [ |
| { |
| "start": 419, |
| "end": 443, |
| "text": "(Teufel and Moens, 1997;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 444, |
| "end": 462, |
| "text": "Shen et al., 2007;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 463, |
| "end": 492, |
| "text": "Schilder and Kondadadi, 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 722, |
| "end": 739, |
| "text": "Yih et al. (2007)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 879, |
| "end": 895, |
| "text": "Li et al. (2009)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While pure extraction is certainly simple and does guarantee some minimal readability, Lin (2003) showed that sentence compression (Knight and Marcu, 2001; McDonald, 2006; Clarke and Lapata, 2008) has the potential to improve the resulting summaries. However, attempts to incorporate compression into a summarization system have largely failed to realize large gains. For example, Zajic et al (2006) use a pipeline approach, pre-processing to yield additional candidates for extraction by applying heuristic sentence compressions, but their system does not outperform state-of-the-art purely extractive systems. Similarly, Gillick and Favre (2009) , though not learning weights, do a limited form of compression jointly with extraction. They report a marginal increase in the automatic word-overlap metric ROUGE (Lin, 2004) , but a decline in manual Pyramid (Nenkova and Passonneau, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 97, |
| "text": "Lin (2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 131, |
| "end": 155, |
| "text": "(Knight and Marcu, 2001;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 156, |
| "end": 171, |
| "text": "McDonald, 2006;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 172, |
| "end": 196, |
| "text": "Clarke and Lapata, 2008)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 381, |
| "end": 399, |
| "text": "Zajic et al (2006)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 623, |
| "end": 647, |
| "text": "Gillick and Favre (2009)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 812, |
| "end": 823, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 858, |
| "end": 888, |
| "text": "(Nenkova and Passonneau, 2004)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A second contribution of the current work is to show a system for jointly learning to jointly compress and extract that exhibits gains in both ROUGE and content metrics over purely extractive systems. Both Martins and Smith (2009) and Woodsend and Lapata (2010) build models that jointly extract and compress, but learn scores for sentences (or phrases) using independent classifiers. Daum\u00e9 III (2006) learns parameters for compression and extraction jointly using an approximate training procedure, but his results are not competitive with state-of-the-art extractive systems, and he does not report improvements on manual content or quality metrics.", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 230, |
| "text": "Martins and Smith (2009)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 235, |
| "end": 261, |
| "text": "Woodsend and Lapata (2010)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 385, |
| "end": 401, |
| "text": "Daum\u00e9 III (2006)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In our approach, we define a linear model that scores candidate summaries according to features that factor over the n-gram types that appear in the summary and the structural compressions used to create the sentences in the summary. We train these parameters jointly using a margin-based objective whose loss captures end summary quality through the ROUGE metric. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. To make joint learning possible, we introduce a new, manually-annotated data set of extracted, compressed sentences. Inference in our model can be cast as an integer linear program (ILP) and solved in reasonable time using a generic ILP solver; we also introduce a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published comparable results (ROUGE) to date on our test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We focus on the task of multi-document summarization. The input is a collection of documents, each consisting of multiple sentences. The output is a summary of length no greater than L max . Let x be the input document set, and let y be a representation of a summary as a vector. For an extractive summary, y is a vector of indicators y = (y s : s \u2208 x), one indicator y s for each sentence s in x. A sentence s is present in the summary if and only if its indicator y s = 1 (see Figure 1a ). Let Y (x) be the set of valid summaries of document set x with length no greater than L max .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 479, |
| "end": 488, |
| "text": "Figure 1a", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "While past extractive methods have assigned value to individual sentences and then explicitly represented the notion of redundancy (Carbonell and Goldstein, 1998) , recent methods show greater success by using a simpler notion of coverage: bigrams contribute content, and redundancy is implicitly encoded in the fact that redundant sentences cover fewer bigrams (Nenkova and Vanderwende, 2005; Gillick and Favre, 2009) . This latter approach is associated with the following objective function:", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 162, |
| "text": "(Carbonell and Goldstein, 1998)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 362, |
| "end": 393, |
| "text": "(Nenkova and Vanderwende, 2005;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 394, |
| "end": 418, |
| "text": "Gillick and Favre, 2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\max_{y \\in Y(x)} \\sum_{b \\in B(y)} v_b", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Here, v b is the value of bigram b, and B(y) is the set of bigrams present in the summary encoded by y. Gillick and Favre (2009) produced a state-of-the-art system by directly optimizing this objective. They let the value v b of each bigram be given by the number of input documents the bigram appears in. Our implementation of their system will serve as a baseline, referred to as EXTRACTIVE BASELINE. We extend Objective 1 so that it assigns value not just to the bigrams that appear in the summary, but also to the choices made in the creation of the summary. In our complete model, which jointly extracts and compresses sentences, we choose whether or not to cut individual subtrees in the constituency parses of each sentence. This is in contrast to the extractive case where choices are made on full sentences.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 128, |
| "text": "Gillick and Favre (2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
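The bigram-coverage objective can be made concrete with a small sketch. Below is a minimal, illustrative Python version (not the paper's implementation): v_b is a bigram's document frequency, and an extractive summary is scored by summing v_b over the bigram *types* it covers, so repeating a bigram contributes nothing; the brute-force search stands in for the ILP of Section 4.

```python
from itertools import combinations

def bigrams(sentence):
    """Bigram types in a tokenized sentence."""
    return {(sentence[i], sentence[i + 1]) for i in range(len(sentence) - 1)}

def bigram_values(docs):
    """v_b = number of input documents containing bigram b (document frequency)."""
    values = {}
    for doc in docs:
        for b in set().union(*(bigrams(s) for s in doc)):
            values[b] = values.get(b, 0) + 1
    return values

def best_extraction(sentences, values, max_len):
    """Brute-force max over length-feasible summaries of sum_{b in B(y)} v_b.
    Redundant sentences add few new bigram types, so redundancy is penalized
    implicitly, with no explicit redundancy term."""
    best, best_score = (), 0
    for r in range(1, len(sentences) + 1):
        for subset in combinations(range(len(sentences)), r):
            if sum(len(sentences[i]) for i in subset) > max_len:
                continue
            covered = set().union(*(bigrams(sentences[i]) for i in subset))
            score = sum(values.get(b, 0) for b in covered)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score
```

Brute force is exponential in the number of sentences, which is exactly why the paper turns to an ILP solver for realistic inputs.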
| { |
| "text": "\\max_{y \\in Y(x)} \\sum_{b \\in B(y)} v_b + \\sum_{c \\in C(y)} v_c (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "C(y) is the set of cut choices made in y, and v c assigns value to each. Next, we present details of our representation of compressive summaries. Assume a constituency parse t s for every sentence s. We represent a compressive summary as a vector y = (y n : n \u2208 t s , s \u2208 x) of indicators, one for each non-terminal node in each parse tree of the sentences in the document set x. A word is present in the output summary if and only if its parent parse tree node n has y n = 1 (see Figure 1b ). In addition to the length constraint on the members of Y (x), we require that each node n may have y n = 1 only if its parent \u03c0(n) has y \u03c0(n) = 1. This ensures that only subtrees may be deleted. While we use constituency parses rather than dependency parses, this model has similarities with the vine-growth model of Daum\u00e9 III (2006) .", |
| "cite_spans": [ |
| { |
| "start": 811, |
| "end": 827, |
| "text": "Daum\u00e9 III (2006)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 481, |
| "end": 490, |
| "text": "Figure 1b", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For the compressive model we define the set of cut choices C(y) for a summary y to be the set of edges in each parse that are broken in order to delete a subtree (see Figure 1b) . We require that each subtree has a non-terminal node for a root, and say that an edge (n, \u03c0(n)) between a node and its parent is broken if the parent has y \u03c0(n) = 1 but the child has y n = 0. Notice that breaking a single edge deletes an entire subtree.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 177, |
| "text": "Figure 1b)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Joint Model", |
| "sec_num": "2" |
| }, |
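The subtree-deletion representation can be sketched directly. In this toy Python illustration (node names are hypothetical), a parse is a map from each node to its parent, y gives the node indicators, a summary is valid when a node is on only if its parent is on, and C(y) is exactly the set of broken edges:

```python
def is_valid(y, parent):
    """A node may have y_n = 1 only if its parent has y_pi(n) = 1,
    so only whole subtrees can be deleted."""
    return all(y[n] == 0 or parent[n] is None or y[parent[n]] == 1 for n in y)

def cut_edges(y, parent):
    """C(y): edges (n, pi(n)) where the parent is kept but the child is dropped.
    Breaking one such edge deletes the entire subtree rooted at n."""
    return {(n, parent[n]) for n in y
            if parent[n] is not None and y[parent[n]] == 1 and y[n] == 0}
```

Note that deleting a larger subtree (e.g. a whole NP) produces a single cut edge even though several descendants are dropped, matching the observation that breaking one edge deletes an entire subtree.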
| { |
| "text": "Before learning weights in Section 3, we parameterize Objectives 1 and 2 using features. This entails parameterizing each bigram score v b and each subtree deletion score v c . For weights w \u2208 R d and feature functions g(b,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "x) \\in R^d and h(c, x) \\in R^d, we let: v_b = w^T g(b, x), v_c = w^T h(c, x)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For example, g(b, x) might include a feature that counts the number of documents in x that b appears in, and h(c, x) might include a feature that indicates whether the deleted subtree is an SBAR modifying a noun. This parameterization allows us to cast summarization as structured prediction. We can define a feature function f (y, x) \u2208 R d which factors over summaries y through B(y) and C(y):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "f(y, x) = \\sum_{b \\in B(y)} g(b, x) + \\sum_{c \\in C(y)} h(c, x)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
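The factored feature function is simple to compute in code. A toy sketch (the g and h feature maps here are made-up stand-ins, not the paper's feature set):

```python
def summary_features(summary_bigrams, summary_cuts, g, h, dim):
    """f(y, x): sum of g(b, x) over bigrams in the summary
    plus sum of h(c, x) over the cut choices made to create it."""
    f = [0.0] * dim
    for b in summary_bigrams:
        for j, gj in enumerate(g(b)):
            f[j] += gj
    for c in summary_cuts:
        for j, hj in enumerate(h(c)):
            f[j] += hj
    return f

def model_score(w, f):
    """w^T f(y, x): the linear score the predictor maximizes."""
    return sum(wj * fj for wj, fj in zip(w, f))
```

Because f factors over bigrams and cuts, maximizing w^T f(y, x) decomposes into the per-bigram and per-cut scores v_b and v_c, which is what makes ILP inference possible.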
| { |
| "text": "Using this characterization of summaries as feature vectors we can define a linear predictor for summarization:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "d(x; w) = \\arg\\max_{y \\in Y(x)} w^T f(y, x) (3) = \\arg\\max_{y \\in Y(x)} \\sum_{b \\in B(y)} v_b + \\sum_{c \\in C(y)} v_c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The arg max in Equation 3 optimizes Objective 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Learning weights for Objective 1 where Y (x) is the set of extractive summaries gives our LEARNED EXTRACTIVE system. Learning weights for Objective 2 where Y (x) is the set of compressive summaries, and C(y) the set of broken edges that produce subtree deletions, gives our LEARNED COMPRESSIVE system, which is our joint model of extraction and compression.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameterization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Discriminative training attempts to minimize the loss incurred during prediction by optimizing an objective on the training set. We will perform discriminative training using a loss function that directly measures end-to-end summarization quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structured Learning", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In Section 4 we show that finding summaries that optimize Objective 2, Viterbi prediction, is efficient. Online learning algorithms like perceptron or the margin-infused relaxed algorithm (MIRA) (Crammer and Singer, 2003) are frequently used for structured problems where Viterbi inference is available. However, we find that such methods are unstable on our problem. We instead turn to an approach that optimizes a batch objective which is sensitive to all constraints on all instances, but is efficient by adding these constraints incrementally.", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 221, |
| "text": "(Crammer and Singer, 2003)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structured Learning", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For our problem the data set consists of pairs of document sets and label summaries,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max-margin objective", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "D = \\{(x_i, y^*_i) : i = 1, \\ldots, N\\}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max-margin objective", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Note that the label summaries can be expressed as vectors y * because our training summaries are variously extractive or extractive and compressive (see Section 5). We use a soft-margin support vector machine (SVM) (Vapnik, 1998) objective over the full structured output space (Taskar et al., 2003; Tsochantaridis et al., 2004) of extractive and compressive summaries:", |
| "cite_spans": [ |
| { |
| "start": 215, |
| "end": 229, |
| "text": "(Vapnik, 1998)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 278, |
| "end": 299, |
| "text": "(Taskar et al., 2003;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 300, |
| "end": 328, |
| "text": "Tsochantaridis et al., 2004)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max-margin objective", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\\min_w \\frac{1}{2} \\|w\\|^2 + \\frac{C}{N} \\sum_{i=1}^N \\xi_i (4) \\quad s.t. \\forall i, \\forall y \\in Y(x_i) (5): w^T (f(y^*_i, x_i) - f(y, x_i)) \\ge \\ell(y, y^*_i) - \\xi_i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max-margin objective", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The constraints in Equation 5 require that the difference in model score between each possible summary y and the gold summary y * i be no smaller than the loss \u2113(y, y * i ), padded by a per-instance slack of \u03be i . We use bigram recall as our loss function (see Section 3.3). C is the regularization constant. When the output space Y (x i ) is small, these constraints can be explicitly enumerated. In this case it is standard to solve the dual, which is a quadratic program. Unfortunately, the size of the output space of extractive summaries is exponential in the number of sentences in the input document set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max-margin objective", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The cutting-plane algorithm deals with the exponential number of constraints in Equation 5 by performing constraint induction (Tsochantaridis et al., 2004) . It alternates between solving Objective 4 with a reduced set of currently active constraints, and adding newly active constraints to the set. In our application, this approach efficiently solves the structured SVM training problem up to some specified tolerance \u03b5.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 155, |
| "text": "(Tsochantaridis et al., 2004)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Suppose \u0175 and \u03be optimize Objective 4 under the currently active constraints on a given iteration. Notice that the \u0177_i satisfying", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\hat{y}_i = \\arg\\max_{y \\in Y(x_i)} \\hat{w}^T f(y, x_i) + \\ell(y, y^*_i)", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "corresponds to the constraint in the fully constrained problem, for training instance (x_i, y^*_i), most violated by \u0175 and \u03be. On each round of constraint induction the cutting-plane algorithm computes the arg max in Equation 6 for a training instance, which is referred to as loss-augmented prediction, and adds the corresponding constraint to the active set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The constraints from Equation (5) are equivalent to:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\\forall i: w^T f(y^*_i, x_i) \\ge \\max_{y \\in Y(x_i)} [w^T f(y, x_i) + \\ell(y, y^*_i)] - \\xi_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Thus, if loss-augmented prediction turns up no new constraints on a given iteration, the current solution to the reduced problem, \u0175 and \u03be, is the solution to the full SVM training problem. In practice, constraints are only added if the right hand side of Equation (5) exceeds the left hand side by at least \u03b5. Tsochantaridis et al. (2004) prove that only O(N/\u03b5) constraints are added before constraint induction finds a C\u03b5-optimal solution.", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 338, |
| "text": "Tsochantaridis et al. (2004)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Loss-augmented prediction is not always tractable. Luckily, our choice of loss function, bigram recall, factors over bigrams. Thus, we can easily perform loss-augmented prediction using the same procedure we use to perform Viterbi prediction (described in Section 4). We simply modify each bigram value v b to include bigram b's contribution to the total loss. We solve the intermediate partially-constrained max-margin problems using the factored sequential minimal optimization (SMO) algorithm (Platt, 1999; Taskar et al., 2004) . In practice, for \u03b5 = 10^{-4}, the cutting-plane algorithm converges after only three passes through the training set when applied to our summarization task.", |
| "cite_spans": [ |
| { |
| "start": 496, |
| "end": 509, |
| "text": "(Platt, 1999;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 510, |
| "end": 530, |
| "text": "Taskar et al., 2004)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cutting-plane algorithm", |
| "sec_num": "3.2" |
| }, |
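The constraint-induction loop can be sketched on a toy problem where each instance's candidate set is small enough to enumerate. This is illustrative only: the inner reduced-problem solver below is a plain subgradient method standing in for the factored SMO solver used in the paper, and the feature vectors and losses are made up.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve_reduced(instances, active, dim, C, steps=3000, lr=0.1):
    """Approximately solve the SVM primal restricted to the currently
    active constraints via subgradient descent (stand-in for SMO)."""
    w = [0.0] * dim
    n = len(instances)
    for t in range(1, steps + 1):
        grad = list(w)  # gradient of (1/2)||w||^2
        for i, (feats, gold, loss) in enumerate(instances):
            if not active[i]:
                continue
            # most violated active constraint for instance i
            y = max(active[i],
                    key=lambda y: loss[y] - (dot(w, feats[gold]) - dot(w, feats[y])))
            if loss[y] - (dot(w, feats[gold]) - dot(w, feats[y])) > 0:
                for j in range(dim):
                    grad[j] += (C / n) * (feats[y][j] - feats[gold][j])
        step = lr / t ** 0.5
        w = [wj - step * gj for wj, gj in zip(w, grad)]
    return w

def cutting_plane(instances, dim, C=1.0, eps=1e-4, max_rounds=50):
    """Alternate loss-augmented prediction (constraint induction) with
    re-solving the reduced problem, until no constraint is violated by > eps."""
    w = [0.0] * dim
    active = [set() for _ in instances]
    for _ in range(max_rounds):
        added = False
        for i, (feats, gold, loss) in enumerate(instances):
            # loss-augmented prediction over the (enumerable) candidate set
            yhat = max(feats, key=lambda y: dot(w, feats[y]) + loss[y])
            slack = max([0.0] + [loss[y] - (dot(w, feats[gold]) - dot(w, feats[y]))
                                 for y in active[i]])
            if loss[yhat] - (dot(w, feats[gold]) - dot(w, feats[yhat])) > slack + eps:
                active[i].add(yhat)
                added = True
        if not added:
            break
        w = solve_reduced(instances, active, dim, C)
    return w
```

In the real system the candidate set is exponentially large, so the loss-augmented arg max is computed by the ILP machinery of Section 4 rather than by enumeration.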
| { |
| "text": "In the simplest case, 0-1 loss, the system only receives credit for exactly identifying the label summary. Since there are many reasonable summaries we are less interested in exactly matching any specific training instance, and more interested in the degree to which a predicted summary deviates from a label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss function", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The standard method for automatically evaluating a summary against a reference is ROUGE, which we simplify slightly to bigram recall. With an extractive reference denoted by y * , our loss function is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss function", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\\ell(y, y^*) = \\frac{|B(y) \\cap B(y^*)|}{|B(y^*)|}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss function", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We verified that bigram recall correlates well with ROUGE and with manual metrics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss function", |
| "sec_num": "3.3" |
| }, |
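In code, the bigram-recall quantity is one line (a sketch; bigrams are modeled as sets of token pairs):

```python
def bigram_recall(pred_bigrams, ref_bigrams):
    """Fraction of the reference's bigram types that the predicted
    summary also contains (the quantity the loss is built from)."""
    if not ref_bigrams:
        return 0.0
    return len(pred_bigrams & ref_bigrams) / len(ref_bigrams)
```

Because this decomposes into a per-bigram contribution, it can be folded into the bigram values v_b during loss-augmented prediction, as described in Section 3.2.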
| { |
| "text": "We show how to perform prediction with the extractive and compressive models by solving ILPs. For many instances, a generic ILP solver can find exact solutions to the prediction problems in a matter of seconds. For difficult instances, we present a fast approximate algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Prediction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Gillick and Favre (2009) express the optimization of Objective 1 as an ILP. Let l_s be the length of sentence s, let Q_{sb} indicate whether sentence s contains bigram b, and let z_b indicate bigram b's presence in the summary:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for extraction", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\max_{y,z} \\sum_b v_b z_b \\quad s.t. \\sum_s l_s y_s \\le L_{max}; \\quad \\forall s,b: y_s Q_{sb} \\le z_b (7); \\quad \\forall b: \\sum_s y_s Q_{sb} \\ge z_b", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "ILP for extraction", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Constraints 7 and 8 ensure consistency between sentences and bigrams. Notice that Constraint 7 requires that selecting a sentence entails selecting all its bigrams, and Constraint 8 requires that selecting a bigram entails selecting at least one sentence that contains it. Solving the ILP is fast in practice. Using the GNU Linear Programming Kit (GLPK) on a 3.2GHz Intel machine, decoding took less than a second on most instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for extraction", |
| "sec_num": "4.1" |
| }, |
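The two consistency constraints are easy to check mechanically. A small sketch (Q is a sentence-by-bigram incidence matrix; this is an illustration, not the paper's code):

```python
def feasible(y, z, lengths, max_len, Q):
    """Check a candidate (y, z) against the extraction ILP's constraints."""
    S, B = len(y), len(z)
    # Length budget: total words of selected sentences within L_max.
    if sum(lengths[s] * y[s] for s in range(S)) > max_len:
        return False
    # Constraint (7): a selected sentence selects all of its bigrams.
    if any(y[s] * Q[s][b] > z[b] for s in range(S) for b in range(B)):
        return False
    # Constraint (8): a selected bigram needs some selected sentence containing it.
    if any(sum(y[s] * Q[s][b] for s in range(S)) < z[b] for b in range(B)):
        return False
    return True
```

Together, (7) and (8) force z to be exactly the indicator of the bigram set B(y), so the ILP objective really does score bigram coverage.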
| { |
| "text": "We can extend the ILP formulation of extraction to solve the compressive problem. Let l n be the number of words node n has as children. With this notation we can write the length restriction as n l n y n \u2264 L max . Let the presence of each cut c in C(y) be indicated by the binary variable z c , which is active if and only if y n = 0 but y \u03c0(n) = 1, where node \u03c0(n) is the parent of node n. The constraints on z c are diagrammed in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 433, |
| "end": 441, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "While it is possible to let B(y) contain all bigrams present in the compressive summary, the reduction of B(y) makes the ILP formulation efficient. We omit from B(y) bigrams that are the result of deleted intermediate words. As a result the required number of variables z b is linear in the length of a sentence. The constraints on z b are given in Figure 2 . They can be expressed in terms of the variables y n .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 349, |
| "end": 357, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "By solving the following ILP we can compute the arg max required for prediction in the joint model:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\max_{y,z} \\sum_b v_b z_b + \\sum_c v_c z_c \\quad s.t. \\sum_n l_n y_n \\le L_{max}; \\quad \\forall n: y_n \\le y_{\\pi(n)}", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2200b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z_b = 1 \\iff b \\in B(y)", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2200c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z_c = 1 \\iff c \\in C(y)", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Constraint 9 encodes the requirement that only full subtrees may be deleted. For simplicity, we have written Constraints 10 and 11 in implicit form. These constraints can be encoded explicitly using O(N) linear constraints, where N is the number of words in the document set x. The reduction of B(y) to include only bigrams not resulting from deleted intermediate words avoids O(N^2) required constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
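Such iff-constraints can be made explicit with the standard AND-style linearization. As a sketch, suppose (a simplifying assumption, in the spirit of the reduced B(y)) that a kept bigram corresponds to two adjacent word-parent nodes u and v both being kept; then z = y_u AND y_v is enforced by three linear inequalities, which the check below verifies exhaustively:

```python
from itertools import product

def and_linearization(z, yu, yv):
    """Linear inequalities forcing z = yu AND yv for binary variables:
    z <= yu,  z <= yv,  z >= yu + yv - 1."""
    return z <= yu and z <= yv and z >= yu + yv - 1

def equivalent():
    """The three inequalities admit exactly the 0/1 assignments
    where z equals the logical AND of yu and yv."""
    return all(and_linearization(z, yu, yv) == (z == (yu and yv))
               for z, yu, yv in product((0, 1), repeat=3))
```

The analogous construction for a cut indicator z_c (active iff the parent is kept and the child is dropped) is z_c >= y_pi(n) - y_n together with z_c <= y_pi(n) and z_c <= 1 - y_n.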
| { |
| "text": "In practice, solving this ILP for joint extraction and compression is, on average, an order of magnitude slower than solving the ILP for pure extraction, and for certain instances finding the exact solution is prohibitively slow.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ILP for joint compression and extraction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "One common way to quickly approximate an ILP is to solve its LP relaxation (and round the results). We found that, while very fast, the LP relaxation of the joint ILP gave poor results, finding unacceptably suboptimal solutions. This may have been problematic for Martins and Smith (2009) as well. We developed an alternative fast approximate joint extractive and compressive solver that gives better results in terms of both objective value and bigram recall of resulting solutions.", |
| "cite_spans": [ |
| { |
| "start": 264, |
| "end": 288, |
| "text": "Martins and Smith (2009)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fast approximate prediction", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The approximate joint solver first extracts a subset of the sentences in the document set that total no more than M words. In a second step, we apply the exact joint extractive and compressive summarizer (see Section 4.2) to the resulting extraction. The objective we maximize in performing the initial extraction is different from the one used in extractive summarization. Specifically, we pick an extraction that maximizes s\u2208y b\u2208s v b . This objective rewards redundant bigrams, and thus is likely to give the joint solver multiple options for including the same piece of relevant content.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fast approximate prediction", |
| "sec_num": "4.3" |
| }, |
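The first-stage objective differs from the extractive one precisely in counting bigram *tokens* rather than types, so repeated valuable bigrams raise the score. A minimal sketch of the contrast, with hypothetical names and weights:

```python
def bigram_tokens(sentence):
    """List of bigram occurrences (tokens) in a tokenized sentence."""
    toks = sentence.split()
    return list(zip(toks, toks[1:]))

def stage1_score(extraction, values):
    """First-stage objective: sum v_b over every bigram occurrence in
    every chosen sentence. Repeating a valuable bigram raises the score,
    keeping several candidate carriers of the same content available to
    the second-stage joint solver."""
    return sum(values.get(b, 0.0)
               for s in extraction
               for b in bigram_tokens(s))

def type_score(extraction, values):
    """Extractive-summary objective, for contrast: each bigram type
    counts once no matter how often it appears."""
    covered = {b for s in extraction for b in bigram_tokens(s)}
    return sum(values.get(b, 0.0) for b in covered)

ext = ["the mine exploded", "the mine was closed"]
vals = {("the", "mine"): 2.0}
```

Here the redundant bigram ("the", "mine") counts twice under the first-stage objective but only once under the type-based one, which is the intended behavior: the intermediate extraction should hedge by keeping multiple sentences that carry the same content.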
| { |
| "text": "M is a parameter that trades-off between approximation quality and problem difficulty. When M is the size of the document set x, the approximate solver solves the exact joint problem. In Figure 3 we plot the trade-off between approximation quality and computation time, comparing to the exact joint solver, an exact solver that is limited to extractive solutions, and the LP relaxation solver. The results show that the approximate joint solver yields substantial improvements over the LP relaxation, and can achieve results comparable to those produced by the exact solver with a 5-fold reduction in computation time. On particularly difficult instances the parameter M can be decreased, ensuring that all instances are solved in a reasonable time period.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 195, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Fast approximate prediction", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We use the data from the Text Analysis Conference (TAC) evaluations from 2008 and 2009, a total of 92 multi-document summarization problems. Each problem asks for a 100-word-limited summary of 10 related input documents and provides a set of four abstracts written by experts. These are the nonupdate portions of the TAC 2008 and 2009 tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To train the extractive system described in Section 2, we use as our labels y * the extractions with the largest bigram recall values relative to the sets of references. While these extractions are inferior to the abstracts, they are attainable by our model, a quality found to be advantageous in discriminative training for machine translation (Liang et al., 2006; COUNT ", |
| "cite_spans": [ |
| { |
| "start": 345, |
| "end": 365, |
| "text": "(Liang et al., 2006;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 366, |
| "end": 366, |
| "text": "", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5" |
| }, |
| { |
| "text": "All two-and three-way conjunctions of COUNT, STOP, and POSITION features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONJ:", |
| "sec_num": null |
| }, |
| { |
| "text": "Bias feature, active on all bigrams. Chiang et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 57, |
| "text": "Chiang et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BIAS:", |
| "sec_num": null |
| }, |
| { |
| "text": "Previous work has referred to the lack of extracted, compressed data sets as an obstacle to joint learning for summarizaiton (Daum\u00e9 III, 2006; Martins and Smith, 2009) . We collected joint data via a Mechanical Turk task. To make the joint annotation task more feasible, we adopted an approximate approach that closely matches our fast approximate prediction procedure. Annotators were shown a 150-word maximum bigram recall extractions from the full document set and instructed to form a compressed summary by deleting words until 100 or fewer words remained. Each task was performed by two annotators. We chose the summary we judged to be of highest quality from each pair to add to our corpus. This gave one gold compressive summary y * for each of the 44 problems in the TAC 2009 set. We used these labels to train our joint extractive and compressive system described in Section 2. Of the 288 total sentences presented to annotators, 38 were unedited, 45 were deleted, and 205 were compressed by an average of 7.5 words.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 142, |
| "text": "(Daum\u00e9 III, 2006;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 143, |
| "end": 167, |
| "text": "Martins and Smith, 2009)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BIAS:", |
| "sec_num": null |
| }, |
| { |
| "text": "Here we describe the features used to parameterize our model. Relative to some NLP tasks, our feature sets are small: roughly two hundred features on bigrams and thirteen features on subtree deletions. This is because our data set is small; with only 48 training documents we do not have the statistical support to learn weights for more features. For larger training sets one could imagine lexicalized versions of the features we describe.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Indicates phrase involved in coordination. Four versions of this feature: NP, VP, S, SBAR. S-ADJUNCT: Indicates a child of an S, adjunct to and left of the matrix verb. Four version of this feature: CC, PP, ADVP, SBAR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COORD:", |
| "sec_num": null |
| }, |
| { |
| "text": "Indicates a relative clause, SBAR modifying a noun. ATTR-C:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "REL-C:", |
| "sec_num": null |
| }, |
| { |
| "text": "Indicates a sentence-final attribution clause, e.g. 'the senator announced Friday.' ATTR-PP:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "REL-C:", |
| "sec_num": null |
| }, |
| { |
| "text": "Indicates a PP attribution, e.g. 'according to the senator.' TEMP-PP: Indicates a temporal PP, e.g. 'on Friday.' TEMP-NP: Indicates a temporal NP, e.g. 'Friday.' BIAS:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "REL-C:", |
| "sec_num": null |
| }, |
| { |
| "text": "Bias feature, active on all subtree deletions. (c, x) that we use to characterize the subtree deleted by cutting edge c = (n, \u03c0(n)) in the joint extractive and compressive model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 47, |
| "end": 53, |
| "text": "(c, x)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "REL-C:", |
| "sec_num": null |
| }, |
| { |
| "text": "Our bigram features include document counts, the earliest position in a document of a sentence that contains the bigram, and membership of each word in a standard set of stopwords. We also include all possible two-and three-way conjuctions of these features. Table 1 describes the features in detail. We use stemmed bigrams and prune bigrams that appear in fewer than three input documents. Table 2 gives a description of our subtree tree deletion features. Of course, by training to optimize a metric like ROUGE, the system benefits from restrictions on the syntactic variety of edits; the learning is therefore more about deciding when an edit is worth the coverage trade-offs rather than finegrained decisions about grammaticality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 259, |
| "end": 266, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 391, |
| "end": 398, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bigram features", |
| "sec_num": "6.1" |
| }, |
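The bigram feature pipeline described above can be sketched end to end: stem, count document frequency, prune bigrams seen in fewer than three input documents, and emit COUNT/STOP/POSITION indicators plus their two- and three-way conjunctions. The crude suffix stemmer, stopword list, and feature-name strings below are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "to", "was"}

def stem(w):
    # crude suffix stripper standing in for a real stemmer
    for suf in ("ing", "ed", "s"):
        if w.endswith(suf) and len(w) > len(suf) + 2:
            return w[: -len(suf)]
    return w

def doc_bigrams(doc):
    """Set of stemmed bigram types appearing anywhere in a document
    (a document is a list of sentence strings)."""
    out = set()
    for sent in doc:
        toks = [stem(w.lower()) for w in sent.split()]
        out.update(zip(toks, toks[1:]))
    return out

def prune(docs, min_df=3):
    """Keep bigrams appearing in at least `min_df` input documents."""
    df = Counter(b for doc in docs for b in doc_bigrams(doc))
    return {b: n for b, n in df.items() if n >= min_df}

def features(bigram, df, earliest_pos):
    """Indicator features for one bigram: document count, all-stopword
    flag, earliest sentence position, all 2- and 3-way conjunctions of
    those indicators, and a bias."""
    base = {
        f"COUNT={df}": 1.0,
        f"STOP={int(all(w in STOPWORDS for w in bigram))}": 1.0,
        f"POSITION={earliest_pos}": 1.0,
    }
    names = sorted(base)
    for r in (2, 3):
        for combo in combinations(names, r):
            base["CONJ:" + "&".join(combo)] = 1.0
    base["BIAS"] = 1.0
    return base
```

With three base indicators, each bigram gets 3 + 3 + 1 + 1 = 8 active features, which is consistent with the paper's note that the bigram feature set stays small (roughly two hundred features after binning counts and positions).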
| { |
| "text": "We constrain the model to only allow subtree deletions where one of the features in Table 2 (aside from BIAS) is active. The root, and thus the entire sentence, may always be cut. We choose this particular set of allowed deletions by looking at human annotated data and taking note of the most common types of edits. Edits which are made rarely by humans should be avoided in most scenarios, and we simply don't have enough data to learn when to do them safely.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 91, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Subtree deletion features", |
| "sec_num": "6.2" |
| }, |
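The deletion gate described above amounts to a predicate over candidate edges: a subtree may be cut only when some Table 2 feature other than BIAS fires, or when the cut removes the whole sentence. A toy sketch, where the node representation and the two example feature rules are assumptions for illustration:

```python
# Features from Table 2 that license a deletion (BIAS alone does not).
LICENSING = {"COORD", "S-ADJUNCT", "REL-C", "ATTR-C", "ATTR-PP",
             "TEMP-PP", "TEMP-NP"}

def deletion_features(node):
    """Toy feature function over a (label, parent_label, is_root) tuple.
    Real features would inspect the full parse around the cut edge."""
    label, parent, is_root = node
    feats = {"BIAS"}
    if label == "SBAR" and parent == "NP":
        feats.add("REL-C")          # relative clause modifying a noun
    if label in {"NP", "VP", "S", "SBAR"} and parent == label:
        feats.add("COORD")          # conjunct under a like-labeled parent
    return feats

def may_delete(node):
    """A subtree can be cut iff it is the root (i.e., the whole sentence
    is dropped) or some licensing feature besides BIAS is active."""
    label, parent, is_root = node
    return is_root or bool(deletion_features(node) & LICENSING)
```

This hard constraint keeps the model from proposing edit types never seen in the annotated data, leaving the learned weights to decide only among edits humans actually make.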
| { |
| "text": "R-2 R-SU4 Pyr LQ LAST DOCUMENT 4.00 5.85 9.39 23.5 7.2 EXT. BASELINE 6.85 10.05 13.00 35.0 6.2 LEARNED EXT. 7.43 11.05 13.86 38.4 6.6 LEARNED COMP. 7.75 11.70 14.38 41.3 6.5 Table 3 : Bigram Recall (BR), ROUGE (R-2 and R-SU4) and Pyramid (Pyr) scores are multiplied by 100; Linguistic Quality (LQ) is scored on a 1 (very poor) to 10 (very good) scale.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 173, |
| "text": "EXT. 7.43 11.05 13.86 38.4 6.6 LEARNED COMP. 7.75 11.70 14.38 41.3 6.5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 174, |
| "end": 181, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BR", |
| "sec_num": null |
| }, |
| { |
| "text": "We set aside the TAC 2008 data set (48 problems) for testing and use the TAC 2009 data set (44 problems) for training, with hyper-parameters set to maximize six-fold cross-validation bigram recall on the training set. We run the factored SMO algorithm until convergence, and run the cutting-plane algorithm until convergence for = 10 \u22124 . We used GLPK to solve all ILPs. We solved extractive ILPs exactly, and joint extractive and compressive ILPs approximately using an intermediate extraction size of 1000. Constituency parses were produced using the Berkeley parser (Petrov and Klein, 2007) . We show results for three systems, EXTRACTIVE BASE-LINE, LEARNED EXTRACTIVE, LEARNED COM-PRESSIVE, and the standard baseline that extracts the first 100 words in the the most recent document, LAST DOCUMENT.", |
| "cite_spans": [ |
| { |
| "start": 569, |
| "end": 593, |
| "text": "(Petrov and Klein, 2007)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 7.1 Experimental setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our evaluation results are shown in Table 3 . ROUGE-2 (based on bigrams) and ROUGE-SU4 (based on both unigrams and skip-bigrams, separated by up to four words) are given by the official ROUGE toolkit with the standard options (Lin, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 237, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 43, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
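At its core, ROUGE-2 recall is the fraction of reference bigram tokens matched by the candidate summary. The sketch below is a deliberate simplification of the official toolkit (which adds stemming, stopword options, and jackknifing over references); it is included only to make the metric concrete.

```python
from collections import Counter

def rouge2_recall(candidate, references):
    """Fraction of reference bigram tokens matched by the candidate,
    with clipped counts, pooled over all references -- a simplified
    stand-in for the official ROUGE-2 computation."""
    def bigram_counts(text):
        toks = text.lower().split()
        return Counter(zip(toks, toks[1:]))

    cand = bigram_counts(candidate)
    matched = total = 0
    for ref in references:
        rc = bigram_counts(ref)
        total += sum(rc.values())
        matched += sum(min(n, cand[b]) for b, n in rc.items())
    return matched / total if total else 0.0
```

Because the metric is recall over reference bigrams, the bigram-valued objectives used throughout the paper optimize a quantity closely aligned with what ROUGE-2 ultimately measures.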
| { |
| "text": "Pyramid (Nenkova and Passonneau, 2004 ) is a manually evaluated measure of recall on facts or Semantic Content Units appearing in the reference summaries. It is designed to help annotators distinguish information content from linguistic quality. Two annotators performed the entire evaluation without overlap by splitting the set of problems in half.", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 37, |
| "text": "(Nenkova and Passonneau, 2004", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "To evaluate linguistic quality, we sent all the summaries to Mechanical Turk (with two times redun- Table 4 : Summary statistics for the summaries generated by each system: Average number of sentences per summary, average number of words per summary sentence, and average number of non-stopword word types per summary. dancy), using the template and instructions designed by Gillick and Liu (2010) . They report that Turkers can faithfully reproduce experts' rankings of average system linguistic quality (though their judgements of content are poorer). The table shows average linguistic quality.", |
| "cite_spans": [ |
| { |
| "start": 375, |
| "end": 397, |
| "text": "Gillick and Liu (2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 107, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "All the content-based metrics show substantial improvement for learned systems over unlearned ones, and we see an extremely large improvement for the learned joint extractive and compressive system over the previous state-of-the-art EXTRACTIVE BASELINE. The ROUGE scores for the learned joint system, LEARNED COMPRESSIVE, are, to our knowledge, the highest reported on this task. We cannot compare Pyramid scores to other reported scores because of annotator difference. As expected, the LAST DOCUMENT baseline outperforms other systems in terms of linguistic quality. But, importantly, the gains achieved by the joint extractive and compressive system in content-based metrics do not come at the cost of linguistic quality when compared to purely extractive systems. Table 4 shows statistics on the outputs of the systems we evaluated. The joint extractive and compressive system fits more word types into a summary than the extractive systems, but also produces longer sentences on average. Reading the output summaries more carefully suggests that by learning to extract and compress jointly, our joint system has the flexibility to use or create reasonable, mediumlength sentences, whereas the extractive systems are stuck with a few valuable long sentences, but several less productive shorter sentences. Example summaries produced by the joint system are given in Figure 4 along with reference summaries produced by humans.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 768, |
| "end": 775, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1370, |
| "end": 1378, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "LEARNED COMPRESSIVE: The country's work safety authority will release the list of the first batch of coal mines to be closed down said Wang Xianzheng, deputy director of the National Bureau of Production Safety Supervision and Administration. With its coal mining safety a hot issue, attracting wide attention from both home and overseas, China is seeking solutions from the world to improve its coal mining safety system. Despite government promises to stem the carnage the death toll in China's disaster-plagued coal mine industry is rising according to the latest statistics released by the government Friday. Fatal coal mine accidents in China rose 8.5 percent in the first eight months of this year with thousands dying despite stepped-up efforts to make the industry safer state media said Wednesday. REFERENCE: China's accident-plagued coal mines cause thousands of deaths and injuries annually. 2004 saw over 6,000 mine deaths. January through August 2005, deaths rose 8.5% over the same period in 2004. Most accidents are gas explosions, but fires, floods, and caveins also occur. Ignored safety procedures, outdated equipment, and corrupted officials exacerbate the problem. Official responses include shutting down thousands of ill-managed and illegally-run mines, punishing errant owners, issuing new safety regulations and measures, and outlawing local officials from investing in mines. China also sought solutions at the Conference on South African Coal Mining Safety Technology and Equipment held in Beijing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "LEARNED COMPRESSIVE: Karl Rove the White House deputy chief of staff told President George W. Bush and others that he never engaged in an effort to disclose a CIA operative's identity to discredit her husband's criticism of the administration's Iraq policy according to people with knowledge of Rove's account in the investigation. In a potentially damaging sign for the Bush administration special counsel Patrick Fitzgerald said that although his investigation is nearly complete it's not over. Lewis Scooter Libby Vice President Dick Cheney's chief of staff and a key architect of the Iraq war was indicted Friday on felony charges of perjury making false statements to FBI agents and obstruction of justice for impeding the federal grand jury investigating the CIA leak case. REFERENCE: Special Prosecutor Patrick Fitzgerald is investigating who leaked to the press that Valerie Plame, wife of former Ambassador Joseph Wilson, was an undercover CIA agent. Wilson was a critic of the Bush administration. Administration staffers Karl Rove and I. Lewis Libby are the focus of the investigation. NY Times correspondent Judith Miller was jailed for 85 days for refusing to testify about Libby. Libby was eventually indicted on five counts: 2 false statements, 1 obstruction of justice, 2 perjury. Libby resigned immediately. He faces 30 years in prison and a fine of $1.25 million if convicted. Libby pleaded not guilty. Figure 4 : Example summaries produced by our learned joint model of extraction and compression. These are each 100-word-limited summaries of a collection of ten documents from the TAC 2008 data set. Constituents that have been removed via subtree deletion are grayed out. References summaries produced by humans are provided for comparison.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1421, |
| "end": 1429, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Jointly learning to extract and compress within a unified model outperforms learning pure extraction, which in turn outperforms a state-of-the-art extractive baseline. Our system gives substantial increases in both automatic and manual content metrics, while maintaining high linguistic quality scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "See Text Analysis Conference results in 2008 and 2009.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their comments. This project is supported by DARPA under grant N10AP20007.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The use of MMR, diversity-based reranking for reordering documents and producing summaries", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Goldstein", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of SIGIR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Carbonell and J. Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proc. of SIGIR.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Online largemargin training of syntactic and structural translation features", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Marton", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Chiang, Y. Marton, and P. Resnik. 2008. Online large- margin training of syntactic and structural translation features. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Global Inference for Sentence Compression: An Integer Linear Programming Approach", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "31", |
| "issue": "", |
| "pages": "399--429", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Clarke and M. Lapata. 2008. Global Inference for Sen- tence Compression: An Integer Linear Programming Approach. Journal of Artificial Intelligence Research, 31:399-429.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Ultraconservative online algorithms for multiclass problems", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "951--991", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Crammer and Y. Singer. 2003. Ultraconservative on- line algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Practical structured learning techniques for natural language processing", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "C" |
| ], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H.C. Daum\u00e9 III. 2006. Practical structured learning techniques for natural language processing. Ph.D. thesis, University of Southern California.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A scalable global model for summarization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Favre", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of ACL Workshop on Integer Linear Programming for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Gillick and B. Favre. 2009. A scalable global model for summarization. In Proc. of ACL Workshop on In- teger Linear Programming for Natural Language Pro- cessing.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Non-Expert Evaluation of Summarization Systems is Risky", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of NAACL Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Gillick and Y. Liu. 2010. Non-Expert Evaluation of Summarization Systems is Risky. In Proc. of NAACL Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Statistics-based summarization-step one: Sentence compression", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Knight and D. Marcu. 2001. Statistics-based summarization-step one: Sentence compression. In Proc. of AAAI.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Enhancing diversity, coverage and balance for summarization through structure learning", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "R" |
| ], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zha", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of the 18th International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, K. Zhou, G.R. Xue, H. Zha, and Y. Yu. 2009. Enhancing diversity, coverage and balance for summa- rization through structure learning. In Proc. of the 18th International Conference on World Wide Web.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An end-to-end discriminative approach to machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to ma- chine translation. In Proc. of the ACL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Improving summarization performance by sentence compression: a pilot study", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of ACL Workshop on Information Retrieval with Asian Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.Y. Lin. 2003. Improving summarization performance by sentence compression: a pilot study. In Proc. of ACL Workshop on Information Retrieval with Asian Languages.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Rouge: A package for automatic evaluation of summaries", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of ACL Workshop on Text Summarization Branches Out", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.Y. Lin. 2004. Rouge: A package for automatic evalua- tion of summaries. In Proc. of ACL Workshop on Text Summarization Branches Out.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Summarization with a joint model for sentence extraction and compression", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "F T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of NAACL Workshop on Integer Linear Programming for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A.F.T. Martins and N.A. Smith. 2009. Summarization with a joint model for sentence extraction and com- pression. In Proc. of NAACL Workshop on Integer Lin- ear Programming for Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Discriminative sentence compression with soft syntactic constraints", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. McDonald. 2006. Discriminative sentence compres- sion with soft syntactic constraints. In Proc. of EACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Evaluating content selection in summarization: The pyramid method", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Passonneau", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Nenkova and R. Passonneau. 2004. Evaluating con- tent selection in summarization: The pyramid method. In Proc. of NAACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The impact of frequency on summarization", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Nenkova and L. Vanderwende. 2005. The impact of frequency on summarization. Technical report, MSR- TR-2005-101. Redmond, Washington: Microsoft Re- search.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning and inference for hierarchically split PCFGs", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Petrov and D. Klein. 2007. Learning and inference for hierarchically split PCFGs. In AAAI.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Fast training of support vector machines using sequential minimal optimization", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "C" |
| ], |
| "last": "Platt", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Advances in Kernel Methods", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.C. Platt. 1999. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods. MIT press.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Fastsum: Fast and accurate query-based multi-document summarization", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Schilder", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kondadadi", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Schilder and R. Kondadadi. 2008. Fastsum: Fast and accurate query-based multi-document summarization. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Document summarization using conditional random fields", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "T" |
| ], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Shen, J.T. Sun, H. Li, Q. Yang, and Z. Chen. 2007. Document summarization using conditional random fields. In Proc. of IJCAI.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Max-margin Markov networks", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. In Proc. of NIPS.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Max-margin parsing", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Sentence extraction as a classification task", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Teufel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of ACL Workshop on Intelligent and Scalable Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Teufel and M. Moens. 1997. Sentence extraction as a classification task. In Proc. of ACL Workshop on Intelligent and Scalable Text Summarization.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Support vector machine learning for interdependent and structured output spaces", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Tsochantaridis", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Altun", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. of ICML.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Statistical learning theory", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [ |
| "N" |
| ], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V.N. Vapnik. 1998. Statistical learning theory. John Wiley and Sons, New York.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Automatic generation of story highlights", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Woodsend", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Woodsend and M. Lapata. 2010. Automatic generation of story highlights. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Multi-document summarization by maximizing informative content-words", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Yih, J. Goodman, L. Vanderwende, and H. Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In Proc. of IJCAI.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Sentence compression as a component of a multidocument summarization system", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zajic", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "J" |
| ], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of the 2006 Document Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Zajic, B.J. Dorr, R. Schwartz, and J. Lin. 2006. Sentence compression as a component of a multi-document summarization system. In Proc. of the 2006 Document Understanding Workshop.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Diagram of (a) extractive and (b) joint extractive and compressive summarization models. Variables y s indicate the presence of sentences in the summary. Variables y n indicate the presence of parse tree nodes. Note that there is intentionally a bigram missing from (a).", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "express the optimization of Objective 1 for extractive summarization as an ILP. We begin here with their algorithm. Let each input sentence s have length l s . Let the presence of each bigram b in B(y) be indicated by the binary variable z b . Let Q sb be an indicator of the presence of bigram b in sentence s. They specify the following ILP over binary variables y and z:", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Diagram of ILP for joint extraction and compression. Variables z b indicate the presence of bigrams in the summary. Variables z c indicate edges in the parse tree that have been cut in order to remove subtrees. The figure suppresses bigram variables z stopped,in and z france,he to reduce clutter. Note that the edit shown is intentionally bad. It demonstrates a loss of bigram coverage.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Plot of objective value, bigram recall, and elapsed time for the approximate joint extractive and compressive solver against size of intermediate extraction set. Also shown are values for an LP relaxation approximate solver, a solver that is restricted to extractive solutions, and finally the exact compressive solver. These solvers do not use an intermediate extraction. Results are for 44 document sets, averaging about 5000 words per document set.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "text": ": 1(docCount(b) = \u2022) where docCount(b) is the number of documents containing b. STOP: 1(isStop(b1) = \u2022, isStop(b2) = \u2022) where isStop(w) indicates a stop word. POSITION: 1(docPosition(b) = \u2022) where docPosition(b) is the earliest position in a document of any sentence containing b, bucketing earliest positions \u2265 4.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "text": "Bigram features: component feature functions in g(b, x) that we use to characterize the bigram b in both the extractive and compressive models.", |
| "html": null, |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "text": "Subtree deletion features: component feature functions in h", |
| "html": null, |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |