| { |
| "paper_id": "D07-1001", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:19:45.145876Z" |
| }, |
| "title": "Modelling Compression with Discourse Constraints", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jclarke@ed.ac.uk" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Sentence compression holds promise for many applications ranging from summarisation to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this paper we present a discourse-informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of Integer Linear Programming. Experimental results show significant improvements over a state-of-the-art discourse-agnostic approach.", |
| "pdf_parse": { |
| "paper_id": "D07-1001", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Sentence compression holds promise for many applications ranging from summarisation to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this paper we present a discourse-informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of Integer Linear Programming. Experimental results show significant improvements over a state-of-the-art discourse-agnostic approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The computational treatment of sentence compression has recently attracted much attention in the literature. The task can be viewed as producing a summary of a single sentence that retains the most important information and remains grammatically correct (Jing 2000) . Sentence compression is commonly expressed as a word deletion problem: given an input sentence of words W = w 1 , w 2 , . . . , w n , the aim is to produce a compression by removing any subset of these words (Knight and Marcu 2002) .", |
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 265, |
| "text": "(Jing 2000)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 476, |
| "end": 499, |
| "text": "(Knight and Marcu 2002)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
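The word-deletion formulation above can be made concrete with a tiny sketch (an illustration, not the authors' code): a compression is simply a binary mask over the input words, so a sentence of n words has 2^n candidate compressions.

```python
from itertools import product

def compressions(words):
    """Enumerate every compression of a sentence, treating compression
    as deletion of an arbitrary subset of words."""
    for mask in product([0, 1], repeat=len(words)):
        yield [w for w, keep in zip(words, mask) if keep]

sentence = ["the", "very", "tall", "man", "smiled"]
all_comps = list(compressions(sentence))
print(len(all_comps))  # 2^5 = 32 candidates, from the empty string to the full sentence
```

The compression models discussed below differ chiefly in how they score this exponential candidate space and in how they search it.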
| { |
| "text": "Sentence compression can potentially benefit many applications. For example, in summarisation, a compression mechanism could improve the conciseness of the generated summaries (Jing 2000; Lin 2003) . Sentence compression could also be used to automatically generate subtitles for television programs; the transcripts cannot usually be used verbatim due to the rate of speech being too high (Vandeghinste and Pan 2004) . Other applications include compressing text to be displayed on small screens (Corston-Oliver 2001) such as mobile phones or PDAs, and producing audio scanning devices for the blind (Grefenstette 1998) .", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 187, |
| "text": "(Jing 2000;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 188, |
| "end": 197, |
| "text": "Lin 2003)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 390, |
| "end": 417, |
| "text": "(Vandeghinste and Pan 2004)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 601, |
| "end": 620, |
| "text": "(Grefenstette 1998)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most work to date has focused on a rather simple formulation of sentence compression that does not allow any rewriting operations, besides word removal. Moreover, compression is performed on isolated sentences without taking into account their surrounding context. An advantage of this simple view is that it renders sentence compression amenable to a variety of learning paradigms ranging from instantiations of the noisy-channel model (Galley and McKeown 2007; Knight and Marcu 2002; Turner and Charniak 2005) to Integer Linear Programming (Clarke and Lapata 2006a) and large-margin online learning (McDonald 2006) .", |
| "cite_spans": [ |
| { |
| "start": 437, |
| "end": 462, |
| "text": "(Galley and McKeown 2007;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 463, |
| "end": 485, |
| "text": "Knight and Marcu 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 486, |
| "end": 511, |
| "text": "Turner and Charniak 2005)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 542, |
| "end": 567, |
| "text": "(Clarke and Lapata 2006a)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 601, |
| "end": 616, |
| "text": "(McDonald 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we take a closer look at one of the simplifications associated with the compression task, namely that sentence reduction can be realised in isolation without making use of discourse-level information. This is clearly not true -professional abstracters often rely on contextual cues while creating summaries (Endres-Niggemeyer 1998) . Furthermore, determining what information is important in a sentence is influenced by a variety of contextual factors such as the discourse topic, whether the sentence introduces new entities or events that have not been mentioned before, and the reader's background knowledge.", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 345, |
| "text": "(Endres-Niggemeyer 1998)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The simplification is also at odds with most applications of sentence compression which aim to create a shorter document rather than a single sentence. The resulting document must not only be grammatical but also coherent if it is to function as a replacement for the original. However, this cannot be guaranteed without knowing how the discourse progresses from sentence to sentence. To give a simple example, a contextually aware compression system could drop a word or phrase from the current sentence, simply because it is not mentioned anywhere else in the document and is therefore deemed unimportant. Or it could decide to retain it for the sake of topic continuity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We are interested in creating a compression model that is appropriate for documents and sentences. To this end, we assess whether discourse-level information is helpful. Our analysis is informed by two popular models of discourse, Centering Theory (Grosz et al. 1995) and lexical chains (Morris and Hirst 1991) . Both approaches model local coherence: the way adjacent sentences bind together to form a larger discourse. Our compression model is an extension of the integer programming formulation proposed by Clarke and Lapata (2006a) . Their approach is conceptually simple: it consists of a scoring function coupled with a small number of syntactic and semantic constraints. Discourse-related information can be easily incorporated in the form of additional constraints. We employ our model to perform sentence compression throughout a whole document (by compressing sentences sequentially) and evaluate whether the resulting text is understandable and informative using a question-answering task. Our method yields significant improvements over a discourse-agnostic state-of-the-art compression model (McDonald 2006) .", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 267, |
| "text": "(Grosz et al. 1995)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 287, |
| "end": 310, |
| "text": "(Morris and Hirst 1991)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 508, |
| "end": 533, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1103, |
| "end": 1118, |
| "text": "(McDonald 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Sentence compression has been extensively studied across different modelling paradigms and has received both generative and discriminative formulations. Most generative approaches (Galley and McKeown 2007; Knight and Marcu 2002; Turner and Charniak 2005) are instantiations of the noisy-channel model, whereas discriminative formulations include decision-tree learning (Knight and Marcu 2002) , maximum entropy (Riezler et al. 2003) , support vector machines (Nguyen et al. 2004) , and large-margin learning (McDonald 2006) . These models are trained on a parallel corpus of long source sentences and their target compressions. Using a rich feature set derived from parse trees, the models learn either which constituents to delete or which words to place adjacently in the compression output. Relatively few approaches dispense with the parallel corpus and generate compressions in an unsupervised manner using either a scoring function (Clarke and Lapata 2006a; Hori and Furui 2004) or compression rules that are approximated from a nonparallel corpus such as the Penn Treebank (Turner and Charniak 2005) .", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 205, |
| "text": "(Galley and McKeown 2007;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 206, |
| "end": 228, |
| "text": "Knight and Marcu 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 229, |
| "end": 254, |
| "text": "Turner and Charniak 2005)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 368, |
| "end": 391, |
| "text": "(Knight and Marcu 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 410, |
| "end": 431, |
| "text": "(Riezler et al. 2003)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 458, |
| "end": 478, |
| "text": "(Nguyen et al. 2004)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 507, |
| "end": 522, |
| "text": "(McDonald 2006)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 937, |
| "end": 962, |
| "text": "(Clarke and Lapata 2006a;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 963, |
| "end": 983, |
| "text": "Hori and Furui 2004)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1079, |
| "end": 1105, |
| "text": "(Turner and Charniak 2005)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our work differs from previous approaches in two key respects. First, we present a compression model that is contextually aware; decisions on whether to remove or retain a word (or phrase) are informed by its discourse properties (e.g., whether it introduces a new topic, whether it is semantically related to the previous sentence). Second, we apply our compression model to entire documents rather than isolated sentences. This is more in the spirit of real-world applications where the goal is to generate a condensed and coherent text. Previous work on summarisation has also utilised discourse information (e.g., Barzilay and Elhadad 1997; Daum\u00e9 III and Marcu 2002; Marcu 2000; Teufel and Moens 2002) . However, its application to document compression is novel to our knowledge.", |
| "cite_spans": [ |
| { |
| "start": 618, |
| "end": 644, |
| "text": "Barzilay and Elhadad 1997;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 645, |
| "end": 670, |
| "text": "Daum\u00e9 III and Marcu 2002;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 671, |
| "end": 682, |
| "text": "Marcu 2000;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 683, |
| "end": 705, |
| "text": "Teufel and Moens 2002)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Obtaining an appropriate representation of discourse is the first step towards creating a compression model that exploits contextual information. In this work we focus on the role of local coherence as this is a prerequisite for maintaining global coherence. Ideally, we would like our compressed document to maintain the discourse flow of the original. For this reason, we automatically annotate the source document with discourse-level information which is subsequently used to inform our compression procedure. We first describe our algorithms for obtaining discourse annotations and then present our compression model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Representation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Centering Theory (Grosz et al. 1995) is an entity-orientated theory of local coherence and salience. Although an utterance in discourse may contain several entities, it is assumed that a single entity is salient or \"centered\", thereby representing the current focus. One of the main claims underlying centering is that discourse segments in which successive utterances contain common centers are more coherent than segments where the center repeatedly changes.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 36, |
| "text": "(Grosz et al. 1995)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Each utterance U i in a discourse segment has a list of forward-looking centers, C f (U i ) and a unique backward-looking center, C b (U i ). C f (U i ) represents a ranking of the entities invoked by U i according to their salience. The C b of the current utterance U i , is the highest-ranked element in C f (U i\u22121 ) that is also in U i . The C b thus links U i to the previous discourse, but it does so locally since", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "C b (U i ) is chosen from U i\u22121 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Centering Algorithm So far we have presented centering without explicitly stating how the concepts \"utterance\", \"entities\" and \"ranking\" are instantiated. A great deal of research has been devoted to fleshing these out and many different instantiations have been developed in the literature (see Poesio et al. 2004 for details). Since our aim is to identify centers in discourse automatically, our parameter choice is driven by two considerations: robustness and ease of computation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We therefore follow previous work (e.g., Miltsakaki and Kukich 2000) in assuming that the unit of an utterance is the sentence (i.e., a main clause with accompanying subordinate and adjunct clauses). This is in line with our compression task which also operates over sentences. We determine which entities are invoked by a sentence using two methods. First, we perform named entity identification and coreference resolution on each document using LingPipe 1 , a publicly available system. Named entities and all remaining nouns are added to the C f list. Entity matching between sentences is required to determine the C b of a sentence. This is done using the named entity's unique identifier (as provided by LingPipe) or by the entity's surface form in the case of nouns not classified as named entities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Entities are ranked according to their grammatical roles; subjects are ranked more highly than objects, which are in turn ranked higher than other grammatical roles (Grosz et al. 1995) ; ties are broken using left-to-right ordering of the grammatical roles in the sentence (Tetreault 2001) . We identify grammatical roles with RASP (Briscoe and Carroll 2002) . Formally, our centering algorithm is as follows (where U i corresponds to sentence i):", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 184, |
| "text": "(Grosz et al. 1995)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 273, |
| "end": 289, |
| "text": "(Tetreault 2001)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 332, |
| "end": 358, |
| "text": "(Briscoe and Carroll 2002)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1. Extract entities from U i . 2. Create C f (U i ) by ranking the entities in U i according to their grammatical role (subjects > objects > others).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Centering Theory", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "C f (U i\u22121 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Find the highest ranked entity in", |
| "sec_num": "3." |
| }, |
| { |
| "text": "which occurs in C f (U i ), set the entity to be C b (U i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Find the highest ranked entity in", |
| "sec_num": "3." |
| }, |
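The three steps above can be sketched as follows. This is an illustrative reimplementation under strong simplifications: entity extraction, coreference resolution and grammatical-role tagging (done with LingPipe and RASP in the paper) are assumed to have run already, so each utterance arrives as a list of (entity, role) pairs; the example entities and roles are invented.

```python
# Roles ranked subjects > objects > others, per Grosz et al. (1995).
ROLE_RANK = {"subj": 0, "obj": 1, "other": 2}

def forward_centers(utterance):
    """C_f(U_i): entities ranked by grammatical role, ties broken by
    left-to-right order in the sentence (Tetreault 2001)."""
    ranked = sorted(enumerate(utterance),
                    key=lambda t: (ROLE_RANK[t[1][1]], t[0]))
    return [entity for _, (entity, _) in ranked]

def backward_center(cf_prev, cf_curr):
    """C_b(U_i): the highest-ranked entity of C_f(U_{i-1}) that is also
    realised in U_i; None on an abrupt topic change."""
    realised = set(cf_curr)
    for entity in cf_prev:
        if entity in realised:
            return entity
    return None

u1 = [("volcano", "subj"), ("lava", "obj")]
u2 = [("lava", "subj"), ("village", "obj")]
print(backward_center(forward_centers(u1), forward_centers(u2)))  # lava
```

Here "volcano" outranks "lava" in C_f(U_1), but only "lava" recurs in U_2, so it becomes C_b(U_2); if no entity recurred, the function would return None, the no-C_b case discussed next.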
| { |
| "text": "The above procedure involves several automatic steps (named entity recognition, coreference resolution, identification of grammatical roles) and will unavoidably produce some noisy annotations. So, there is no guarantee that the right C b will be identified or that all sentences will be marked with a C b . The latter situation also occurs in passages that contain abrupt changes in topic. In such cases, none of the entities realised in U i will occur in C f (U i\u22121 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Find the highest ranked entity in", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Rather than accept that discourse information may be absent in a sentence, we turn to lexical chains as an alternative means of capturing topical content within a document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Find the highest ranked entity in", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Lexical cohesion refers to the degree of semantic relatedness observed among lexical items in a document. The term was coined by Halliday and Hasan (1976) who observed that coherent documents tend to have more related terms or phrases than incoherent ones. A number of linguistic devices can be used to signal cohesion; these range from repetition to synonymy, hyponymy and meronymy. Lexical chains are a representation of lexical cohesion as sequences of semantically related words (Morris and Hirst 1991) and provide a useful means for describing the topic flow in discourse. For instance, a document with many different lexical chains will probably contain several topics, and its main topics will tend to be represented by dense and long chains. Words participating in such chains are important for our compression task -they reveal what the document is about -and in all likelihood should not be deleted. Barzilay and Elhadad (1997) describe a technique for text summarisation based on lexical chains. Their algorithm uses WordNet to build chains of nouns (and noun compounds). These are ranked heuristically by a score based on their length and homogeneity. A summary is then produced by extracting sentences corresponding to strong chains, i.e., chains whose score is two standard deviations above the average score.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 154, |
| "text": "Halliday and Hasan (1976)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 484, |
| "end": 507, |
| "text": "(Morris and Hirst 1991)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 907, |
| "end": 934, |
| "text": "Barzilay and Elhadad (1997)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Chains", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Like Barzilay and Elhadad (1997) , we wish to determine which lexical chains indicate the most prevalent discourse topics. Our assumption is that terms belonging to these chains are indicative of the document's main focus and should therefore be retained in the compressed output. Barzilay and Elhadad's scoring function aims to identify sentences (for inclusion in a summary) that have a high concentration of chain members. In contrast, we are interested in chains that span several sentences. We thus score chains according to the number of sentences their terms occur in. For example, the chain {house 3 , home 3 , loft 3 , house 5 } (where word i denotes word occurring in sentence i) would be given a score of two as the terms only occur in two sentences. We assume that a chain signals a prevalent discourse topic if it occurs throughout more sentences than the average chain. The scoring algorithm is outlined more formally below:", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 32, |
| "text": "Barzilay and Elhadad (1997)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Chains Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Compute the lexical chains for the document. 2. Score(Chain) = Sentences(Chain). 3. Discard chains if Score(Chain) < Avg(Score). 4. Mark terms from the remaining chains as being the focus of the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Chains Algorithm", |
| "sec_num": null |
| }, |
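The scoring steps above can be sketched as follows (an illustration, not the authors' code). Chain construction itself (Galley and McKeown 2003) is assumed done; each chain arrives as a list of (term, sentence_number) pairs, as in the {house_3, home_3, loft_3, house_5} example.

```python
def score(chain):
    """Score(Chain) = number of distinct sentences the chain occurs in."""
    return len({sent for _, sent in chain})

def focus_terms(chains):
    """Discard chains scoring below the average; terms of the surviving
    chains mark the document's focus."""
    scores = [score(c) for c in chains]
    avg = sum(scores) / len(scores)
    return {term for chain, s in zip(chains, scores) if s >= avg
            for term, _ in chain}

chains = [
    [("house", 3), ("home", 3), ("loft", 3), ("house", 5)],  # spans 2 sentences
    [("lava", 1), ("lava", 2), ("debris", 3), ("rock", 4)],  # spans 4 sentences
]
print(score(chains[0]), focus_terms(chains))
```

With these two chains the average score is 3, so only the second chain survives and its terms are marked as the document's focus.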
| { |
| "text": "We use the method of Galley and McKeown (2003) to compute lexical chains for each document. 2 This is an improved version of Barzilay and Elhadad's (1997) original algorithm. Before compression takes place, all documents are pre-processed using the centering and lexical chain algorithms described above. In each sentence we mark the center C b (U i ) if one exists. Words (or phrases) that are present in the current sentence and function as the center in the next sentence C b (U i+1 ) are also flagged. Finally, words are marked if they are part of a prevalent chain. An example of our discourse annotation is given in Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 46, |
| "text": "Galley and McKeown (2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 125, |
| "end": 154, |
| "text": "Barzilay and Elhadad's (1997)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 622, |
| "end": 630, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical Chains Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "Our model is an extension of the approach put forward in Clarke and Lapata (2006a) . Their work tackles sentence compression as an optimisation problem. Given a long sentence, a compression is formed by retaining the words that maximise a scoring function. The latter is essentially a language model coupled with a few constraints ensuring that the resulting output is grammatical. The language model and the constraints are encoded as linear inequalities whose solution is found using Integer Linear Programming (ILP, Vanderbei 2001; Winston and Venkataramanan 2003) . We selected this model for several reasons. First, it does not require a parallel corpus and thus can be ported across domains and text genres, whilst delivering state-of-the-art results (see Clarke and Lapata 2006a for details). Second, discourse-level information can be easily incorporated by augmenting the constraint set. This is not the case for other approaches (e.g., those based on the noisy channel model) where compression is modelled by grammar rules indicating which constituents to delete in a syntactic context. Third, the ILP framework delivers a globally optimal solution by searching over the entire compression space 3 without employing heuristics or approximations during decoding.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 82, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 515, |
| "end": 536, |
| "text": "(ILP, Vanderbei 2001;", |
| "ref_id": null |
| }, |
| { |
| "start": 537, |
| "end": 569, |
| "text": "Winston and Venkataramanan 2003)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We begin by recapping the formulation of Clarke and Lapata (2006a) . Let W = w 1 , w 2 , . . . , w n denote a sentence for which we wish to generate a compression. A set of binary decision variables represent whether each word w i should be included in the compression or not. Let:", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 66, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "y i = 1 if w i is in the compression 0 otherwise \u2200i \u2208 [1 . . . n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A trigram language model forms the backbone of the compression model. The language model is formulated as an integer program with the introduction of extra decision variables indicating which word sequences should be retained or dropped from the compression. Let:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "p i = 1 if w i starts the compression, 0 otherwise, \u2200i \u2208 [1 . . . n]; q i j = 1 if the sequence w i , w j ends the compression, 0 otherwise, \u2200i \u2208 [1 . . . n \u2212 1], \u2200 j \u2208 [i + 1 . . . n]; x i jk = 1 if the sequence w i , w j , w k is in the compression, 0 otherwise, \u2200i \u2208 [1 . . . n \u2212 2], \u2200 j \u2208 [i + 1 . . . n \u2212 1], \u2200k \u2208 [ j + 1 . . . n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The objective function is expressed in Equation (1). It is the sum of all possible trigrams multiplied by the appropriate decision variable. The objective function also includes a significance score for each word multiplied by the decision variable for that word (see the last summation term in (1)). This score highlights important content words in a sentence and is defined in Section 4.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
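The objective in Equation (1) can be illustrated without an ILP solver (a toy sketch, not the paper's implementation): for a short sentence one can enumerate every assignment to the y variables directly, the p, q and x trigram variables then being determined by which words survive. The log-probabilities and significance scores below are invented.

```python
from itertools import product

LM = {  # toy trigram log-probabilities; unseen trigrams get a floor
    ("<s>", "<s>", "mud"): -1.0,
    ("<s>", "mud", "flowed"): -1.0,
    ("mud", "flowed", "</s>"): -1.0,
}
SIGNIFICANCE = {"mud": 2.0, "flowed": 2.0}  # I(w_i), invented

def objective(kept):
    """Trigram log-probability of the surviving sequence (with start and
    end padding, mirroring the p and q terms) plus significance scores."""
    padded = ["<s>", "<s>"] + kept + ["</s>"]
    lm = sum(LM.get(tuple(padded[i:i + 3]), -5.0)
             for i in range(len(padded) - 2))
    return lm + sum(SIGNIFICANCE.get(w, 0.0) for w in kept)

def compress(words):
    """Exhaustively search the 2^n assignments; ILP finds the same
    optimum without enumeration."""
    best = max((tuple(w for w, keep in zip(words, mask) if keep)
                for mask in product([0, 1], repeat=len(words))),
               key=lambda kept: objective(list(kept)))
    return list(best)

print(compress(["the", "mud", "slowly", "flowed"]))  # ['mud', 'flowed']
```

The brute-force search is exponential in sentence length; the point of the ILP encoding is exactly to obtain this global optimum efficiently, subject to the sequential constraints that only valid trigrams combine.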
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "max z = n \u2211 i=1 p i \u2022 P(w i |start) + n\u22122 \u2211 i=1 n\u22121 \u2211 j=i+1 n \u2211 k= j+1 x i jk \u2022 P(w k |w i , w j ) + n\u22121 \u2211 i=0 n \u2211 j=i+1 q i j \u2022 P(end|w i , w j ) + n \u2211 i=1 y i \u2022 I(w i )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "subject to:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y i , p i , q i j , x i jk = 0 or 1", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A set of sequential constraints 4 are added to the problem to only allow results which combine valid trigrams.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Compression Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The significance score is an attempt at capturing the gist of a sentence. It gives more weight to content words that appear in the deepest level of embedding in the syntactic tree representing the source sentence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Significance Score", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "I(w i ) = l N \u2022 f i log F a F i", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Significance Score", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The score is computed over a large corpus where w i is a content word (i.e., a noun or verb), f i and F i are the frequencies of w i in the document and corpus respectively, and F a is the sum of all content words in the corpus. l is the number of clause constituents above w i , and N is the deepest level of embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Significance Score", |
| "sec_num": "4.1" |
| }, |
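Equation (3) spelled out as a function (a sketch; the counts in the example call are invented): I(w_i) = (l / N) * f_i * log(F_a / F_i).

```python
import math

def significance(doc_freq, corpus_freq, corpus_content_total,
                 depth, max_depth):
    """doc_freq (f_i): frequency of w_i in the document;
    corpus_freq (F_i): frequency of w_i in a large corpus;
    corpus_content_total (F_a): total content-word count of the corpus;
    depth (l): number of clause constituents above w_i;
    max_depth (N): deepest level of embedding in the sentence."""
    return (depth / max_depth) * doc_freq * math.log(
        corpus_content_total / corpus_freq)

# A document-frequent, corpus-rare, deeply embedded content word
# scores highest.
print(significance(doc_freq=5, corpus_freq=100,
                   corpus_content_total=1_000_000, depth=3, max_depth=3))
```

The tf-idf-like factor f_i * log(F_a / F_i) rewards words frequent in this document but rare in general text, while the l / N factor rewards deep syntactic embedding.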
| { |
| "text": "The model also contains a small number of sentence-level constraints. Their aim is to preserve the meaning and structure of the original sentence as much as possible. The majority of constraints revolve around modification and argument structure and are defined over parse trees or grammatical relations. For example, the following constraint template disallows the inclusion of modifiers (e.g., nouns, adjectives) without their head words:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "y i \u2212 y j \u2265 0 (4) \u2200i, j : w j modifies w i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Other constraints force the presence of modifiers when the head is retained in the compression. This way, it is ensured that negation will be preserved in the compressed output:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "y i \u2212 y j = 0 (5) \u2200i, j : w j modifies w i \u2227 w j = not", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Argument structure constraints make sure that the resulting compression has a canonical argument structure. For instance a constraint ensures that if a verb is present in the compression then so are its arguments:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "y i \u2212 y j = 0 (6) \u2200i, j : w j \u2208 subject/object of verb w i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, Clarke and Lapata (2006a) propose one discourse constraint which forces the system to preserve personal pronouns in the compressed output:", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 34, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "y i = 1 (7) \u2200i : w i \u2208 personal pronouns", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentential Constraints", |
| "sec_num": "4.2" |
| }, |
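The sentential constraints (4)-(7) can be illustrated with a brute-force sketch. The paper solves an ILP with an off-the-shelf solver; here we simply enumerate all 2^n compressions of a toy sentence. The dependency annotations and relevance scores below are invented for illustration:

```python
from itertools import product

# Toy sentence with hypothetical dependency annotations and relevance scores.
words = ["he", "did", "not", "eat", "the", "very", "old", "cake"]
score = [1.0, 0.2, 0.1, 2.0, 0.1, 0.3, 0.8, 1.5]
modifies = [(3, 2), (3, 1), (7, 6), (6, 5), (7, 4)]  # (head i, modifier j)
verb_args = [(3, 0), (3, 7)]  # (verb i, subject/object j)
pronouns = {0}                # indices of personal pronouns

def feasible(y):
    # (4) a modifier may only be kept together with its head: y_i - y_j >= 0
    if any(y[i] < y[j] for i, j in modifies):
        return False
    # (5) negation is tied to its head: y_i = y_j whenever w_j is "not"
    if any(y[i] != y[j] for i, j in modifies if words[j] == "not"):
        return False
    # (6) a verb and its subject/object stand or fall together: y_i = y_j
    if any(y[i] != y[j] for i, j in verb_args):
        return False
    # (7) personal pronouns must be retained: y_i = 1
    return all(y[i] == 1 for i in pronouns)

# Objective: keep high-scoring words, with a small per-word length penalty.
best = max((y for y in product((0, 1), repeat=len(words)) if feasible(y)),
           key=lambda y: sum(s * yi for s, yi in zip(score, y)) - 0.4 * sum(y))
print([w for w, yi in zip(words, best) if yi])
```

Note how the negation constraint keeps "not" whenever "eat" survives, even though "not" scores poorly on its own; this is exactly the behaviour constraint (5) is designed to enforce.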
| { |
| "text": "In addition to the constraints described above, our model includes constraints relating to the centering and lexical chains representations discussed in Section 3. Recall that after some pre-processing, each sentence is marked with: its own center C b (U i ), the center C b (U i+1 ) of the sentence following it and words that are members of high scoring chains corresponding to the document's focus. We introduce two new types of constraints based on these additional knowledge sources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The first constraint is the centering constraint which operates over adjacent sentences. It ensures that the C b identified in the source sentence is retained in the target compression. If present, the entity realised as the C b in the following sentence is also retained:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "y i = 1 (8) \u2200i : w i \u2208 {C b (U i ),C b (U i+1 )}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Consider for example the discourse in Figure 1 . The constraints generated from Equation (8) will require the compression to retain lava in the first two sentences and debris in sentences two and three.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 46, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also add a lexical chain constraint. This applies only to nouns which are members of prevalent chains:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "y i = 1 (9) \u2200i : w i \u2208 document focus lexical chain", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "This constraint is complementary to the centering constraint; the sentences it applies to do not have to be adjacent and the entities under consideration are not restricted to a specific syntactic role (e.g., subject or object). See for instance the words flow and rate in Figure 1 which are members of the same chain (marked with subscript one). According to constraint (9) both words must be included in the compressed document.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 273, |
| "end": 281, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
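To make the two discourse constraints concrete, here is a sketch of how the variables forced to one would be collected for each sentence. The sentences, centers, and chain are invented, loosely modelled on the Figure 1 example (lava is assumed to be C_b of the second sentence and debris C_b of the third):

```python
# Invented annotations loosely following the Figure 1 example:
# centers[i] is C_b(U_i) (None when the sentence has no backward-looking
# center), and focus_chain holds members of a high-scoring lexical chain.
sentences = [
    ["bad", "weather", "dashed", "hopes", "to", "halt", "the", "lava", "flow"],
    ["the", "pressure", "of", "lava", "would", "bring", "debris", "cascading"],
    ["the", "volcano", "is", "pouring", "tons", "of", "debris"],
]
centers = [None, "lava", "debris"]
focus_chain = {"flow", "rate"}

def forced_words(i):
    """Words whose ILP variables are fixed to 1 by constraints (8) and (9)."""
    keep = set()
    if centers[i]:                       # retain this sentence's C_b(U_i)
        keep.add(centers[i])
    if i + 1 < len(centers) and centers[i + 1]:
        keep.add(centers[i + 1])         # ... and C_b(U_{i+1}) if realised here
    keep |= focus_chain                  # chain members must always survive
    return sorted(keep & set(sentences[i]))  # only words actually present

for i in range(len(sentences)):
    print(i, forced_words(i))
```

With these toy annotations, lava is forced in the first two sentences and debris in sentences two and three, mirroring the behaviour described for Figure 1; the chain word flow is additionally forced wherever it occurs, regardless of adjacency or syntactic role.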
| { |
| "text": "The constraints just described ensure that the compressed document will retain the discourse flow of the original and will preserve terms indicative of important topics. We argue that these constraints will additionally benefit sentence-level compression, as words which are not signalled as discourse relevant can be dropped.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Constraints", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Our compression system is given a (sentence separated) document as input. The ILP model just presented is then applied sequentially to all sentences to generate a compressed version of the original. We thus create and solve an ILP for every sentence. 5 In the formulation of Clarke and Lapata (2006a) a significance score (see Section 4.1) highlights which nouns and verbs to include in the compression. As far as nouns are concerned, our discourse constraints perform a similar task. Thus, when a sentence contains discourse annotations, we are inclined to trust them more and only calculate the significance score for verbs.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 252, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 275, |
| "end": 300, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applying the Constraints", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "During development it was observed that applying all discourse constraints simultaneously (see Equations (7)-(9)) results in relatively long compressions. To counter this, we employ these constraints using a back-off strategy that relies on progressively less reliable information. Our back-off model works as follows: if centering information is present, we apply the appropriate constraints (Equation (8)). If no centers are present, we back-off to the lexical chain information using Equation 9, and in the absence of the latter we back-off to the pronoun constraint (Equation (7)). Finally, if discourse information is entirely absent from the sentence, we default to the significance score. Sentential constraints (see Section 4.2) are applied throughout irrespectively of discourse constraints. In our test data (see Section 5 for details), the centering constraint was used in 68.6% of the sentences. The model backed off to lexical chains for 13.7% of the test sentences, whereas the pronoun constraint was applied in 8.5%. Finally, the noun and verb significance score was used on the remaining 9.2%. An example of our system's output for the text in Figure 1 is given in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1160, |
| "end": 1168, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1181, |
| "end": 1189, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Applying the Constraints", |
| "sec_num": "4.4" |
| }, |
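The back-off order can be summarised in a few lines. The annotation field names below are hypothetical, not the paper's actual data structures:

```python
def choose_constraint(sent):
    """Return which constraint family fires for a sentence, following the
    back-off order: centering > lexical chains > pronouns > significance."""
    if sent.get("centers"):       # Equation (8): centering constraint
        return "centering"
    if sent.get("chain_words"):   # Equation (9): lexical chain constraint
        return "lexical_chain"
    if sent.get("pronouns"):      # Equation (7): pronoun constraint
        return "pronoun"
    return "significance"         # default: noun/verb significance score

print(choose_constraint({"centers": ["lava"], "pronouns": ["he"]}))
print(choose_constraint({"chain_words": ["flow"]}))
print(choose_constraint({}))
```

Each sentence thus receives exactly one family of discourse constraints (or the significance score), while the sentential constraints of Section 4.2 always apply.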
| { |
| "text": "In this section we present our experimental set-up. We briefly introduce the model used for comparison with our approach and give details regarding our compression corpus and parameter estimation. Finally, we describe our evaluation methodology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Bad weather dashed hopes to halt the flow during what was seen as lull in lava's momentum. Experts say that even if eruption stopped, the pressure of lava piled would bring debris cascading. Some estimate volcano is pouring million tons of debris from fissure opened in mid-December. The Army yesterday detonated 400lb of dynamite. Comparison with state-of-the-art An obvious evaluation experiment would involve comparing the ILP model without any discourse constraints against the discourse informed model presented in this work. Unfortunately, the two models obtain markedly different compression rates 6 which renders the comparison of their outputs problematic. To put the comparison on an equal footing, we evaluated our approach against a state-of-the-art model that achieves a compression rate similar to ours without taking discourse-level information into account. McDonald (2006) formalises sentence compression in a discriminative large-margin learning framework as a classification task: pairs of words from the source sentence are classified as being adjacent or not in the target compression. A large number of features are defined over words, parts of speech, phrase structure trees and dependencies. These are gathered over adjacent words in the compression and the words in-between which were dropped.", |
| "cite_spans": [ |
| { |
| "start": 874, |
| "end": 889, |
| "text": "McDonald (2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "It is important to note that McDonald (2006) is not a straw-man system. It achieves highly competitive performance compared with Knight and Marcu's (2002) noisy channel and decision tree models. Due to its discriminative nature, the model is able to use a large feature set and to optimise compression accuracy directly. In other words, McDonald's model has a head start against our own model which does not utilise a parallel corpus and has only a few constraints. The comparison of the two systems allows us to investigate whether discourse information is redundant when using a powerful sentence compression model.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 154, |
| "text": "Knight and Marcu's (2002)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Corpus Previous work on sentence compression has used almost exclusively the Ziff-Davis, a compression corpus derived automatically from document-abstract pairs (Knight and Marcu 2002) . Unfortunately, this corpus is not suitable for our purposes since it consists of isolated sentences. We thus created a document-based compression corpus manually. Following Clarke and Lapata (2006a) , we asked annotators to produce compressions for 82 stories (1,629 sentences) from the BNC and the LA Times Washington Post. 7 48 documents (962 sentences) were used for training, 3 for development (63 sentences), and 31 for testing (604 sentences).", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 184, |
| "text": "(Knight and Marcu 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 360, |
| "end": 385, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our parameters for the ILP model followed closely Clarke and Lapata (2006a) . We used a language model trained on 25 million tokens from the North American News corpus. The significance score was based on 25 million tokens from the same corpus. Our reimplementation of McDonald (2006) used an identical feature set, and a slightly modified loss function to encourage compression on our data set. 8", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 75, |
| "text": "Clarke and Lapata (2006a)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 269, |
| "end": 284, |
| "text": "McDonald (2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluation Previous studies evaluate how wellformed the automatically derived compressions are out of context. The target sentences are typically rated by naive subjects on two dimensions, grammaticality and importance (Knight and Marcu 2002) . Automatic evaluation measures have also been proposed. Riezler et al. (2003) compare the grammatical relations found in the system output against those found in a gold standard using F-score which Clarke and Lapata (2006b) show correlates reliably with human judgements.", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 242, |
| "text": "(Knight and Marcu 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 300, |
| "end": 321, |
| "text": "Riezler et al. (2003)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 442, |
| "end": 467, |
| "text": "Clarke and Lapata (2006b)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "Following previous work, sentence-based compressions were evaluated automatically using Fscore computed over grammatical relations which we obtained by RASP (Briscoe and Carroll 2002) . Besides individual sentences, our goal was to evaluate the compressed document as whole. Our evaluation methodology was motivated by two questions:", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 183, |
| "text": "(Briscoe and Carroll 2002)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
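As a rough illustration of this automatic measure, F-score can be computed over sets of grammatical-relation tuples. The relations below are invented for illustration, not actual RASP output:

```python
def relation_f1(system, gold):
    """F1 over grammatical-relation tuples shared by system and gold."""
    sys_rels, gold_rels = set(system), set(gold)
    if not sys_rels or not gold_rels:
        return 0.0
    tp = len(sys_rels & gold_rels)          # relations found in both sets
    precision = tp / len(sys_rels)
    recall = tp / len(gold_rels)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [("ncsubj", "halt", "weather"), ("dobj", "halt", "flow"),
        ("det", "flow", "the")]
system = [("ncsubj", "halt", "weather"), ("dobj", "halt", "flow")]
print(relation_f1(system, gold))  # precision 1.0, recall 2/3
```

Relation tuples abstract away from word order, which makes the measure more forgiving of legitimate reorderings than token-overlap metrics.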
| { |
| "text": "(1) are the documents readable? and (2) how much key information is preserved between the source document and its target compression? We assume here that the compressed document is to function as a replacement for the original. We can thus measure the extent to which the compressed version can be What is posing a threat to the town? (lava) What hindered attempts to stop the lava flow? (bad weather) What did the Army do first to stop the lava flow? (detonate explosives) Figure 3 : Example questions with answer key. used to find answers for questions which are derived from the original and represent its core content.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 474, |
| "end": 482, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "We therefore employed a question-answering evaluation paradigm which has been previously used for summarisation evaluation and text comprehension (Mani et al. 2002; Morris et al. 1992) . The overall objective of our Q&A task is to determine how accurate each document (generated by different compression systems) is at answering questions. For this we require a methodology for constructing Q&A pairs and for scoring each document.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 164, |
| "text": "(Mani et al. 2002;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 165, |
| "end": 184, |
| "text": "Morris et al. 1992)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "Two annotators were independently instructed to create Q&A pairs for the original documents in the test set. Each annotator read the document and then drafted no more than ten questions and answers related to its content. Annotators were asked to create factual-based questions which required an unambiguous answer; these were typically who/what/where/when/how style questions. Annotators then compared and revised their questionanswer pairs to create a common agreed upon set. Revisions typically involved merging questions, rewording and simplifying questions, and in some cases splitting a question into multiple questions. Documents for which too few questions were created or for which questions or answers were too ambiguous were removed. This left an evaluation set of six documents with between five to eight concise questions per document. Some example questions corresponding to the document from Figure 1 are given in Figure 3 ; correct answers are shown in parentheses.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 907, |
| "end": 915, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 929, |
| "end": 937, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "Compressed documents and their accompanying questions were presented to human subjects who were asked to provide answers as best they could. We elicited answers for six documents in three compression conditions: gold standard, using the ILP discourse model, and McDonald's (2006) from seeing two different compressions of the same document.", |
| "cite_spans": [ |
| { |
| "start": 262, |
| "end": 279, |
| "text": "McDonald's (2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "The study was conducted remotely over the Internet. Participants were presented with a set of instructions that explained the Q&A task and provided examples. Subjects were first asked to read the compressed document and rate its readability. Questions were then presented one at a time and participants were allowed to consult the document for the answer. Once a participant had provided an answer they were not allowed to modify it. Thirty unpaid volunteers took part in our Q&A study. All were self reported native English speakers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
| { |
| "text": "The answers provided by the participants were scored against the answer key. Answers were considered correct if they were identical to the answer key or subsumed by it. For instance, Mount Etna was considered a right answer to the first question from Figure 3 . A compressed document receives a full score if subjects have answered all questions relating to it correctly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 259, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": null |
| }, |
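A crude sketch of this scoring rule follows. The actual judging was manual; case-insensitive substring containment only approximates "subsumed by" (a semantically subsumed answer like Mount Etna for the key lava would still need a human judge):

```python
def answer_correct(response, key):
    """Correct when identical to the key or subsumed by it (approximated
    here as case-insensitive substring containment)."""
    response, key = response.strip().lower(), key.strip().lower()
    return bool(response) and (response == key or response in key)

def document_score(responses, keys):
    """Fraction of a document's questions answered correctly."""
    return sum(answer_correct(r, k) for r, k in zip(responses, keys)) / len(keys)

print(answer_correct("weather", "bad weather"))   # subsumed by the key
print(document_score(["lava", "weather", "no idea"],
                     ["lava", "bad weather", "detonate explosives"]))
```

A document earns a full score of 1.0 only when every one of its questions is answered correctly.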
| { |
| "text": "As a sanity check, we first assessed the compressions produced by our model and McDonald (2006) on a sentence-by-sentence basis without taking the documents into account. There is no hope for generating shorter documents if the compressed sentences are either too wordy or too ungrammatical. Table 1 shows the compression rates (CompR) for the two systems and evaluates the quality of their output using F-score based on grammatical relations. As can be seen, the Discourse ILP compressions are slightly longer than McDonald (65.4% vs. 60.1%) but closer to the human gold standard (70.3%). This is not surprising, the Discourse ILP model takes the entire document into account, and compression decisions will be slightly more conservative. The Discourse ILP's output is significantly better than McDonald in terms of F-score, indicating that discourse-level information is generally helpful. Both systems could use further improvement as inter-annotator agreement on this data yields an F-score of 65.8%.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 292, |
| "end": 299, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
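Compression rate here is read as the share of source tokens retained (so a higher CompR means longer output); a minimal sketch under that assumed definition:

```python
def compression_rate(compressed, source):
    """Percentage of source tokens retained in the compression."""
    return 100.0 * len(compressed) / len(source)

# Hypothetical sentence pair: 8 of 9 tokens survive compression.
source = "bad weather dashed hopes to halt the lava flow".split()
compressed = "bad weather dashed hopes to halt the flow".split()
print(round(compression_rate(compressed, source), 1))
```

Under this reading, the Discourse ILP's 65.4% versus McDonald's 60.1% means the discourse model keeps roughly five more tokens out of every hundred.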
| { |
| "text": "Let us now consider the results of our documentbased evaluation. Table 2 shows the mean readability ratings obtained for each system and the percentage of questions answered correctly. We used an Analysis of Variance (ANOVA) to examine the effect of compression type (McDonald, Discourse ILP, Gold Standard). The ANOVA revealed a reliable effect on both readability and Q&A. Post-hoc Tukey tests showed that McDonald and the Discourse ILP model do not differ significantly in terms of readability. However, they are significantly less readable than the gold standard (\u03b1 < 0.05). For the Q&A task we observe that our system is significantly better than McDonald (\u03b1 < 0.05) and not significantly worse than the gold standard.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "These results indicate that the automatic systems lag behind the human gold standard in terms of readability. When reading entire documents, subjects are less tolerant of ungrammatical constructions. We also find out that despite relatively low readability, the documents are overall understandable. The discourse informed model generates more informative documents -the number of questions answered correctly increases by 15% in comparison to McDonald. This is an encouraging result suggesting that there may be advantages in developing compression models that exploit contextual information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper we proposed a novel method for automatic sentence compression. Central in our approach is the use of discourse-level information which we argue is an important prerequisite for document (as opposed to sentence) compression. Our model uses integer programming for inferring globally optimal compressions in the presence of lin-guistically motivated constraints. Our discourse constraints aim to capture local coherence and are inspired by centering theory and lexical chains. We showed that our model can be successfully employed to produce compressed documents that preserve most of the original's core content.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our approach to document compression differs from most summarisation work in that our summaries are fairly long. However, we believe this is the first step into understanding how compression can help summarisation. In the future, we will interface our compression model with sentence extraction. The discourse annotations can help guide the extraction method into selecting topically related sentences which can consequently be compressed together. The compression rate can be tailored through additional constraints which act on the output length to ensure precise word limits are obeyed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We also plan to study the effect of global discourse structure (Daum\u00e9 III and Marcu 2002) on the compression task. In general, we will assess the impact of discourse information more systematically by incorporating it into generative and discriminative modelling paradigms.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 89, |
| "text": "(Daum\u00e9 III and Marcu 2002)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "LingPipe can be downloaded from http://www. alias-i.com/lingpipe/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The software is available from http://www1.cs. columbia.edu/\u02dcgalley/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For a sentence of length n, there are 2 n compressions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have omitted sequential constraints due to space limitations. The full details are given inClarke and Lapata (2006a).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the publicly available lp solve solver (http:// www.geocities.com/lpsolve/).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The discourse agnostic ILP model has a compression rate of 81.2%; when discourse constraints are include the rate drops to 65.4%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The corpus is available from http://homepages.inf. ed.ac.uk/s0460084/data/.8McDonald's (2006) results are reported on the Ziff-Davis corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are grateful to Ryan Mc-Donald for his help with the re-implementation of his system and our annotators Vasilis Karaiskos and Sarah Luger. Thanks to Simone Teufel, Alex Lascarides, Sebastian Riedel, and Bonnie Webber for insightful comments and suggestions. Lapata acknowledges the support of EPSRC (grant GR/T04540/01).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Using lexical chains for text summarization", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Intelligent Scalable Text Summarization Workshop (ISTS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barzilay, R. and M. Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the Intelligent Scalable Text Summarization Work- shop (ISTS), ACL-97.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Robust accurate statistical annotation of general text", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "J" |
| ], |
| "last": "Briscoe", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 3rd International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Briscoe, E. J. and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceed- ings of the 3rd International Conference on Lan- guage Resources and Evaluation (LREC-2002).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Constraint-based sentence compression: An integer programming approach", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions", |
| "volume": "", |
| "issue": "", |
| "pages": "144--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clarke, James and Mirella Lapata. 2006a. Constraint-based sentence compression: An integer programming approach. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Sydney, Australia, pages 144-151.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Models for sentence compression: A comparison across domains, training requirements and evaluation measures", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "377--384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clarke, James and Mirella Lapata. 2006b. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proceedings of the 21st Inter- national Conference on Computational Linguis- tics and 44th Annual Meeting of the Association for Computational Linguistics. Sydney, Australia, pages 377-384.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Text Compaction for Display on Very Small Screens", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Corston-Oliver", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the NAACL Workshop on Automatic Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "89--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Corston-Oliver, Simon. 2001. Text Compaction for Display on Very Small Screens. In Proceedings of the NAACL Workshop on Automatic Summariza- tion. Pittsburgh, PA, pages 89-98.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A noisychannel model for document compression", |
| "authors": [ |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "449--456", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daum\u00e9 III, Hal and Daniel Marcu. 2002. A noisy- channel model for document compression. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002). Philadelphia, PA, pages 449-456.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Summarising Information", |
| "authors": [ |
| { |
| "first": "Brigitte", |
| "middle": [], |
| "last": "Endres-Niggemeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Endres-Niggemeyer, Brigitte. 1998. Summarising Information. Springer, Berlin.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Improving word sense disambiguation in lexical chaining", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of 18th International Joint Conference on Artificial Intelligence (IJCAI-03)", |
| "volume": "", |
| "issue": "", |
| "pages": "1486--1488", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Galley, Michel and Kathleen McKeown. 2003. Improving word sense disambiguation in lexi- cal chaining. In Proceedings of 18th Interna- tional Joint Conference on Artificial Intelligence (IJCAI-03). pages 1486-1488.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Lexicalized markov grammars for sentence compression", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT-2007)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Galley, Michel and Kathleen McKeown. 2007. Lex- icalized markov grammars for sentence compres- sion. In In Proceedings of the North Ameri- can Chapter of the Association for Computational Linguistics (NAACL-HLT-2007). Rochester, NY.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Producing Intelligent Telegraphic Text Reduction to Provide an Audio Scanning Service for the Blind", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the AAAI Symposium on Intelligent Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "111--117", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grefenstette, Gregory. 1998. Producing Intelligent Telegraphic Text Reduction to Provide an Audio Scanning Service for the Blind. In Proceedings of the AAAI Symposium on Intelligent Text Summa- rization. Stanford, CA, pages 111-117.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Centering: a framework for modeling the local coherence of discourse", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [ |
| "J" |
| ], |
| "last": "Grosz", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Weinstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "2", |
| "pages": "203--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grosz, Barbara J., Scott Weinstein, and Aravind K. Joshi. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics 21(2):203-225.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Cohesion in English", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "A K" |
| ], |
| "last": "Halliday", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruqaiya", |
| "middle": [], |
| "last": "Hasan", |
| "suffix": "" |
| } |
| ], |
| "year": 1976, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Halliday, M. A. K. and Ruqaiya Hasan. 1976. Cohe- sion in English. Longman, London.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Speech summarization: an approach through word extraction and a method for evaluation", |
| "authors": [ |
| { |
| "first": "Chiori", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadaoki", |
| "middle": [], |
| "last": "Furui", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IEICE Transactions on Information and Systems", |
| "volume": "E87-D", |
| "issue": "1", |
| "pages": "15--25", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hori, Chiori and Sadaoki Furui. 2004. Speech sum- marization: an approach through word extraction and a method for evaluation. IEICE Transactions on Information and Systems E87-D(1):15-25.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Sentence reduction for automatic text summarization", |
| "authors": [ |
| { |
| "first": "Hongyan", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 6th conference on Applied Natural Language Processing (ANLP-2000)", |
| "volume": "", |
| "issue": "", |
| "pages": "310--315", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jing, Hongyan. 2000. Sentence reduction for auto- matic text summarization. In Proceedings of the 6th conference on Applied Natural Language Pro- cessing (ANLP-2000). Seattle, WA, pages 310- 315.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Summarization beyond sentence extraction: a probabilistic approach to sentence compression", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Artificial Intelligence", |
| "volume": "139", |
| "issue": "1", |
| "pages": "91--107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Knight, Kevin and Daniel Marcu. 2002. Summa- rization beyond sentence extraction: a probabilis- tic approach to sentence compression. Artificial Intelligence 139(1):91-107.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Improving summarization performance by sentence compression -a pilot study", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 6th International Workshop on Information Retrieval with Asian Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Chin-Yew. 2003. Improving summarization performance by sentence compression -a pilot study. In Proceedings of the 6th International Workshop on Information Retrieval with Asian Languages. Sapporo, Japan, pages 1-8.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "SUMMAC: A text summarization evaluation", |
| "authors": [ |
| { |
| "first": "Inderjeet", |
| "middle": [], |
| "last": "Mani", |
| "suffix": "" |
| }, |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "House", |
| "suffix": "" |
| }, |
| { |
| "first": "Lynette", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| }, |
| { |
| "first": "Therese", |
| "middle": [], |
| "last": "Firmin", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Natural Language Engineering", |
| "volume": "8", |
| "issue": "1", |
| "pages": "43--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mani, Inderjeet, Gary Klein, David House, Lynette Hirschman, Therese Firmin, and Beth Sundheim. 2002. SUMMAC: A text summarization evalua- tion. Natural Language Engineering 8(1):43-68.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The Theory and Practice of Discourse Parsing and Summarization", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcu, Daniel. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Discriminative sentence compression with soft syntactic constraints", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "McDonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 11th EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McDonald, Ryan. 2006. Discriminative sentence compression with soft syntactic constraints. In Proceedings of the 11th EACL. Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The role of centering theory's rough-shift in the teaching and evaluation of writing skills", |
| "authors": [ |
| { |
| "first": "Eleni", |
| "middle": [], |
| "last": "Miltsakaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Kukich", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL-2000)", |
| "volume": "", |
| "issue": "", |
| "pages": "408--415", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miltsakaki, Eleni and Karen Kukich. 2000. The role of centering theory's rough-shift in the teach- ing and evaluation of writing skills. In Proceed- ings of the 38th Annual Meeting of the Associa- tion for Computational Linguistics (ACL-2000). pages 408-415.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The effects and limitations of automated text condensing on reading comprehension performance", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Morris", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Kasper", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Adams", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Information Systems Research", |
| "volume": "3", |
| "issue": "1", |
| "pages": "17--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Morris, A., G. Kasper, and D. Adams. 1992. The effects and limitations of automated text condens- ing on reading comprehension performance. In- formation Systems Research 3(1):17-35.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text", |
| "authors": [ |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Morris", |
| "suffix": "" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Computational Linguistics", |
| "volume": "17", |
| "issue": "1", |
| "pages": "21--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Morris, Jane and Graeme Hirst. 1991. Lexical cohe- sion computed by thesaural relations as an indi- cator of the structure of text. Computational Lin- guistics 17(1):21-48.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Probabilistic sentence reduction using support vector machines", |
| "authors": [ |
| { |
| "first": "Minh", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Akira", |
| "middle": [], |
| "last": "Shimazu", |
| "suffix": "" |
| }, |
| { |
| "first": "Susumu", |
| "middle": [], |
| "last": "Horiguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tu", |
| "middle": [ |
| "Bao" |
| ], |
| "last": "Ho", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaru", |
| "middle": [], |
| "last": "Fukushi", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 20th COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "743--749", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nguyen, Minh Le, Akira Shimazu, Susumu Horiguchi, Tu Bao Ho, and Masaru Fukushi. 2004. Probabilistic sentence reduction using support vector machines. In Proceedings of the 20th COLING. Geneva, Switzerland, pages 743-749.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Centering: a parametric theory and its instantiations", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Rosemary", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Di Eugenio", |
| "suffix": "" |
| }, |
| { |
| "first": "Janet", |
| "middle": [], |
| "last": "Hitzeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "3", |
| "pages": "309--363", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Poesio, Massimo, Rosemary Stevenson, Barbara Di Eugenio, and Janet Hitzeman. 2004. Centering: a parametric theory and its instantiations. Compu- tational Linguistics 30(3):309-363.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "Tracy", |
| "middle": [ |
| "H" |
| ], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Crouch", |
| "suffix": "" |
| }, |
| { |
| "first": "Annie", |
| "middle": [], |
| "last": "Zaenen", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the HLT/NAACL. Edmonton", |
| "volume": "", |
| "issue": "", |
| "pages": "118--125", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riezler, Stefan, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence con- densation using ambiguity packing and stochas- tic disambiguation methods for lexical-functional grammar. In Proceedings of the HLT/NAACL. Ed- monton, Canada, pages 118-125.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A corpus-based evaluation of centering and pronoun resolution", |
| "authors": [ |
| { |
| "first": "Joel", |
| "middle": [ |
| "R" |
| ], |
| "last": "Tetreault", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "4", |
| "pages": "507--520", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tetreault, Joel R. 2001. A corpus-based evaluation of centering and pronoun resolution. Computa- tional Linguistics 27(4):507-520.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Summarizing scientific articles -experiments with relevance and rhetorical status", |
| "authors": [ |
| { |
| "first": "Simone", |
| "middle": [], |
| "last": "Teufel", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics", |
| "volume": "28", |
| "issue": "4", |
| "pages": "409--446", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Teufel, Simone and Marc Moens. 2002. Summa- rizing scientific articles -experiments with rele- vance and rhetorical status. Computational Lin- guistics 28(4):409-446.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Supervised and unsupervised learning for sentence compression", |
| "authors": [ |
| { |
| "first": "Jenine", |
| "middle": [], |
| "last": "Turner", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "290--297", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Turner, Jenine and Eugene Charniak. 2005. Su- pervised and unsupervised learning for sentence compression. In Proceedings of the 43rd ACL. Ann Arbor, MI, pages 290-297.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Sentence compression for automated subtitling: A hybrid approach", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Vandeghinste", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Pan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ACL Workshop on Text Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "89--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vandeghinste, Vincent and Yi Pan. 2004. Sentence compression for automated subtitling: A hybrid approach. In Proceedings of the ACL Workshop on Text Summarization. Barcelona, Spain, pages 89-95.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Linear Programming: Foundations and Extensions", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "J" |
| ], |
| "last": "Vanderbei", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vanderbei, Robert J. 2001. Linear Programming: Foundations and Extensions. Kluwer Academic Publishers, Boston, 2nd edition.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Introduction to Mathematical Programming", |
| "authors": [ |
| { |
| "first": "Wayne", |
| "middle": [ |
| "L" |
| ], |
| "last": "Winston", |
| "suffix": "" |
| }, |
| { |
| "first": "Munirpallam", |
| "middle": [], |
| "last": "Venkataramanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Winston, Wayne L. and Munirpallam Venkatara- manan. 2003. Introduction to Mathematical Pro- gramming. Brooks/Cole.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Excerpt of document from our test set with discourse annotations. Centers are in double boxes; terms occurring in lexical chains are in oval boxes. Words with the same subscript are members of the same chain (e.g., today, day, second, yesterday)", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "System output on excerpt fromFigure 1.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table><tr><td>model.</td></tr></table>", |
| "html": null, |
| "text": "Compression results: compression rate and relation-based F-score; * sig. diff. from Discourse ILP (p < 0.05 using the Student t test).", |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Human Evaluation Results: average readability ratings and average percentage of questions answered correctly.", |
| "num": null |
| } |
| } |
| } |
| } |