{
"paper_id": "P15-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:12:23.843751Z"
},
"title": "Event-Driven Headline Generation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Wuhan University",
"location": {
"country": "China"
}
},
"email": "ruisun@whu.edu.cn"
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology and Design",
"location": {}
},
"email": "yuezhang@sutd.edu.sg"
},
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "meishanzhang@sutd.edu.sg"
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Wuhan University",
"location": {
"country": "China"
}
},
"email": "dhji@whu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems.",
"pdf_parse": {
"paper_id": "P15-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Headline generation (HG) is a text summarization task, which aims to describe an article (or a set of related paragraphs) using a single short sentence. The task is useful in a number of practical scenarios, such as compressing text for mobile device users (Corston-Oliver, 2001 ), generating table of contents (Erbs et al., 2013) , and email summarization (Wan and McKeown, 2004) . This task is challenging in not only informativeness and readability, which are challenges to common summarization tasks, but also the length reduction, which is unique for headline generation.",
"cite_spans": [
{
"start": 257,
"end": 278,
"text": "(Corston-Oliver, 2001",
"ref_id": "BIBREF8"
},
{
"start": 311,
"end": 330,
"text": "(Erbs et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 357,
"end": 380,
"text": "(Wan and McKeown, 2004)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous headline generation models fall into two main categories, namely extractive HG and abstractive HG (Woodsend et al., 2010; Alfonseca et al., 2013) .",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Woodsend et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 131,
"end": 154,
"text": "Alfonseca et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both consist of two steps: candidate extraction and headline generation. Extractive models choose a set of salient sentences in candidate extraction, and then exploit sentence compression techniques to achieve headline generation (Dorr et al., 2003 Figure 1 : System framework. Zajic et al., 2005) . Abstractive models choose a set of informative phrases for candidate extraction, and then exploit sentence synthesis techniques for headline generation (Soricut and Marcu, 2007; Woodsend et al., 2010; Xu et al., 2010) . Extractive HG and abstractive HG have their respective advantages and disadvantages. Extractive models can generate more readable headlines, because the final title is derived by tailoring human-written sentences.",
"cite_spans": [
{
"start": 230,
"end": 248,
"text": "(Dorr et al., 2003",
"ref_id": "BIBREF10"
},
{
"start": 278,
"end": 297,
"text": "Zajic et al., 2005)",
"ref_id": "BIBREF40"
},
{
"start": 452,
"end": 477,
"text": "(Soricut and Marcu, 2007;",
"ref_id": "BIBREF31"
},
{
"start": 478,
"end": 500,
"text": "Woodsend et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 501,
"end": 517,
"text": "Xu et al., 2010)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, extractive models give less informative titles (Alfonseca et al., 2013) , because sentences are very sparse, making high-recall candidate extraction difficult. In contrast, abstractive models use phrases as the basic processing units, which are much less sparse. However, it is more difficult for abstractive HG to ensure the grammaticality of the generated titles, given that sentence synthesis is still very inaccurate based on a set of phrases with little grammatical information (Zhang, 2013) .",
"cite_spans": [
{
"start": 56,
"end": 80,
"text": "(Alfonseca et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 492,
"end": 505,
"text": "(Zhang, 2013)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an event-driven model for headline generation, which alleviates the disadvantages of both extractive and abstractive HG. The framework of the proposed model is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we use events as the basic processing units for candidate extraction. We use structured tuples to represent the subject, predicate and object of an event. This form of event representation is widely used in open information extraction (Fader et al., 2011; Qiu and Zhang, 2014) . Intuitively, events can be regarded as a trade-off between sentences and phrases. Events are meaningful structures, containing necessary grammatical information, and yet are much less sparse than sentences. We use salience measures of both sentences and phrases for event extraction, and thus our model can be regarded as a combination of extractive and abstractive HG.",
"cite_spans": [
{
"start": 250,
"end": 270,
"text": "(Fader et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 271,
"end": 291,
"text": "Qiu and Zhang, 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "During the headline generation step, A graphbased multi-sentence compression (MSC) model is proposed to generate a final title, given multiple events. First a directed acyclic word graph is constructed based on the extracted events, and then a beam-search algorithm is used to find the best title based on path scoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct experiments on standard datasets for headline generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The results show that headline generation can benefit not only from exploiting events as the basic processing units, but also from the proposed graph-based MSC model. Both our candidate extraction and headline generation methods outperform competitive baseline methods, and our model achieves the best results compared with previous state-of-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous extractive and abstractive models take two main steps, namely candidate extraction and headline generation. Here, we introduce these two types of models according to the two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Candidate Extraction. Extractive models exploit sentences as the basic processing units in this step. Sentences are ranked by their salience according to specific strategies (Dorr et al., 2003; Erkan and Radev, 2004; Zajic et al., 2005) . One of the stateof-the-art approaches is the work of Erkan and Radev (2004) , which exploits centroid, position and length features to compute sentence salience. We re-implemented this method as our baseline sentence ranking method. In this paper, we use SentRank to denote this method.",
"cite_spans": [
{
"start": 174,
"end": 193,
"text": "(Dorr et al., 2003;",
"ref_id": "BIBREF10"
},
{
"start": 194,
"end": 216,
"text": "Erkan and Radev, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 217,
"end": 236,
"text": "Zajic et al., 2005)",
"ref_id": "BIBREF40"
},
{
"start": 292,
"end": 314,
"text": "Erkan and Radev (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extractive Headline Generation",
"sec_num": "2.1"
},
{
"text": "Headline Generation. Given a set of sentences, extractive models exploit sentence compression techniques to generate a final title. Most previous work exploits single-sentence compression (SSC) techniques. Dorr et al. (2003) proposed the Hedge Trimmer algorithm to compress a sentence by making use of handcrafted linguistically-based rules. Alfonseca et al. (2013) introduce a multi-sentence compression (MSC) model into headline generation, using it as a baseline in their work. They indicated that the most important information is distributed across several sentences in the text.",
"cite_spans": [
{
"start": 206,
"end": 224,
"text": "Dorr et al. (2003)",
"ref_id": "BIBREF10"
},
{
"start": 342,
"end": 365,
"text": "Alfonseca et al. (2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extractive Headline Generation",
"sec_num": "2.1"
},
{
"text": "Candidate Extraction. Different from extractive models, abstractive models exploit phrases as the basic processing units. A set of salient phrases are selected according to specific principles during candidate extraction (Schwartz, 01; Soricut and Marcu, 2007; Xu et al., 2010; Woodsend et al., 2010) . Xu et al. (2010) propose to rank phrases using background knowledge extracted from Wikipedia. Woodsend et al. (2010) use supervised models to learn the salience score of each phrase. Here, we use the work of Soricut and Marcu (2007) , namely PhraseRank, as our baseline phrase ranking method, which is an unsupervised model without external resources. The method exploits unsupervised topic discovery to find a set of salient phrases.",
"cite_spans": [
{
"start": 221,
"end": 235,
"text": "(Schwartz, 01;",
"ref_id": null
},
{
"start": 236,
"end": 260,
"text": "Soricut and Marcu, 2007;",
"ref_id": "BIBREF31"
},
{
"start": 261,
"end": 277,
"text": "Xu et al., 2010;",
"ref_id": "BIBREF38"
},
{
"start": 278,
"end": 300,
"text": "Woodsend et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 303,
"end": 319,
"text": "Xu et al. (2010)",
"ref_id": "BIBREF38"
},
{
"start": 397,
"end": 419,
"text": "Woodsend et al. (2010)",
"ref_id": "BIBREF37"
},
{
"start": 511,
"end": 535,
"text": "Soricut and Marcu (2007)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Headline Generation",
"sec_num": "2.2"
},
{
"text": "Headline Generation. In the headline generation step, abstractive models exploit sentence synthesis technologies to accomplish headline generation. Zajic et al. (2005) exploit unsupervised topic discovery to find key phrases, and use the Hedge Trimmer algorithm to compress candidate sentences. One or more key phrases are added into the compressed fragment according to the length of the headline. Soricut and Marcu (2007) employ WIDL-expressions to generate headlines. Xu et al. (2010) employ keyword clustering based on several bag-of-words models to construct a headline.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "Zajic et al. (2005)",
"ref_id": "BIBREF40"
},
{
"start": 399,
"end": 423,
"text": "Soricut and Marcu (2007)",
"ref_id": "BIBREF31"
},
{
"start": 471,
"end": 487,
"text": "Xu et al. (2010)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Headline Generation",
"sec_num": "2.2"
},
{
"text": "Woodsend et al. (2010) use quasi-synchronous grammar (QG) to optimize phrase selection and surface realization preferences jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Headline Generation",
"sec_num": "2.2"
},
{
"text": "Similar to extractive and abstractive models, the proposed event-driven model consists of two steps, namely candidate extraction and headline generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Model",
"sec_num": "3"
},
{
"text": "We exploit events as the basic units for candidate extraction. Here an event is a tuple (S, P, O), where S is the subject, P is the predicate and O is the object. For example, for the sentence \"Ukraine Delays Announcement of New Government\", the event is (Ukraine, Delays, Announcement). This type of event structures has been used in open information extraction (Fader et al., 2011) , and has a range of NLP applications (Ding et al., 2014; Ng et al., 2014) .",
"cite_spans": [
{
"start": 363,
"end": 383,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 422,
"end": 441,
"text": "(Ding et al., 2014;",
"ref_id": "BIBREF9"
},
{
"start": 442,
"end": 458,
"text": "Ng et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": "A sentence is a well-formed structure with complete syntactic information, but can contain redundant information for text summarization, which makes sentences very sparse. Phrases can be used to avoid the sparsity problem, but with little syntactic information between phrases, fluent headline generation is difficult. Events can be regarded as a trade-off between sentences and phrases. They are meaningful structures without redundant components, less sparse than sentences and containing more syntactic information than phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": "In our system, candidate event extraction is performed on a bipartite graph, where the two types of nodes are lexical chains (Section 3.1.2) and events (Section 3.1.1), respectively. Mutual Reinforcement Principle (Zha, 2002) is applied to jointly learn chain and event salience on the bipartite graph for a given input. We obtain the top-k candidate events by their salience measures.",
"cite_spans": [
{
"start": 214,
"end": 225,
"text": "(Zha, 2002)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": "We apply an open-domain event extraction approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Events",
"sec_num": "3.1.1"
},
{
"text": "Different from traditional event extraction, for which types and arguments are predefined, open event extraction does not have a closed set of entities and relations (Fader et al., 2011) . We follow Hu's work (Hu et al., 2013) to extract events.",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 209,
"end": 226,
"text": "(Hu et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Events",
"sec_num": "3.1.1"
},
{
"text": "Given a text, we first use the Stanford dependency parser 1 to obtain the Stanford typed dependency structures of the sentences (Marneffe and Manning, 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Events",
"sec_num": "3.1.1"
},
{
"text": "Then we focus on For example, given the sentence, \"the Keenans could demand the Aryan Nations' assets\", Figure 2 present its partial parsing tree. Based on the parsing results, two event arguments are obtained: nsubj(demand, Keenans) and dobj(demand, assets). The two event arguments are merged into one event: (Keenans, demand, assets).",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 113,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extracting Events",
"sec_num": "3.1.1"
},
{
"text": "Lexical chains are used to link semanticallyrelated words and phrases (Morris and Hirst, 1991; Barzilay and Elhadad, 1997) . A lexical chain is analogous to a semantic synset. Compared with words, lexical chains are less sparse for event ranking.",
"cite_spans": [
{
"start": 70,
"end": 94,
"text": "(Morris and Hirst, 1991;",
"ref_id": "BIBREF25"
},
{
"start": 95,
"end": 122,
"text": "Barzilay and Elhadad, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "Given a text, we follow Boudin and Morin (2013) to construct lexical chains based on the following principles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "1. All words that are identical after stemming are treated as one word;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "2. All NPs with the same head word fall into one lexical chain; 2 3. A pronoun is added to the corresponding lexical chain if it refers to a word in the chain (The coreference resolution is performed using the Stanford Coreference Resolution system); 3 4. Lexical chains are merged if their main words are in the same synset of WordNet. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "At initialization, each word in the document is a lexical chain. We repeatedly merge existing chains by the four principles above until convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "In particular, we focus on content words only, including verbs, nouns and adjective words. After the merging, each lexical chain represents a word cluster, and the first occuring word in it can be used as the main word of chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Lexical Chains",
"sec_num": "3.1.2"
},
{
"text": "Intuitively, one word should be more important if it occurs in more important events. Similarly, one event should be more important if it includes more important words. Inspired by this, we construct a bipartite graph between lexical chains and events, shown in Figure 3 , and then exploit MRP to jointly learn the salience of lexical chains and events. MRP has been demonstrated effective for jointly learning the vertex weights of a bipartite graph (Zhang et al., 2008; Ventura et al., 2013) .",
"cite_spans": [
{
"start": 451,
"end": 471,
"text": "(Zhang et al., 2008;",
"ref_id": "BIBREF42"
},
{
"start": 472,
"end": 493,
"text": "Ventura et al., 2013)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "Given a text, we construct bipartite graph between the lexical chains and events, with an edge being constructed between a lexical chain and an event if the event contains a word in the lexical chain. Suppose that there are n events We compute the final sal(e) and sal(l) iteratively by MRP. At each step, sal(e i ) and sal(l j ) are computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sal(e i ) \u221d m j=1 r ij \u00d7 sal(l j ) sal(l j ) \u221d n i=1 r ij \u00d7 sal(e i ) r ij = (l j ,e i )\u2208G bi w(l j ) \u2022 w(e i ) A",
"eq_num": "(1)"
}
],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "where r ij \u2208 R denotes the cohesion between lexicon chain l i and event e j , A is a normalization factor, sal(\u2022) denotes the salience, and the initial values of sal(e) and sal(t) can be assigned randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "The remaining problem is how to define the salience score of a given lexicon chain l i and a given event e j . In this work, we use the guidance of abstractive and extractive models to compute Lexical Chains Events Figure 3 : Bipartite graph where two vertex sets denote lexical chains and events, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "sal(l j ) and sal(e i ), respectively, as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w(l j ) = w\u2208l j sal abs (w) w(e i ) = s\u2208Sen(e i ) sal ext (s)",
"eq_num": "(2)"
}
],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "where sal abs (\u2022) denotes the word salience score of an abstractive model, sal ext (\u2022) denotes the sentence salience score of an extractive model, and Sen(e i ) denotes the sentence set where e i is extracted from. We exploit our baseline sentence ranking method, SentRank, to obtain the sentence salience score, and use our baseline phrase ranking method, PhraseRank, to obtain the phrase salience score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Salient Events",
"sec_num": "3.1.3"
},
{
"text": "We use a graph-based multi-sentence compression (MSC) model to generate the final title for the proposed event-driven model. The model is inspired by Filippova (2010) . First, a weighted directed acyclic word graph is built, with a start node and an end node in the graph. A headline can be obtained by any path from the start node to the end node. We measure each candidate path by a scoring function. Based on the measurement, we exploit a beam-search algorithm to find the optimum path.",
"cite_spans": [
{
"start": 150,
"end": 166,
"text": "Filippova (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Headline Generation",
"sec_num": "3.2"
},
{
"text": "Given a set of candidate events CE, we extract all the sentences that contain the events. In particular, we add two artificial words, S and E , to the start position and end position of all sentences, respectively. Following Filippova (2010), we extract all words in the sentences as graph vertexes, and then construct edges based on these words. Filippova 2010 for all the word pairs that are adjacent in one sentence. The title generated using this strategy can mistakenly contain common word bigrams( i.e. adjacent words) in different sentences. To address this, we change the strategy slightly, by adding edges for all word pairs of one sentence in the original order. In another words, if word w j occurs after w i in one sentence, then we add an edge w i \u2192 w j for the graph. Figure 4 gives an example of the word graph. The search space of the graph is larger compared with that of Filippova (2010) because of more added edges. Different from Filippova (2010), salience information is introduced into the calculation of the weights of vertexes. One word that occurs in more salient candidate should have higher weight. Given a graph G = (",
"cite_spans": [],
"ref_spans": [
{
"start": 782,
"end": 790,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "V, E), where V = {V 1 , \u2022 \u2022 \u2022 , V n } denotes the word nodes and E = {E ij \u2208 {0, 1}, i, j \u2208 [1, n]} denotes the edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "The vertex weight is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "w(V i ) = e\u2208CE sal(e) exp{\u2212dist(V i .w, e)} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "where sal(e) is the salience score of an event from the candidate extraction step, V i .w denotes the word of vertex V i , and dist(w, e) denotes the distance from the word w to the event e, which are defined by the minimum distance from w to all the related words of e in a sentence by the dependency path 5 between them. Intuitively, equation 3 demonstrates that a vertex is salient when its corresponding word is close to salient events. It is worth noting that the formula can adapt to extractive and abstractive models as well, by replacing events with sentences and phrases. We use them for the SentRank and PhraseRank baseline systems in Section 4.3, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "The equation to compute the edge weight is adopted from Filippova 2010:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w (E ij ) = s rdist(V i .w, V j .w) w(E ij ) = w(V i )w(V j ) \u2022 w (E ij ) w(V i ) + w(V j )",
"eq_num": "(4)"
}
],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "where w (E ij ) refers to the sum of rdist(V i .w, V j .w) over all sentences, and rdist(\u2022) denotes the reciprocal distance of two words in a sentence by the dependency path. By the formula, an edge is salient when the corresponding vertex weights are large or the corresponding words are close.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Graph Construction",
"sec_num": "3.2.1"
},
{
"text": "The key to our MSC model is the path scoring function. We measure a candidate path based on two aspects. Besides the sum edge score of the path, we exploit a trigram language model to compute a fluency score of the path. Language models have been commonly used to generate more readable titles. The overall score of a path is compute by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Method",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(p) = edge(p) + \u03bb \u00d7 flu(p) edge(p) = E ij \u2208p ln{w(E ij )} n flu(p) = i ln{p(w i |w i\u22122 w i\u22121 )} n",
"eq_num": "(5)"
}
],
"section": "Scoring Method",
"sec_num": "3.2.2"
},
{
"text": "where p is a candidate path and the corresponding word sequence of p is w 1 \u2022 \u2022 \u2022 w n . A trigram language model is trained using SRILM 6 on English Gigaword (LDC2011T07).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Method",
"sec_num": "3.2.2"
},
{
"text": "Beam search has been widely used aiming to find the sub optimum result (Collins and Roark, 2004; Zhang and Clark, 2011) , when exact inference is extremely difficult. Assuming our word graph has a vertex size of n, the worst computation complexity is O(n 4 ) when using a trigram language model, which is time consuming.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Collins and Roark, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 97,
"end": 119,
"text": "Zhang and Clark, 2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search",
"sec_num": "3.2.3"
},
{
"text": "Input: Figure 5 : The beam-search algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Beam Search",
"sec_num": "3.2.3"
},
{
"text": "Using beam search, assuming the beam size is B, the time complexity decreases to O(Bn 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search",
"sec_num": "3.2.3"
},
{
"text": "Pseudo-code of our beam search algorithm is shown in Figure 5 . During search, we use candidates to save a fixed size (B) of partial results. For each iteration, we generate a set of new candidates by adding one vertex from the graph, computing their scores, and maintaining the top B candidates for the next iteration. If one candidate reaches the end of the graph, we do not expand it, directly adding it into the new candidate set according to its current score. If all the candidates reach the end, the searching algorithm terminates and the result path is the candidate from candidates with the highest score.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 61,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Beam Search",
"sec_num": "3.2.3"
},
{
"text": "We use the standard HG test dataset to evaluate our model, which consists of 500 articles from DUC-04 task 1 7 , where each article is provided with four reference headlines. In particular, we use the first 100 articles from DUC-07 as our development set. There are averaged 40 events per article in the two datasets. All the pre-processing steps, including POS tagging, lemma analysis, dependency parsing and anaphora resolution, are conducted using the Stanford NLP tools (Marneffe and Manning, 2008) . The MRP iteration number is set to 10.",
"cite_spans": [
{
"start": 474,
"end": 502,
"text": "(Marneffe and Manning, 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "We use ROUGE (Lin, 2004) to automatically measure the model performance, which has been widely used in summarization tasks (Wang et al., 2013; Ng et al., 2014) . We focus on Rouge1 and Rouge2 scores, following Xu et al. (2010) . In addition, we conduct human evaluations, using the same method as Woodsend et al. (2010) . Four participants are asked to rate the generated headlines by three criteria: informativeness (how much important information in the article does the headline describe?), fluency (is it fluent to read?) and coherence (does it capture the topic of article?). Each headline is given a subjective score from 0 to 5, with 0 being the worst and 5 being the best. The first 50 documents from the test set and their corresponding headlines are selected for human rating. We conduct significant tests using t-test.",
"cite_spans": [
{
"start": 13,
"end": 24,
"text": "(Lin, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 123,
"end": 142,
"text": "(Wang et al., 2013;",
"ref_id": "BIBREF36"
},
{
"start": 143,
"end": 159,
"text": "Ng et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 210,
"end": 226,
"text": "Xu et al. (2010)",
"ref_id": "BIBREF38"
},
{
"start": 297,
"end": 319,
"text": "Woodsend et al. (2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "There are three important parameters in the proposed event-driven model, including the beam size B, the fluency weight \u03bb and the number of candidate events N . We find the optimum parameters on development dataset in this section. For efficiency, the three parameters are optimized separately. The best performance is achieved with B = 8, \u03bb = 0.4 and N = 10. We report the model results on the development dataset to study the influences of the three parameters, respectively, with the other two parameters being set with their best value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Results",
"sec_num": "4.2"
},
{
"text": "We perform experiments with different beam widths. Figure 6 shows the results of the proposed model with beam sizes of 1, 2, 4, 8, 16, 32, 64. As can be seen, our model can achieve the best performances when the beam size is set to 8. Larger beam sizes do not bring better results.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Beam Size",
"sec_num": "4.2.1"
},
{
"text": "The fluency score is used for generating readable titles, while the edge score is used for generating informative titles. The balance between them is important. By default, we set one to the weight of edge score, and find the best weight \u03bb for the fluency score. We set \u03bb ranging from 0 to 1 with and interval of 0.1, to investigate the influence of Figure 7 shows the results. The best result is obtained when \u03bb = 0.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Influence of Fluency Weight",
"sec_num": "4.2.2"
},
{
"text": "Ideally, all the sentences of an original text should be considered in multi-sentence compression. But an excess of sentences would bring more noise. We suppose that the number of candidate events N is important as well. To study its influence, we report the model results with different N , from 1 to 15 with an interval of 1. As shown in Figure 8 , the performance increases significantly from 1 to 10, and no more gains when N > 10. The performance decreases drastically when M ranges from 12 to 15. Table 1 shows the final results on the test dataset. The performances of the proposed eventdriven model are shown by EventRank. In addition, we use our graph-based MSC model to 8 Preliminary results show that \u03bb is better below one. 9 The mark * denotes the results are inaccurate, which are guessed from the figures in the published paper. generate titles for SentRank and PhraseRank, respectively, as mentioned in Section 3.2.1. By comparison with the two models, we can examine the effectiveness of the event-driven model. As shown in Table 1 , the event-driven model achieves the best scores on both Rouge1 and Rouge2, demonstrating events are more effective than sentences and phrases. Further, we compare our proposed MSC method with the MSC proposed by Filippova (2010) , to study the effectiveness of our novel MSC. We use MSC 10 and SalMSC 11 to Table 2 : Results from the manual evaluation. The mark \u2021 denotes the result is significantly better with a p-value below 0.01.",
"cite_spans": [
{
"start": 681,
"end": 682,
"text": "8",
"ref_id": null
},
{
"start": 1263,
"end": 1279,
"text": "Filippova (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 340,
"end": 349,
"text": "Figure 8",
"ref_id": null
},
{
"start": 504,
"end": 511,
"text": "Table 1",
"ref_id": "TABREF5"
},
{
"start": 1041,
"end": 1048,
"text": "Table 1",
"ref_id": "TABREF5"
},
{
"start": 1358,
"end": 1365,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Candidate Event Count",
"sec_num": "4.2.3"
},
{
"text": "SentRank, PhraseRank and EventRank to denote their MSC method and our proposed MSC, respectively, applying them, respectively. As shown in Table 1 , better performance is achieved by our MSC, demonstrating the effectiveness of our proposed MSC. Similarly, the event-driven model can achieve the best results. We report results of previous state-of-the-art systems as well. SentRank+SSC denotes the result of Erkan and Radev (2004) , which uses our SentRank and SSC to obtain the final title. Topiary denotes the result of Zajic et al. (2005) , which is an early abstractive model. Woodsend denotes the result of Woodsend et al. (2010) , which is an abstractive model using a quasisynchronous grammar to generate a title. As shown in Table 1 , MSC is significantly better than SSC, and our event-driven model achieves the best performance, compared with state-of-the-art systems.",
"cite_spans": [
{
"start": 408,
"end": 430,
"text": "Erkan and Radev (2004)",
"ref_id": "BIBREF13"
},
{
"start": 522,
"end": 541,
"text": "Zajic et al. (2005)",
"ref_id": "BIBREF40"
},
{
"start": 612,
"end": 634,
"text": "Woodsend et al. (2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 1",
"ref_id": "TABREF5"
},
{
"start": 733,
"end": 740,
"text": "Table 1",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Final Results",
"sec_num": "4.3"
},
{
"text": "Following Alfonseca et al. (2013) , we conduct human evaluation also. The results are shown in Table 2 , by three aspects: informativeness, fluency and coherence. The overall tendency is similar to the results, and the event-driven model achieves the best results.",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Alfonseca et al. (2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Results",
"sec_num": "4.3"
},
{
"text": "We show several representative examples of the proposed event-driven model, in comparison with the extractive and abstractive models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Outputs",
"sec_num": "4.4"
},
{
"text": "The examples are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Example Outputs",
"sec_num": "4.4"
},
{
"text": "In the first example, the results of both SentRank and PhraseRank contain the redundant phrase \"catastrophe Tuesday\". The output of PhraseRank is less fluent compared with that of SentRank. The preposition \"for\" is not recovered by the headline generation system PhraseRank. In contrast, the output of EventRank is better, capturing the major event in the reference title. In the second example, the outputs of three systems all lose the phrase \"Ibero-American summit\". SentRank gives different additional information compared with PhraseRank and EventRank. Overall, the three outputs can be regarded as comparable. PhraseRank also has a fluency problem by ignoring some function words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Outputs",
"sec_num": "4.4"
},
{
"text": "In the third example, SentRank does not capture the information on \"demands for talks\". PhraseRank discards the preposition word \"for\". The output of EventRank is better, being both more fluent and more informative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Outputs",
"sec_num": "4.4"
},
{
"text": "From the three examples, we can see that SentRank tends to generate more readable titles, but may lose some important information. PhraseRank tends to generate a title with more important words, but the fluency is relatively weak even with MSC. EventRank combines the advantages of both SentRank and PhraseRank, generating titles that contain more important events with complete structures. The observation verifies our hypothesis in the introduction -that extractive models have the problem of low information coverage, and abstractive models have the problem of poor grammaticality. The event-driven mothod can alleviate both issues since event offer a trade-off between sentence and phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Outputs",
"sec_num": "4.4"
},
{
"text": "Our event-driven model is different from traditional extractive (Dorr et al., 2003; Erkan and Radev, 2004; Alfonseca et al., 2013) and abstractive models (Zajic et al., 2005; Soricut and Marcu, 2007; Woodsend et al., 2010; Xu et al., 2010) in that events are used as the basic processing units instead of sentences and phrases. As mentioned above, events are a trade-off between sentences and phrases, avoiding sparsity and structureless problems. In particular, our event-driven model can interact with sentences and phrases, thus is a light combination for two traditional models.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "(Dorr et al., 2003;",
"ref_id": "BIBREF10"
},
{
"start": 84,
"end": 106,
"text": "Erkan and Radev, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 107,
"end": 130,
"text": "Alfonseca et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 154,
"end": 174,
"text": "(Zajic et al., 2005;",
"ref_id": "BIBREF40"
},
{
"start": 175,
"end": 199,
"text": "Soricut and Marcu, 2007;",
"ref_id": "BIBREF31"
},
{
"start": 200,
"end": 222,
"text": "Woodsend et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 223,
"end": 239,
"text": "Xu et al., 2010)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The event-driven model is mainly inspired by Alfonseca et al. (2013) , who exploit events for multi-document headline generation. They leverage titles of sub-documents for supervised training. In contrast, we generate a title for a single document using an unsupervised model. We use novel approaches for event ranking and title generation.",
"cite_spans": [
{
"start": 45,
"end": 68,
"text": "Alfonseca et al. (2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In recent years, sentence compression (Galanis and Androutsopoulos, 2010; Yoshikawa and Iida, 2012; Wang et al., 2013; Thadani, 2014) has received much attention. Some methods can be directly applied for multidocument summarization (Wang et al., 2013; . To our knowledge, few studies have been explored on applying them in headline generation.",
"cite_spans": [
{
"start": 38,
"end": 73,
"text": "(Galanis and Androutsopoulos, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 74,
"end": 99,
"text": "Yoshikawa and Iida, 2012;",
"ref_id": "BIBREF39"
},
{
"start": 100,
"end": 118,
"text": "Wang et al., 2013;",
"ref_id": "BIBREF36"
},
{
"start": 119,
"end": 133,
"text": "Thadani, 2014)",
"ref_id": "BIBREF32"
},
{
"start": 232,
"end": 251,
"text": "(Wang et al., 2013;",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Multi-sentence compression based on word graph was first proposed by Filippova (2010) . Some subsequent work was presented recently. Boudin and Morin (2013) propose that the key phrase is helpful to sentence generation. The key phrases are extracted according to syntactic pattern and introduced to identify shortest path in their work. Mehdad et al. (2013; Mehdad et al. (2014) introduce the MSC based on word graph into meeting summarization. Tzouridis et al. (2014) cast multi-sentence compression as a structured predication problem. They use a largemargin approach to adapt parameterised edge weights to the data in order to acquire the shortest path. In their work, the sentences introduced to a word graph are treated equally, and the edges in the graph are constructed according to the adjacent order in original sentence.",
"cite_spans": [
{
"start": 69,
"end": 85,
"text": "Filippova (2010)",
"ref_id": "BIBREF15"
},
{
"start": 133,
"end": 156,
"text": "Boudin and Morin (2013)",
"ref_id": null
},
{
"start": 337,
"end": 357,
"text": "Mehdad et al. (2013;",
"ref_id": "BIBREF23"
},
{
"start": 358,
"end": 378,
"text": "Mehdad et al. (2014)",
"ref_id": "BIBREF24"
},
{
"start": 445,
"end": 468,
"text": "Tzouridis et al. (2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our MSC model is also inspired by Filippova (2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our approach is more aggressive than their approach, generating compressions with arbitrary length by using a different edge construction strategy. In addition, our search algorithm is also different from theirs. Our graph-based MSC model is also similar in spirit to sentence fusion, which has been used for multi-document summarization (Barzilay and McKeown, 2005; Elsner and Santhanam, 2011) .",
"cite_spans": [
{
"start": 338,
"end": 366,
"text": "(Barzilay and McKeown, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 367,
"end": 394,
"text": "Elsner and Santhanam, 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We proposed an event-driven model headline generation, introducing a graph-based MSC model to generate the final title, based on a set of events. Our event-driven model can incorporate sentence and phrase salience, which has been used in extractive and abstractive HG models. The proposed graph-based MSC model is not limited to our event-driven model. It can be applied on extractive and abstractive models as well. Experimental results on DUC-04 demonstrate that event-driven model can achieve better results than extractive and abstractive models, and the proposed graph-based MSC model can bring improved performances compared with previous MSC techniques. Our final event-driven model obtains the best result on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "For future work, we plan to explore two directions. Firstly, we plan to introduce event relations to learning event salience. In addition, we plan to investigate other methods about multisentence compression and sentence fusion, such as supervised methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "NPs are extracted according to the dependency relations nn and amod. As shown inFigure 2, we can extract the noun phrase Aryan Nations according to the dependency relation nn(Nations, Aryan).3 http://nlp.stanford.edu/software/dcoref.shtml 4 http://wordnet.princeton.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The distance is +\u221e when e and w are not in one sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.speech.sri.com/projects/srilm/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://duc.nist.gov/duc2004/tasks.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The MSC source code, published byBoudin and Morin (2013), is available at https://github.com/boudinfl/takahe.11 Our source code is available at https://github.com/ dram218/WordGraphCompression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank all reviewers for their detailed comments.This work is supported by the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "HEADY: News headline abstraction through event pattern clustering",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Garrido",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL 2013",
"volume": "",
"issue": "",
"pages": "1243--1253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Alfonseca, Daniele Pighin and Guillermo Garrido. 2013. HEADY: News headline abstraction through event pattern clustering. In Proceedings of ACL 2013,pages 1243-1253.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using Lexical Chains for Text Summarization",
"authors": [],
"year": null,
"venue": "Proceedings of the Intelligent Scalable Text Summarization Workshop(ISTS'97)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Using Lexical Chains for Text Summarization. In Proceedings of the Intelligent Scalable Text Summarization Workshop(ISTS'97), Madrid.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sentence fusion for multidocument news summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "3",
"pages": "297--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3), pages 297-328.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Keyphrase Extraction for N-best Reranking in Multi-Sentence Compression",
"authors": [],
"year": null,
"venue": "Proccedings of the NAACL HLT 2013 conference",
"volume": "",
"issue": "",
"pages": "298--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keyphrase Extraction for N-best Reranking in Multi-Sentence Compression. In Proccedings of the NAACL HLT 2013 conference, page 298-305.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Discourse Constraints for Document Compression",
"authors": [
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "411--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Clarke and Mirella Lapata. 2010. Discourse Constraints for Document Compression. Computational Linguistics, 36(3), pages 411- 441.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incremental Parsing with the Perceptron Algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL 2004",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of ACL 2004, pages 111-118.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text compaction for display on very small screens",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the NAACL Workshop on Automatic Summarization",
"volume": "",
"issue": "",
"pages": "89--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corston-Oliver, Simon. 2001. Text compaction for display on very small screens. In Proceedings of the NAACL Workshop on Automatic Summarization, Pittsburg, PA, 3 June 2001, pages 89-98.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using Structured Events to Predict Stock Price Movement : An Empirical Investigation",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP 2014",
"volume": "",
"issue": "",
"pages": "1415--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan. 2014. Using Structured Events to Predict Stock Price Movement : An Empirical Investigation. In Proceedings of EMNLP 2014, pages 1415-1425.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Hedge trimmer: A parse-and-trim approach to headline generation",
"authors": [
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zajic",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2003,
"venue": "proceedings of the HLT-NAACL 03 on Text summarization workshop",
"volume": "5",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In proceedings of the HLT-NAACL 03 on Text summarization workshop, volume 5, pages 1-8.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to fuse disparate sentences",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Santhanam",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL 2011",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner and Deepak Santhanam. 2011. Learning to fuse disparate sentences. In Proceedings of ACL 2011, pages 54-63.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hierarchy Identification for Automatically Generating Table-of-Contents",
"authors": [
{
"first": "Nicolai",
"middle": [],
"last": "Erbs",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "252--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolai Erbs, Iryna Gurevych and Torsten Zesch. 2013. Hierarchy Identification for Automatically Generating Table-of-Contents. In Proceedings of Recent Advances in Natural Language Processing, Hissar, Bulgaria, pages 252-260.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "LexRank : Graph-based Lexical Centrality as Salience in Text Summarization",
"authors": [
{
"first": "Gunes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gunes Erkan and Dragomir R Radev. 2004. LexRank : Graph-based Lexical Centrality as Salience in Text Summarization. Journal of Artificial Intelligence Research 22, 2004, pages 457-479.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP 2011",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fader A, Soderland S, Etzioni O. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP 2011, pages 1535-1545.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-sentence compression: Finding shortest paths in word graphs",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Coling 2010",
"volume": "",
"issue": "",
"pages": "322--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of Coling 2010, pages 322-330.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An extractive supervised two-stage method for sentence compression",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL 2010",
"volume": "",
"issue": "",
"pages": "885--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Galanis and Ion Androutsopoulos. 2010. An extractive supervised two-stage method for sentence compression. In Proceedings of NAACL 2010, pages 885-893.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Centering: A framework for modeling the local coherence of discourse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barbara",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Weinstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. Grosz and Scott Weinstein and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, volume 21, pages 203-225.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised Induction of Contingent Event Pairs from Film Scenes",
"authors": [],
"year": null,
"venue": "Proceedings of EMNLP 2013",
"volume": "",
"issue": "",
"pages": "369--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Unsupervised Induction of Contingent Event Pairs from Film Scenes. In Proceedings of EMNLP 2013, pages 369-379.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving Multi-documents Summarization by Sentence Compression based on Expanded Constituent Parse Trees",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP 2014",
"volume": "",
"issue": "",
"pages": "691--701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Li,Yang Liu, Fei Liu, Lin Zhao, Fuliang Weng. 2014. Improving Multi-documents Summarization by Sentence Compression based on Expanded Constituent Parse Trees. In Proceedings of EMNLP 2014, pages 691-701.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branckes Out: Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branckes Out: Proceedings of the ACL-04 Workshop, pages 74-81.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Summarization with a joint model for sentence extraction and compression",
"authors": [
{
"first": "F",
"middle": [
"T"
],
"last": "Andre",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Martins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre F.T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 1-9.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Abstractive Meeting Summarization with Entailment and Fusion",
"authors": [
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"W"
],
"last": "Tompa",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "136--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yashar Mehdad, Giuseppe Carenini, Frank W.Tompa and Raymond T.Ng. 2013. Abstractive Meeting Summarization with Entailment and Fusion. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 136-146.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries",
"authors": [
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL 2014",
"volume": "",
"issue": "",
"pages": "1220--1230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yashar Mehdad, Giuseppe Carenini and Raymond T.Ng. 2014. Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries. In Proceedings of ACL 2014, pages 1220- 1230.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "1",
"pages": "21--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1), pages 21-48.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The stanford typed dependencies representation",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "COLING 2008 Workshop on Cross-framework and Cross-domain Parser Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The stanford typed dependencies representation. In COLING 2008 Workshop on Cross-framework and Cross-domain Parser Evaluation.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploiting Timelines to Enhance Multidocument Summarization",
"authors": [
{
"first": "Jun-Ping",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL 2014",
"volume": "",
"issue": "",
"pages": "923--933",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Ping Ng, Yan Chen, Min-Yen Kan, Zhoujun Li. 2014. Exploiting Timelines to Enhance Multi- document Summarization. Proceedings of ACL 2014, pages 923-933.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "ZORE: A Syntaxbased System for Chinese Open Relation Extraction",
"authors": [
{
"first": "Likun",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP 2014",
"volume": "",
"issue": "",
"pages": "1870--1880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Likun Qiu and Yue Zhang. 2014. ZORE: A Syntax- based System for Chinese Open Relation Extraction. Proceedings of EMNLP 2014, pages 1870-1880.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Polynomial Time Joint Structural Inference for Sentence Compression",
"authors": [
{
"first": "Robert",
"middle": [
"G"
],
"last": "Sargent",
"suffix": ""
}
],
"year": 1988,
"venue": "Management Science",
"volume": "34",
"issue": "10",
"pages": "1231--1251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert G. Sargent. 1988. Polynomial Time Joint Structural Inference for Sentence Compression. Management Science, 34(10), pages 1231-1251.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised topic discovery",
"authors": [
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of workshop on language modeling and information retrieval",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz R. 1988. Unsupervised topic discovery. In Proceedings of workshop on language modeling and information retrieval, pages 72-77.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Abstractive headline generation using WIDL-expressions. Information Processing and Management",
"authors": [
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "43",
"issue": "",
"pages": "1536--1548",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Soricut, and D. Marcu. 2007. Abstractive headline generation using WIDL-expressions. Information Processing and Management, 43(6), pages 1536- 1548.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Approximation Strategies for Multi-Structure Sentence Compression",
"authors": [
{
"first": "",
"middle": [],
"last": "Kapil Thadani",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL 2014",
"volume": "",
"issue": "",
"pages": "1241--1251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kapil Thadani. 2014. Approximation Strategies for Multi-Structure Sentence Compression. Proceedings of ACL 2014, pages 1241-1251.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning to Summarise Related Sentences",
"authors": [
{
"first": "Emmanouil",
"middle": [],
"last": "Tzouridis",
"suffix": ""
},
{
"first": "Abdul",
"middle": [],
"last": "Nasir",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Brefeld",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014",
"volume": "",
"issue": "",
"pages": "1636--1647",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanouil Tzouridis, Jamal Abdul Nasir and Ulf Brefeld. 2014. Learning to Summarise Related Sentences. Proceedings of COLING 2014,Dublin, Ireland, August 23-29 2014. pages 1636-1647.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Automatic keyframe selection based on Mutual Reinforcement Algorithm",
"authors": [
{
"first": "Carles",
"middle": [],
"last": "Ventura",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Giro-I-Nieto",
"suffix": ""
},
{
"first": "Veronica",
"middle": [],
"last": "Vilaplana",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Giribet",
"suffix": ""
},
{
"first": "Eusebio",
"middle": [],
"last": "Carasusan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of 11th international workshop on content-based multimedia indexing(CBMI)",
"volume": "",
"issue": "",
"pages": "29--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carles Ventura, Xavier Giro-i-Nieto, Veronica Vilaplana, Daniel Giribet, and Eusebio Carasusan. 2013. Automatic keyframe selection based on Mutual Reinforcement Algorithm. In Proceedings of 11th international workshop on content-based multimedia indexing(CBMI), pages 29-34.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Generating overview summaries of ongoing email thread discussions",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "1384--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Wan and Kathleen McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING 2004, Geneva, Switzerland, 2004, pages 1384-1394.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A sentence compression based framework to query-focused mutli-document summarization",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hema",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL 2013",
"volume": "",
"issue": "",
"pages": "1384--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, Claire Cardie. 2013. A sentence compression based framework to query-focused mutli-document summarization. In Proceedings of ACL 2013, Sofia, Bulgaria, August 4-9 2013, pages 1384-1394.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Title generation with quasi-synchronous grammar",
"authors": [
{
"first": "Kristian",
"middle": [],
"last": "Woodsend",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristian Woodsend, Yansong Feng and Mirella Lapata. 2010. Title generation with quasi-synchronous grammar. In Proceedings of EMNLP 2010, pages 513-523.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Keyword extraction and headline generation using novel work features",
"authors": [
{
"first": "Songhua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shaohui",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"C",
"M"
],
"last": "Lau",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of AAAI 2010",
"volume": "",
"issue": "",
"pages": "1461--1466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Songhua Xu, Shaohui Yang and Francis C.M. Lau. 2010. Keyword extraction and headline generation using novel work features. In Proceedings of AAAI 2010, pages 1461-1466.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Sentence Compression with Semantic Role Constraints",
"authors": [
{
"first": "Katsumasa",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL 2012",
"volume": "",
"issue": "",
"pages": "349--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsumasa Yoshikawa and Ryu Iida. 2012. Sentence Compression with Semantic Role Constraints. In Proceedings of ACL 2012, pages 349-353.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Headline generation for written and broadcast news. lamp-tr-120",
"authors": [
{
"first": "David",
"middle": [],
"last": "Zajic",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Zajic, Bonnie Dorr and Richard Schwartz. 2005. Headline generation for written and broadcast news. lamp-tr-120, cs-tr-4698.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Generic summarization and keyphrase extraction using mutual reinforement principle and sentence clustering",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Zha",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of SIGIR 2002",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyuan Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforement principle and sentence clustering. In Proceedings of SIGIR 2002, pages 113-120.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning semantic lexicons using graph mutual reinforcement based bootstrapping",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wu",
"middle": [],
"last": "Lide",
"suffix": ""
}
],
"year": 2008,
"venue": "Acta Automatica Sinica",
"volume": "34",
"issue": "10",
"pages": "1257--1261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Zhang, Xipeng Qiu, Xuanjing Huang, Wu Lide. 2008. Learning semantic lexicons using graph mutual reinforcement based bootstrapping. Acta Automatica Sinica, 34(10), pages 1257-1261.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Syntactic Processing Using the Generalized Perceptron and Beam Search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "1",
"pages": "105--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang, Stephen Clark. 2011. Syntactic Processing Using the Generalized Perceptron and Beam Search. Computational Linguistics, 37(1), pages 105-150.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Partial-Tree Linearization: Generalized Word Ordering for Text Synthesis",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCAI 2013",
"volume": "",
"issue": "",
"pages": "2232--2238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang. 2013. Partial-Tree Linearization: Generalized Word Ordering for Text Synthesis. In Proceedings of IJCAI 2013, pages 2232-2238.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "{e 1 , \u2022 \u2022 \u2022 , e n } and m lexical chains: {l 1 , \u2022 \u2022 \u2022 , l m } in the bipartite graph G bi . Their scores are represented by sal(e) = {sal(e 1 ), \u2022 \u2022 \u2022 , sal(e n )} and sal(l) = {sal(l 1 ), \u2022 \u2022 \u2022 , sal(l m )}, respectively.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Word graph generated from candidates and a possible compression path.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Results using different fluency weights. this parameter 8 .",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Texts</td><td>Candidate Extraction</td></tr><tr><td>Phrases</td><td>Events</td><td>Sentences</td></tr><tr><td/><td colspan=\"2\">Candidate Ranking</td></tr><tr><td colspan=\"3\">Candidate #1 Multi-Sentence Compression</td></tr><tr><td/><td>Headline</td><td>Headline Generation</td></tr><tr><td>;</td><td/><td/></tr></table>",
"text": "... Candidate #i ... Candidate #K"
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Performance comparison for automatic evaluation. The mark \u2021 denotes that the result is significantly better with a p-value below 0.01."
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Comparison of headlines generated by the different methods."
}
}
}
}