{ "paper_id": "P18-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:39:31.619760Z" }, "title": "Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks", "authors": [ { "first": "Aishwarya", "middle": [], "last": "Jadhav", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Science Bangalore", "location": { "country": "India" } }, "email": "aishwaryaj@iisc.ac.in" }, { "first": "Vaibhav", "middle": [], "last": "Rajan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "vaibhav.rajan@nus.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a new neural sequence-to-sequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries, comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET, which models the interaction of key words and salient sentences using a new two-level pointer-network-based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large-scale benchmark corpora demonstrate the efficacy of SWAP-NET, which outperforms state-of-the-art extractive summarizers.", "pdf_parse": { "paper_id": "P18-1014", "_pdf_hash": "", "abstract": [ { "text": "We present a new neural sequence-to-sequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries, comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET, which models the interaction of key words and salient sentences using a new two-level pointer-network-based architecture. 
SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large-scale benchmark corpora demonstrate the efficacy of SWAP-NET, which outperforms state-of-the-art extractive summarizers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic summarization aims to shorten a text document while maintaining the salient information of the original text. The practical need for such systems is growing with the rapid and continuous increase in textual information sources in multiple domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Summarization tools can be broadly classified into two categories: extractive and abstractive. Extractive summarization selects parts of the input document to create its summary, while abstractive summarization generates summaries that may contain words or phrases not present in the input document. Abstractive summarization is clearly harder, as methods must address factual and grammatical errors that may be introduced, as well as the problem of utilizing external knowledge sources for paraphrasing or generalization. Extractive summarizers obviate the need to solve these problems by selecting the most salient textual units (usually sentences) from the input documents. As a result, they generate summaries that are grammatically and semantically more accurate than those from abstractive methods. While extractive summaries may have problems such as incorrect or unclear referring expressions or lack of coherence, they are computationally simpler and more efficient to generate. 
Indeed, state-of-the-art extractive summarizers are comparable to, and often better than, competitive abstractive summarizers in performance (see (Nallapati et al., 2017) for a recent empirical comparison).", "cite_spans": [ { "start": 1100, "end": 1124, "text": "(Nallapati et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Classical approaches to extractive summarization have relied on human-engineered features from the text that are used to score sentences in the input document and select the highest-scoring sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These include graph-based and constraint-optimization-based approaches as well as classifier-based methods. A review of these approaches can be found in Nenkova et al. (2011). Some of these methods generate summaries from multiple documents. In this paper, we focus on single-document summarization.", "cite_spans": [ { "start": 146, "end": 167, "text": "Nenkova et al. (2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Modern approaches that show the best performance are based on end-to-end deep learning models that do not require human-crafted features. Neural models have tremendously improved performance in several difficult problems in NLP such as machine translation (Chen et al., 2017) and question answering (Hao et al., 2017). 
Deep models with thousands of parameters require large labeled datasets; for summarization, this hurdle of labeled data was surmounted by Cheng and Lapata (2016) through the creation of a labeled dataset of news stories from CNN and Daily Mail, consisting of around 280,000 documents with human-generated summaries.", "cite_spans": [ { "start": 256, "end": 275, "text": "(Chen et al., 2017)", "ref_id": "BIBREF1" }, { "start": 299, "end": 317, "text": "(Hao et al., 2017)", "ref_id": "BIBREF8" }, { "start": 461, "end": 484, "text": "Cheng and Lapata (2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recurrent neural networks with an encoder-decoder architecture (Sutskever et al., 2014) have been successful in a variety of NLP tasks, where an encoder obtains representations of input sequences and a decoder generates target sequences. Attention mechanisms (Bahdanau et al., 2015) are used to model the effects of different loci in the input sequence during decoding. Pointer networks (Vinyals et al., 2015) use this mechanism to obtain target sequences wherein each decoding step points to an element of the input sequence. This pointing ability has been effectively utilized by state-of-the-art extractive and abstractive summarizers (Cheng and Lapata, 2016; See et al., 2017).", "cite_spans": [ { "start": 59, "end": 83, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF25" }, { "start": 254, "end": 276, "text": "(Bahdanau et al., 2015", "ref_id": "BIBREF0" }, { "start": 383, "end": 405, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF27" }, { "start": 642, "end": 666, "text": "(Cheng and Lapata, 2016;", "ref_id": "BIBREF2" }, { "start": 667, "end": 684, "text": "See et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we design SWAP-NET, a new deep learning model for extractive summarization. 
Similar to previous models, we use an encoder-decoder architecture with an attention mechanism to select important sentences. Our key contribution is an architecture that utilizes key words in the selection process. Salient sentences of a document, which are useful in summaries, often contain key words, and, to our knowledge, none of the previous models has explicitly modeled this interaction. We model this interaction through a two-level encoder and decoder, one for words and the other for sentences. An attention-based mechanism, similar to that of Pointer Networks, is used to learn important words and sentences from labeled data. A switch mechanism is used to select between words and sentences during decoding, and the final summary is generated using a combination of the selected sentences and words. We demonstrate the efficacy of our model on the CNN/Daily Mail corpus, where it outperforms state-of-the-art extractive summarizers. Our experiments also suggest that the semantic redundancy in SWAP-NET-generated summaries is comparable to that of human-generated summaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let D denote an input document comprising a sequence of N sentences: s 1 , . . . , s N . Ignoring sentence boundaries, let w 1 , . . . , w n be the sequence of n words in document D. An extractive summary aims to obtain a subset of the input sentences that forms a salient summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "2" }, { "text": "We use the interaction between words and sentences in a document to predict important words and sentences. Let the target sequence of indices of important words and sentences be V = v 1 , . . . , v m , where each index v j can point to either a sentence or a word in an input document. 
We design a supervised sequence-to-sequence recurrent neural network model, SWAP-NET, that uses these target sequences (of sentences and words) to learn salient sentences and key words. Our objective is to find SWAP-NET model parameters M that maximize the probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "2" }, { "text": "p(V |M, D) = \u220f j p(v j |v 1 , . . . , v j\u22121 , M, D) = \u220f j p(v j |v