{
"paper_id": "P18-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:40:06.622166Z"
},
"title": "A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss",
"authors": [
{
"first": "Wan-Ting",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": "hsuwanting@gapp.nthu.edu.tw"
},
{
"first": "Chieh-Kai",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": ""
},
{
"first": "Ming-Ying",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": ""
},
{
"first": "Kerui",
"middle": [],
"last": "Min",
"suffix": "",
"affiliation": {},
"email": "minkerui@cmcm.com"
},
{
"first": "Jing",
"middle": [],
"last": "Tang",
"suffix": "",
"affiliation": {},
"email": "tangjing@cmcm.com"
},
{
"first": "Min",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": "sunmin@ee.nthu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-theart ROUGE scores while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation. Original Article: McDonald's says...... The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which Mc-Donald's said is generally used as a sugar to increase browning or as a carrier for seasoning. Jessica Foust, director of culinary innovation at McDonald's, said the changes were made because customers said they want 'simple, clean ingredients' they are familiar with...... And Panera Bread has said it plans to purge artificial colors, flavors and preservatives from its food by 2016...... Extractive Approach: The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. 
stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which Mc-Donald's said is generally used as a sugar to increase browning or as a carrier for seasoning. Abstractive Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week. The company says the changes were made because customers said they want 'simple, clean ingredients' they are familiar with. McDonald's said it plans to purge artificial colors, flavors and preservatives from its food by 2016. Unified Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates. The new recipe also does not use maltodextrin, which McDonald's said is generally used as a sugar to increase browning or as a carrier for seasoning.",
"pdf_parse": {
"paper_id": "P18-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-theart ROUGE scores while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation. Original Article: McDonald's says...... The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which Mc-Donald's said is generally used as a sugar to increase browning or as a carrier for seasoning. Jessica Foust, director of culinary innovation at McDonald's, said the changes were made because customers said they want 'simple, clean ingredients' they are familiar with...... And Panera Bread has said it plans to purge artificial colors, flavors and preservatives from its food by 2016...... Extractive Approach: The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. 
stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which Mc-Donald's said is generally used as a sugar to increase browning or as a carrier for seasoning. Abstractive Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week. The company says the changes were made because customers said they want 'simple, clean ingredients' they are familiar with. McDonald's said it plans to purge artificial colors, flavors and preservatives from its food by 2016. Unified Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates. The new recipe also does not use maltodextrin, which McDonald's said is generally used as a sugar to increase browning or as a carrier for seasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points. The ability to condense text information can aid many applications such as creating news digests, presenting search results, and generating reports. There are mainly two types of approaches: extractive and abstractive. Extractive approaches assemble summaries directly from the source text typically selecting one whole sentence at a time. In contrast, abstractive approaches can generate novel words and phrases not copied from the source text. Figure 1 : Comparison of extractive, abstractive, and our unified summaries on a news article. The extractive model picks most important but incoherent or not concise (see blue bold font) sentences. The abstractive summary is readable, concise but still loses or mistakes some facts (see red italics font). The final summary rewritten from fragments (see underline font) has the advantages from both extractive (importance) and abstractive advantage (coherence (see green bold font)).",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hence, abstractive summaries can be more coherent and concise than extractive summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extractive approaches are typically simpler. They output the probability of each sentence to be selected into the summary. Many earlier works on summarization (Cheng and Lapata, 2016; Nallapati et al., 2016a Nallapati et al., , 2017 Narayan et al., 2017; Yasunaga et al., 2017) focus on extractive summarization. Among them, Nallapati et al. (2017) have achieved high ROUGE scores. On the other hand, abstractive approaches (Nallapati et al., 2016b; See et al., 2017; Paulus et al., 2017; Fan et al., 2017; typically involve sophisticated mechanism in order to paraphrase, generate unseen words in the source text, or even incorporate external knowledge. Neural networks (Nallapati et al., 2017; See et al., 2017) based on the attentional encoder-decoder model (Bahdanau et al., 2014) were able to generate abstractive summaries with high ROUGE scores but suffer from inaccurately reproducing factual details and an inability to deal with outof-vocabulary (OOV) words. Recently, See et al. (2017) propose a pointer-generator model which has the abilities to copy words from source text as well as generate unseen words. Despite recent progress in abstractive summarization, extractive approaches (Nallapati et al., 2017; Yasunaga et al., 2017) and lead-3 baseline (i.e., selecting the first 3 sentences) still achieve strong performance in ROUGE scores.",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "(Cheng and Lapata, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 184,
"end": 207,
"text": "Nallapati et al., 2016a",
"ref_id": "BIBREF14"
},
{
"start": 208,
"end": 232,
"text": "Nallapati et al., , 2017",
"ref_id": "BIBREF13"
},
{
"start": 233,
"end": 254,
"text": "Narayan et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 255,
"end": 277,
"text": "Yasunaga et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 325,
"end": 348,
"text": "Nallapati et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 424,
"end": 449,
"text": "(Nallapati et al., 2016b;",
"ref_id": "BIBREF15"
},
{
"start": 450,
"end": 467,
"text": "See et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 468,
"end": 488,
"text": "Paulus et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 489,
"end": 506,
"text": "Fan et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 671,
"end": 695,
"text": "(Nallapati et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 696,
"end": 713,
"text": "See et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 761,
"end": 784,
"text": "(Bahdanau et al., 2014)",
"ref_id": null
},
{
"start": 979,
"end": 996,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 1196,
"end": 1220,
"text": "(Nallapati et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 1221,
"end": 1243,
"text": "Yasunaga et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose to explicitly take advantage of the strength of state-of-the-art extractive and abstractive summarization and introduced the following unified model. Firstly, we treat the probability output of each sentence from the extractive model (Nallapati et al., 2017) as sentence-level attention. Then, we modulate the word-level dynamic attention from the abstractive model (See et al., 2017) with sentence-level attention such that words in less attended sentences are less likely to be generated. In this way, extractive summarization mostly benefits abstractive summarization by mitigating spurious word-level attention. Secondly, we introduce a novel inconsistency loss function to encourage the consistency between two levels of attentions. The loss function can be computed without additional human annotation and has shown to ensure our unified model to be mutually beneficial to both extractive and abstractive summarization. On CNN/Daily Mail dataset, our unified model achieves state-of-theart ROUGE scores and outperforms a strong extractive baseline (i.e., lead-3). Finally, to ensure the quality of our unified model, we conduct a solid human evaluation and confirm that our method significantly outperforms recent state-ofthe-art methods in informativity and readability.",
"cite_spans": [
{
"start": 245,
"end": 269,
"text": "(Nallapati et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 377,
"end": 395,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, our contributions are twofold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a unified model combining sentence-level and word-level attentions to take advantage of both extractive and abstractive summarization approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel inconsistency loss function to ensure our unified model to be mutually beneficial to both extractive and abstractive summarization. The unified model with inconsistency loss achieves the best ROUGE scores on CNN/Daily Mail dataset and outperforms recent state-of-the-art methods in informativity and readability on human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text summarization has been widely studied in recent years. We first introduce the related works of neural-network-based extractive and abstractive summarization. Finally, we introduce a few related works with hierarchical attention mechanism. Extractive summarization. K\u00e5geb\u00e4ck et al. (2014) and Yin and Pei (2015) use neural networks to map sentences into vectors and select sentences based on those vectors. Cheng and Lapata (2016) , Nallapati et al. (2016a) and Nallapati et al. (2017) use recurrent neural networks to read the article and get the representations of the sentences and article to select sentences. Narayan et al. (2017) utilize side information (i.e., image captions and titles) to help the sentence classifier choose sentences. Yasunaga et al. (2017) Figure 2: Our unified model combines the word-level and sentence-level attentions. Inconsistency occurs when word attention is high but sentence attention is low (see red arrow). (Vinyals et al., 2015) into their models to deal with out-of-vocabulary (OOV) words. Chen et al. (2016) and See et al. (2017) restrain their models from attending to the same word to decrease repeated phrases in the generated summary. Paulus et al. (2017) use policy gradient on summarization and state out the fact that high ROUGE scores might still lead to low human evaluation scores. Fan et al. (2017) apply convolutional sequenceto-sequence model and design several new tasks for summarization. achieve high readability score on human evaluation using generative adversarial networks. Hierarchical attention. Attention mechanism was first proposed by Bahdanau et al. 2014. Yang et al. (2016) proposed a hierarchical attention mechanism for document classification. We adopt the method of combining sentence-level and word-level attention in Nallapati et al. (2016b) . However, their sentence attention is dynamic, which means it will be different for each generated word. 
In contrast, our sentence attention is fixed for all generated words. Inspired by the high performance of extractive summarization, we propose to use a fixed sentence attention.",
"cite_spans": [
{
"start": 270,
"end": 292,
"text": "K\u00e5geb\u00e4ck et al. (2014)",
"ref_id": "BIBREF9"
},
{
"start": 297,
"end": 315,
"text": "Yin and Pei (2015)",
"ref_id": "BIBREF25"
},
{
"start": 411,
"end": 434,
"text": "Cheng and Lapata (2016)",
"ref_id": "BIBREF4"
},
{
"start": 437,
"end": 461,
"text": "Nallapati et al. (2016a)",
"ref_id": "BIBREF14"
},
{
"start": 466,
"end": 489,
"text": "Nallapati et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 618,
"end": 639,
"text": "Narayan et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 749,
"end": 771,
"text": "Yasunaga et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 951,
"end": 973,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 1036,
"end": 1054,
"text": "Chen et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 1059,
"end": 1076,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 1186,
"end": 1206,
"text": "Paulus et al. (2017)",
"ref_id": "BIBREF17"
},
{
"start": 1339,
"end": 1356,
"text": "Fan et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 1629,
"end": 1647,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF23"
},
{
"start": 1797,
"end": 1821,
"text": "Nallapati et al. (2016b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our model combines state-of-the-art extractive model (Nallapati et al., 2017) and abstractive model (See et al., 2017) by combining sentencelevel attention from the former and word-level attention from the latter. Furthermore, we design an inconsistency loss to enhance the cooperation between the extractive and abstractive models.",
"cite_spans": [
{
"start": 53,
"end": 77,
"text": "(Nallapati et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 100,
"end": 118,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose a unified model to combine the strength of both state-of-the-art extractor (Nallapati et al., 2017) and abstracter (See et al., 2017) . Before going into details of our model, we first define the tasks of the extractor and abstracter. Problem definition. The input of both extrac-tor and abstracter is a sequence of words w = [w 1 , w 2 , ..., w m , ...], where m is the word index. The sequence of words also forms a sequence of",
"cite_spans": [
{
"start": 126,
"end": 144,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "sentences s = [s 1 , s 2 , ..., s n , ...],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "where n is the sentence index. The m th word is mapped into the n(m) th sentence, where n(\u2022) is the mapping function. The output of the extractor is the sentencelevel attention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "\u03b2 = [\u03b2 1 , \u03b2 2 , ..., \u03b2 n , ...]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": ", where \u03b2 n is the probability of the n th sentence been extracted into the summary. On the other hand, our attention-based abstractor computes word-level attention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "\u03b1 t = \u03b1 t 1 , \u03b1 t 2 , ..., \u03b1 t m , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": ".. dynamically while generating the t th word in the summary. The output of the abstracter is the summary text y = y 1 , y 2 , ..., y t , ... , where y t is t th word in the summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "In the following, we introduce the mechanism to combine sentence-level and word-level attentions in Sec. 3.1. Next, we define the novel inconsistency loss that ensures extractor and abstracter to be mutually beneficial in Sec. 3.2. We also give the details of our extractor in Sec. 3.3 and our abstracter in Sec. 3.4. Finally, our training procedure is described in Sec. 3.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Unified Model",
"sec_num": "3"
},
{
"text": "Pieces of evidence (e.g., Vaswani et al. (2017)) show that attention mechanism is very important for NLP tasks. Hence, we propose to explicitly combine the sentence-level \u03b2 n and word-level \u03b1 t m attentions by simple scalar multiplication and renormalization. The updated word attention\u03b1 t",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "Vaswani et al. (2017))",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Attentions",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m is\u03b1 t m = \u03b1 t m \u00d7 \u03b2 n(m) m \u03b1 t m \u00d7 \u03b2 n(m) .",
"eq_num": "(1)"
}
],
"section": "Combining Attentions",
"sec_num": "3.1"
},
{
"text": "The multiplication ensures that only when both word-level \u03b1 t m and sentence-level \u03b2 n attentions are high, the updated word attention\u03b1 t m can be high. Since the sentence-level attention \u03b2 n from the extractor already achieves high ROUGE scores, \u03b2 n intuitively modulates the word-level attention \u03b1 t m to mitigate spurious word-level attention such that words in less attended sentences are less likely to be generated (see Fig. 2 ). As highlighted in Sec. 3.4, the word-level attention\u03b1 t m significantly affects the decoding process of the abstracter. Hence, an updated word-level attention is our key to improve abstractive summarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 432,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combining Attentions",
"sec_num": "3.1"
},
{
"text": "Instead of only leveraging the complementary nature between sentence-level and word-level attentions, we would like to encourage these two-levels of attentions to be mostly consistent to each other during training as an intrinsic learning target for free (i.e., without additional human annotation). Explicitly, we would like the sentence-level attention to be high when the word-level attention is high. Hence, we design the following inconsistency loss,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistency Loss",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L inc = \u2212 1 T T t=1 log( 1 |K| m\u2208K \u03b1 t m \u00d7 \u03b2 n(m) ),",
"eq_num": "(2)"
}
],
"section": "Inconsistency Loss",
"sec_num": "3.2"
},
{
"text": "where K is the set of top K attended words and T is the number of words in the summary. This implicitly encourages the distribution of the wordlevel attentions to be sharp and sentence-level attention to be high. To avoid the degenerated solution for the distribution of word attention to be one-hot and sentence attention to be high, we include the original loss functions for training the extractor ( L ext in Sec. 3.3) and abstracter (L abs and L cov in Sec. 3.4). Note that Eq. 1 is the only part that the extractor is interacting with the abstracter. Our proposed inconsistency loss facilitates our end-to-end trained unified model to be mutually beneficial to both the extractor and abstracter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistency Loss",
"sec_num": "3.2"
},
{
"text": "Our extractor is inspired by Nallapati et al. (2017) . The main difference is that our extractor does not need to obtain the final summary. It mainly needs to obtain a short list of important sentences with a high recall to further facilitate the abstractor. We first introduce the network architecture and the loss function. Finally, we define our ground truth important sentences to encourage high recall. Architecture. The model consists of a hierarchical bidirectional GRU which extracts sentence representations and a classification layer for predicting the sentence-level attention \u03b2 n for each sentence (see Fig. 3 ).",
"cite_spans": [
{
"start": 29,
"end": 52,
"text": "Nallapati et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 615,
"end": 621,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Extractor",
"sec_num": "3.3"
},
{
"text": "Extractor loss. The following sigmoid cross entropy loss is used,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extractor",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L ext = \u2212 1 N N n=1 (g n log \u03b2 n + (1 \u2212 g n ) log(1 \u2212 \u03b2 n )),",
"eq_num": "(3)"
}
],
"section": "Extractor",
"sec_num": "3.3"
},
{
"text": "where g n \u2208 {0, 1} is the ground-truth label for the n th sentence and N is the number of sentences. When g n = 1, it indicates that the n th sentence should be attended to facilitate abstractive summarization. Ground-truth label. The goal of our extractor is to extract sentences with high informativity, which means the extracted sentences should contain information that is needed to generate an abstractive summary as much as possible. To obtain the ground-truth labels g = {g n } n , first, we measure the informativity of each sentence s n in the article by computing the ROUGE-L recall score (Lin, 2004) between the sentence s n and the reference abstractive summary\u0177 = {\u0177 t } t . Second, we sort the sentences by their informativity and select the sentence in the order of high to low informativity. We add one sentence at a time if the new sentence can increase the informativity of all the selected sentences. Finally, we obtain the ground-truth labels g and train our extractor by minimizing Eq. 3. Note that our method is different from Nallapati et al. (2017) who aim to extract a final summary for an article so they use ROUGE F-1 score to select ground-truth sentences; while we focus on high informativity, hence, we use ROUGE recall score to obtain as much information as possible with respect to the reference summary\u0177.",
"cite_spans": [
{
"start": 599,
"end": 610,
"text": "(Lin, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 1049,
"end": 1072,
"text": "Nallapati et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extractor",
"sec_num": "3.3"
},
{
"text": "The second part of our model is an abstracter that reads the article; then, generate a summary Figure 4 : Decoding mechanism in the abstracter.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "In the decoder step t, our updated word attention\u03b1 t is used to generate context vector h * (\u03b1 t ). Hence, it updates the final word distribution P f inal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "word-by-word. We use the pointer-generator network proposed by See et al. (2017) and combine it with the extractor by combining sentence-level and word-level attentions (Sec. 3.1).",
"cite_spans": [
{
"start": 63,
"end": 80,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "Pointer-generator network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "The pointergenerator network (See et al., 2017 ) is a specially designed sequence-to-sequence attentional model that can generate the summary by copying words in the article or generating words from a fixed vocabulary at the same time. The model contains a bidirectional LSTM which serves as an encoder to encode the input words w and a unidirectional LSTM which serves as a decoder to generate the summary y. For details of the network architecture, please refer to See et al. (2017) . In the following, we describe how the updated word attention\u03b1 t affects the decoding process. Notations. We first define some notations. h e m is the encoder hidden state for the m th word. h d t is the decoder hidden state in step t. h * (\u03b1 t ) = M m\u03b1 t m \u00d7 h e m is the context vector which is a function of the updated word attention\u03b1 t . P vocab (h * (\u03b1 t )) is the probability distribution over the fixed vocabulary before applying the copying mechanism.",
"cite_spans": [
{
"start": 29,
"end": 46,
"text": "(See et al., 2017",
"ref_id": "BIBREF20"
},
{
"start": 467,
"end": 484,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "P vocab (h * (\u03b1 t )) (4) = softmax(W 2 (W 1 [h d t , h * (\u03b1 t )] + b 1 ) + b 2 ), where W 1 , W 2 , b 1 and b 2 are learnable parame- ters. P vocab = {P vocab w } w where P vocab w (h * (\u03b1 t ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "is the probability of word w being decoded. p gen (h * (\u03b1 t )) \u2208 [0, 1] is the generating probability (see Eq.8 in See et al. (2017) ) and 1 \u2212 p gen (h * (\u03b1 t )) is the copying probability. Final word distribution. P f inal w (\u03b1 t ) is the final probability of word w being decoded (i.e., y t = w). It is related to the updated word attention\u03b1 t as follows (see Fig. 4 ),",
"cite_spans": [
{
"start": 115,
"end": 132,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 362,
"end": 368,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "P f inal w (\u03b1 t ) = p gen (h * (\u03b1 t ))P vocab w (h * (\u03b1 t )) (5) + (1 \u2212 p gen (h * (\u03b1 t ))) m:wm=w\u03b1 t m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "Note that P f inal = {P f inal w } w is the probability distribution over the fixed vocabulary and out-ofvocabulary (OOV) words. Hence, OOV words can be decoded. Most importantly, it is clear from Eq. 5 that P f inal w (\u03b1 t ) is a function of the updated word attention\u03b1 t . Finally, we train the abstracter to minimize the negative log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L abs = \u2212 1 T T t=1 log P f inal y t (\u03b1 t ) ,",
"eq_num": "(6)"
}
],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "where\u0177 t is the t th token in the reference abstractive summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "Coverage mechanism. We also apply coverage mechanism (See et al., 2017) to prevent the abstracter from repeatedly attending to the same place. In each decoder step t, we calculate the coverage vector c t = t\u22121 t =0\u03b1",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "t which indicates so far how much attention has been paid to every input word. The coverage vector c t will be used to calculate word attention\u03b1 t (see Eq.11 in See et al. (2017)). Moreover, coverage loss L cov is calculated to directly penalize the repetition in updated word attention\u03b1 t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L cov = 1 T T t=1 M m=1 min(\u03b1 t m , c t m ) .",
"eq_num": "(7)"
}
],
"section": "Abstracter",
"sec_num": "3.4"
},
{
"text": "The objective function for training the abstracter with coverage mechanism is the weighted sum of negative log-likelihood and coverage loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstracter",
"sec_num": "3.4"
},
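The coverage computation can be sketched as follows: the coverage vector starts at zero, each step's loss term penalizes the overlap (element-wise minimum) between the current attention and the accumulated coverage, and the attention is then added to the coverage. This is an illustrative sketch (names and shapes are our own assumptions):

```python
import numpy as np

def coverage_loss(attentions):
    """Sketch of Eq. 7 for one summary.

    attentions: (T, M) array; row t is the updated word attention
                at decoder step t.
    """
    T, M = attentions.shape
    coverage = np.zeros(M)          # c^0 = 0: nothing attended yet
    total = 0.0
    for t in range(T):
        # Penalize attention that revisits already-covered positions.
        total += np.minimum(attentions[t], coverage).sum()
        coverage += attentions[t]   # c^{t+1} = c^t + updated attention
    return total / T
```

Attending twice to the same position yields a positive loss, while attention spread over disjoint positions yields zero.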
{
"text": "We first pre-train the extractor by minimizing L ext in Eq. 3 and the abstracter by minimizing L abs and L cov in Eq. 6 and Eq. 7, respectively. When pre-training, the abstracter takes ground-truth extracted sentences (i.e., sentences with g n = 1) as input. To combine the extractor and abstracter, we proposed two training settings : (1) two-stages training and (2) end-to-end training. Two-stages training. In this setting, we view the sentence-level attention \u03b2 from the pre-trained extractor as hard attention. The extractor becomes a classifier to select sentences with high attention (i.e., \u03b2 n > threshold). We simply combine the extractor and abstracter by feeding the extracted sentences to the abstracter. Note that we finetune the abstracter since the input text becomes extractive summary which is obtained from the extractor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "3.5"
},
{
"text": "End-to-end training. For end-to-end training, the sentence-level attention \u03b2 is soft attention and will be combined with the word-level attention \u03b1 t as described in Sec. 3.1. We end-to-end train the extractor and abstracter by minimizing four loss functions: L ext , L abs , L cov , as well as L inc in Eq. 2. The final loss is as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L e2e = \u03bb 1 L ext + \u03bb 2 L abs + \u03bb 3 L cov + \u03bb 4 L inc ,",
"eq_num": "(8)"
}
],
"section": "Training Procedure",
"sec_num": "3.5"
},
{
"text": "where \u03bb 1 , \u03bb 2 , \u03bb 3 , \u03bb 4 are hyper-parameters. In our experiment, we give L ext a bigger weight (e.g., \u03bb 1 = 5) when end-to-end training with L inc since we found that L inc is relatively large such that the extractor tends to ignore L ext .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "3.5"
},
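The combined objective of Eq. 8 is a plain weighted sum; the default weights below follow the setting reported in the text (\u03bb_1 = 5 so that the relatively large L_inc does not drown out L_ext). A minimal sketch:

```python
def end_to_end_loss(l_ext, l_abs, l_cov, l_inc, lambdas=(5.0, 1.0, 1.0, 1.0)):
    """Sketch of Eq. 8: weighted sum of the four training losses."""
    l1, l2, l3, l4 = lambdas
    return l1 * l_ext + l2 * l_abs + l3 * l_cov + l4 * l_inc
```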
{
"text": "We introduce the dataset and implementation details of our method evaluated in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluate our models on the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016b; See et al., 2017) which contains news stories in CNN and Daily Mail websites. Each article in this dataset is paired with one humanwritten multi-sentence summary. This dataset has two versions: anonymized and non-anonymized. The former contains the news stories with all the named entities replaced by special tokens (e.g., @entity2); while the latter contains the raw text of each news story. We follow See et al. (2017) and obtain the non-anonymized version of this dataset which has 287,113 training pairs, 13,368 validation pairs and 11,490 test pairs.",
"cite_spans": [
{
"start": 53,
"end": 75,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 76,
"end": 100,
"text": "Nallapati et al., 2016b;",
"ref_id": "BIBREF15"
},
{
"start": 101,
"end": 118,
"text": "See et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 505,
"end": 522,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We train our extractor and abstracter with 128dimension word embeddings and set the vocabulary size to 50k for both source and target text. We follow Nallapati et al. (2017) and See et al. (2017) and set the hidden dimension to 200 and 256 for the extractor and abstracter, respectively. We use Adagrad optimizer (Duchi et al., 2011) and apply early stopping based on the validation set. In the testing phase, we limit the length of the summary to 120. Pre-training. We use learning rate 0.15 when pretraining the extractor and abstracter. For the extractor, we limit both the maximum number of sentences per article and the maximum number of tokens per sentence to 50 and train the model for 27k iterations with the batch size of 64. For the abstracter, it takes ground-truth extracted sentences (i.e., sentences with g n = 1) as input. We limit the length of the source text to 400 and the length of the summary to 100 and use the batch size of 16. We train the abstracter without coverage mechanism for 88k iterations and continue training for 1k iterations with coverage mechanism (L abs : L cov = 1 : 1). Two-stages training. The abstracter takes extracted sentences with \u03b2 n > 0.5, where \u03b2 is obtained from the pre-trained extractor, as input during two-stages training. We finetune the abstracter for 10k iterations. End-to-end training. During end-to-end training, we will minimize four loss functions (Eq. 8) with \u03bb 1 = 5 and \u03bb 2 = \u03bb 3 = \u03bb 4 = 1. We set K to 3 for computing L inc . Due to the limitation of the memory, we reduce the batch size to 8 and thus use a smaller learning rate 0.01 for stability. The abstracter here reads the whole article. Hence, we increase the maximum length of source text to 600. We end-to-end train the model for 50k iterations.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "Nallapati et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 178,
"end": 195,
"text": "See et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 313,
"end": 333,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "Our unified model not only generates an abstractive summary but also extracts the important sentences in an article. Our goal is that both of the two types of outputs can help people to read and understand an article faster. Hence, in this section, we evaluate the results of our extractor in Sec. 5.1 and unified model in Sec. 5.2. Furthermore, in Sec. 5.3, we perform human evaluation and show that our model can provide a better abstractive summary than other baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To evaluate whether our extractor obtains enough information for the abstracter, we use full-length ROUGE recall scores 1 between the extracted sentences and reference abstractive summary. High ROUGE recall scores can be obtained if the extracted sentences include more words or sequences overlapping with the reference abstractive summary. For each article, we select sentences with the sentence probabilities \u03b2 greater than 0.5. We show the results of the ground-truth sentence labels (Sec. 3.3) and our models on the (See et al., 2017) 40.34 17.70 36.57 Table 2 : ROUGE F-1 scores of the generated abstractive summaries on the CNN/Daily Mail test set. Our two-stages model outperforms pointer-generator model on ROUGE-1 and ROUGE-2. In addition, our model trained end-to-end with inconsistency loss exceeds the lead-3 baseline. All our ROUGE scores have a 95% confidence interval with at most \u00b10.24. ' * ' indicates the model is trained and evaluated on the anonymized dataset and thus is not strictly comparable with ours.",
"cite_spans": [
{
"start": 520,
"end": 538,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Extracted Sentences",
"sec_num": "5.1"
},
{
"text": "test set of the CNN/Daily Mail dataset in Table 1 . Note that the ground-truth extracted sentences can't get ROUGE recall scores of 100 because reference summary is abstractive and may contain some words and sequences that are not in the article. Our extractor performs the best when end-toend trained with inconsistency loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Extracted Sentences",
"sec_num": "5.1"
},
{
"text": "We use full-length ROUGE-1, ROUGE-2 and ROUGE-L F-1 scores to evaluate the generated summaries. We compare our models (two-stage and end-to-end) with state-of-the-art abstractive summarization models (Nallapati et al., 2016b; Paulus et al., 2017; See et al., 2017; ) and a strong lead-3 baseline which directly uses the first three article sentences as the summary. Due to the writing style of news articles, the most important information is often written at the beginning of an article which makes lead-3 a strong baseline. The results of ROUGE F-1 scores are shown in Table 2 . We prove that with help of the extractor, our unified model can outperform pointer-generator (the third row in Table 2) even with two-stages training (the fifth row in Table 2 ). After end-to-end training without inconsistency loss, our method already achieves better ROUGE scores by cooperating with each other. Moreover, our model end-to-end trained with inconsistency loss achieves state-of-the-art ROUGE scores and exceeds lead-3 baseline. In order to quantify the effect of inconsistency loss, we design a metric -inconsistency rate R inc -to measure the inconsistency for each generated summary. For each decoder step t, if the word with maximum attention belongs to a sentence with low attention (i.e., \u03b2 n(argmax(\u03b1 t )) < mean(\u03b2)), we define this step as an inconsistent step t inc . The inconsistency rate R inc is then defined as the percentage of the inconsistent steps in the summary.",
"cite_spans": [
{
"start": 200,
"end": 225,
"text": "(Nallapati et al., 2016b;",
"ref_id": "BIBREF15"
},
{
"start": 226,
"end": 246,
"text": "Paulus et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 247,
"end": 264,
"text": "See et al., 2017;",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 571,
"end": 578,
"text": "Table 2",
"ref_id": null
},
{
"start": 692,
"end": 700,
"text": "Table 2)",
"ref_id": null
},
{
"start": 749,
"end": 756,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Abstractive Summarization",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R inc = Count(t inc ) T ,",
"eq_num": "(9)"
}
],
"section": "Results of Abstractive Summarization",
"sec_num": "5.2"
},
{
"text": "where T is the length of the summary. The average inconsistency rates on test set are shown in Table 4 . Our inconsistency loss significantly decrease R inc from about 20% to 4%. An example of inconsistency improvement is shown in Fig. 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 4",
"ref_id": null
},
{
"start": 231,
"end": 237,
"text": "Fig. 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Abstractive Summarization",
"sec_num": "5.2"
},
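The inconsistency rate of Eq. 9 can be sketched directly from its definition: a decoder step is inconsistent when the most-attended word lies in a sentence whose attention is below the mean sentence attention. An illustrative sketch (the names and the word-to-sentence mapping are our own assumptions, not the authors' code):

```python
import numpy as np

def inconsistency_rate(word_attentions, sent_attention, word_to_sent):
    """Sketch of Eq. 9 for one generated summary.

    word_attentions: (T, M) word attention at each decoder step
    sent_attention:  (N,) sentence-level attention beta
    word_to_sent:    (M,) index of the sentence containing each word
    """
    threshold = sent_attention.mean()
    inconsistent = 0
    for attn in word_attentions:
        # Inconsistent step: argmax word is in a low-attention sentence.
        if sent_attention[word_to_sent[attn.argmax()]] < threshold:
            inconsistent += 1
    return inconsistent / len(word_attentions)
```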
{
"text": "Method informativity conciseness readability DeepRL (Paulus et al., 2017) 3.23 2.97 2.85 pointer-generator (See et al., 2017) 3.18 3.36 3.47 GAN 3.22 3.52 3.51 Ours 3.58 3.40 3.70 reference 3.43 3.61 3.62 Table 4 : Inconsistency rate of our end-to-end trained model with and without inconsistency loss.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Paulus et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 107,
"end": 125,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Abstractive Summarization",
"sec_num": "5.2"
},
{
"text": "If that was a tornado, it was one monster of one. Luckily, so far it looks like no one was hurt. With tornadoes touching down near Dallas on Sunday, Ryan Shepard snapped a photo of a black cloud formation reaching down to the ground. He said it was a tornado. It wouldn't be an exaggeration to say it looked half a mile wide. More like a mile, said Jamie Moore, head of emergency management in Johnson County, Texas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Without inconsistency loss:",
"sec_num": null
},
{
"text": "It could have been one the National Weather Service warned about in a tweet as severe thunderstorms drenched the area, causing street flooding. (...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Without inconsistency loss:",
"sec_num": null
},
{
"text": "If that was a tornado, it was one monster of one. Luckily, so far it looks like no one was hurt. With tornadoes touching down near Dallas on Sunday, Ryan Shepard snapped a photo of a black cloud formation reaching down to the ground. He said it was a tornado. It wouldn't be an exaggeration to say it looked half a mile wide. More like a mile, said Jamie Moore, head of emergency management in Johnson County, Texas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "With inconsistency loss:",
"sec_num": null
},
{
"text": "It could have been one the National Weather Service warned about in a tweet as severe thunderstorms drenched the area, causing street flooding. (...) Figure 5 : Visualizing the consistency between sentence and word attentions on the original article. We highlight word (bold font) and sentence (underline font) attentions. We compare our methods trained with and without inconsistency loss. Inconsistent fragments (see red bold font) occur when trained without the inconsistency loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "With inconsistency loss:",
"sec_num": null
},
{
"text": "We perform human evaluation on Amazon Mechanical Turk (MTurk) 2 to evaluate the informativity, conciseness and readability of the summaries. We compare our best model (end2end with inconsistency loss) with pointer-generator (See et al., 2017) , generative adversarial network ) and deep reinforcement model (Paulus et al., 2017) . For these three models, we use the test set outputs provided by the authors 3 .",
"cite_spans": [
{
"start": 224,
"end": 242,
"text": "(See et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 307,
"end": 328,
"text": "(Paulus et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "2 https://www.mturk.com/ 3 https://github.com/abisee/ pointer-generator and https://likicode.com for the first two. For DeepRL, we asked through email.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "We randomly pick 100 examples in the test set. All generated summaries are re-capitalized and de-tokenized. Since Paulus et al. (2017) trained their model on anonymized data, we also recover the anonymized entities and numbers of their outputs.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "Paulus et al. (2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "We show the article and 6 summaries (reference summary, 4 generated summaries and a random summary) to each human evaluator. The random summary is a reference summary randomly picked from other articles and is used as a trap. We show the instructions of three different aspects as: (1) Informativity: how well does the summary capture the important parts of the article? (2) Conciseness: is the summary clear enough to explain everything without being redundant? (3) Readability: how well-written (fluent and grammatical) the summary is? The user interface of our human evaluation is shown in the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "We ask the human evaluator to evaluate each summary by scoring the three aspects with 1 to 5 score (higher the better). We reject all the evaluations that score the informativity of the random summary as 3, 4 and 5. By using this trap mechanism, we can ensure a much better quality of our human evaluation. For each example, we first ask 5 human evaluators to evaluate. However, for those articles that are too long, which are always skipped by the evaluators, it is hard to collect 5 reliable evaluations. Hence, we collect at least 3 evaluations for every example. For each summary, we average the scores over different human evaluators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "The results are shown in Table 3 . The reference summaries get the best score on conciseness since the recent abstractive models tend to copy sentences from the input articles. However, our model learns well to select important information and form complete sentences so we even get slightly better scores on informativity and readability than the reference summaries. We show a typical example of our model comparing with other state-of-Original article (truncated): A chameleon balances carefully on a branch, waiting calmly for its prey... except that if you look closely, you will see that this picture is not all that it seems. For the 'creature' poised to pounce is not a colourful species of lizard but something altogether more human. Featuring two carefully painted female models, it is a clever piece of sculpture designed to create an amazing illusion. It is the work of Italian artist Johannes Stoetter. Scroll down for video. Can you see us? Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. To complete the deception, the models rested on a bench painted to match their skin and held the green branch in the air beneath them. Stoetter can take weeks to plan one of his pieces and hours to paint it. Speaking about The Chameleon, he said: 'I worked about four days to design the motif bigger and paint it with colours. The body painting took me about six hours with the help of an assistant. I covered the hair with natural clay to make the heads look bald.' Camouflage job: A few finishing touches are applied to the two naked models to complete the transformation. 
'There are different difficulties on different levels as in every work, but I think that my passion and love to my work is so big, that I figure out a way to deal with difficulties. My main inspirations are nature, my personal life-philosophy, every-day-life and people themselves.' However, the finished result existed only briefly before the models were able to get up and wash the paint off with just a video and some photographs to record it. (...) Figure 6 : Typical Comparison. Our model attended at the most important information (blue bold font) matching well with the reference summary; while other state-of-the-art methods generate repeated or less important information (red italic font).",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2363,
"end": 2371,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "the-art methods in Fig. 6 . More examples (5 using CNN/Daily Mail news articles and 3 using nonnews articles as inputs) are provided in the supplementary material.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 25,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.3"
},
{
"text": "We propose a unified model combining the strength of extractive and abstractive summarization. Most importantly, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. The inconsistency loss enables extractive and abstractive summarization to be mutually beneficial. By end-to-end training of our model, we achieve the best ROUGE-recall and ROUGE while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "All our ROUGE scores are reported by the official ROUGE script. We use the pyrouge package. https://pypi.org/project/pyrouge/0.1.3/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the support from Cheetah Mobile, National Taiwan University, and MOST 107-2634-F-007-007, 106-3114-E-007-004, 107-2633-E-002-001. We thank Yun-Zhu Song for assistance with useful survey and experiment on the task of abstractive summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Completing the deception, models rested on bench painted to match skin. DeepRL: Italian artist Johannes Stoetter has painted female models to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be the work of Italian artist. He has painted nude models and it is a clever piece of sculpture designed to create an amazing illusion. It is work of artist Johannes Stoetter. GAN: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots",
"authors": [],
"year": null,
"venue": "Johannes Stoetter's artwork features two carefully painted female models",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Stoetter's artwork features two carefully painted female models. The 37-year-old has previously transformed models into frogs and parrots. Daubed water-based body paint on naked models to create the effect. Completing the deception, models rested on bench painted to match skin. DeepRL: Italian artist Johannes Stoetter has painted female models to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be the work of Italian artist. He has painted nude models and it is a clever piece of sculpture designed to create an amazing illusion. It is work of artist Johannes Stoetter. GAN: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pointer-generator: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. It is the work of Italian artist Johannes Stoetter. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pointer-generator: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. It is the work of Italian artist Johannes Stoetter. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. Our unified model (with inconsistency loss):",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The 37-year-old has previously transformed his models into frogs and parrots. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio",
"authors": [],
"year": 2014,
"venue": "Proceedings of the 2015 International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Repre- sentations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distraction-based neural networks for modeling documents",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Ar- tificial Intelligence (IJCAI-16).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural summarization by extracting sentences and words",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "484--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484-494.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Controllable abstractive summarization",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05217"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), volume 1, pages 1631-1640.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems, pages 1693- 1701.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extractive summarization using continuous vector space models",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "K\u00e5geb\u00e4ck",
"suffix": ""
},
{
"first": "Olof",
"middle": [],
"last": "Mogren",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
},
{
"first": "Devdatt",
"middle": [],
"last": "Dubhashi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael K\u00e5geb\u00e4ck, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summariza- tion using continuous vector space models. In Pro- ceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 31-39.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generative adversarial network for abstractive text summarization",
"authors": [
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceddings of the 2018 Association for the Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2017. Generative adversarial net- work for abstractive text summarization. In Proced- dings of the 2018 Association for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Language as a latent variable: Discrete generative models for sentence compression",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "319--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sen- tence compression. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 319-328.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceddings of the 2017 Association for the Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3075--3081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of doc- uments. In Proceddings of the 2017 Association for the Advancement of Artificial Intelligence, pages 3075-3081.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Classify or select: Neural architectures for extractive document summarization",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04244"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, and Mingbo Ma. 2016a. Classify or select: Neural architectures for extractive document summarization. arXiv preprint arXiv:1611.04244.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Abstractive text summarization using sequence-tosequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "Dos Santos",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos San- tos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-to- sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natu- ral Language Learning, pages 280-290.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural extractive summarization with side information",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Papasarantopoulos",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Shay B",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04530"
]
},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Nikos Papasarantopoulos, Mirella La- pata, and Shay B Cohen. 2017. Neural extrac- tive summarization with side information. arXiv preprint arXiv:1704.04530.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2018 International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. In Proceedings of the 2018 Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06732"
]
},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073-1083.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural In- formation Processing Systems, pages 2692-2700.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Graph-based neural multi-document summarization",
"authors": [
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kshitijh",
"middle": [],
"last": "Meelu",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Krishnan",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "452--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning (CoNLL 2017), pages 452-462.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Optimizing sentence modeling and selection for document summarization",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Yulong",
"middle": [],
"last": "Pei",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1383--1389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Yulong Pei. 2015. Optimizing sen- tence modeling and selection for document summa- rization. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 1383-1389. AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Architecture of the extractor. We treat the sigmoid output of each sentence as sentencelevel attention \u2208 [0, 1]."
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td colspan=\"4\">ROUGE-1 ROUGE-2 ROUGE-L</td></tr><tr><td>pre-trained</td><td/><td>73.50</td><td>35.55</td><td>68.57</td></tr><tr><td>end2end w/o inconsistency loss</td><td/><td>72.97</td><td>35.11</td><td>67.99</td></tr><tr><td>end2end w/ inconsistency loss</td><td/><td>78.40</td><td>39.45</td><td>73.83</td></tr><tr><td>ground-truth labels</td><td/><td>89.23</td><td>49.36</td><td>85.46</td></tr><tr><td>Table 1: Method</td><td/><td colspan=\"3\">ROUGE-1 ROUGE-2 ROUGE-L</td></tr><tr><td>HierAttn (Nallapati et al., 2016b) *</td><td/><td>32.75</td><td>12.21</td><td>29.01</td></tr><tr><td>DeepRL (Paulus et al., 2017) *</td><td/><td>39.87</td><td>15.82</td><td>36.90</td></tr><tr><td>pointer-generator (See et al., 2017)</td><td/><td>39.53</td><td>17.28</td><td>36.38</td></tr><tr><td>GAN (Liu et al., 2017)</td><td/><td>39.92</td><td>17.65</td><td>36.71</td></tr><tr><td>two-stage (ours)</td><td/><td>39.97</td><td>17.43</td><td>36.34</td></tr><tr><td colspan=\"2\">end2end w/o inconsistency loss (ours)</td><td>40.19</td><td>17.67</td><td>36.68</td></tr><tr><td>end2end w/ inconsistency loss (ours)</td><td/><td>40.68</td><td>17.97</td><td>37.13</td></tr><tr><td>lead-3</td><td/><td/><td/></tr></table>",
"num": null,
"text": "ROUGE recall scores of the extracted sentences. pre-trained indicates the extractor trained on the ground-truth labels. end2end indicates the extractor after end-to-end training with the abstracter. Note that ground-truth labels show the upper-bound performance since the reference summary to calculate ROUGE-recall is abstractive. All our ROUGE scores have a 95% confidence interval with at most \u00b10.33."
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td>avg. R inc</td></tr><tr><td>w/o incon. loss</td><td>0.198</td></tr><tr><td>w/ incon. loss</td><td>0.042</td></tr></table>",
"num": null,
"text": "Comparing human evaluation results with state-of-the-art methods."
}
}
}
}