{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:36:28.955829Z"
},
"title": "SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline",
"authors": [
{
"first": "Jiaxin",
"middle": [],
"last": "Ju",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {
"postCode": "3800",
"region": "VIC",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Deakin University",
"location": {
"postCode": "3217",
"region": "VIC",
"country": "Australia"
}
},
"email": "m.liu@deakin.edu.au"
},
{
"first": "Longxiang",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Deakin University",
"location": {
"postCode": "3217",
"region": "VIC",
"country": "Australia"
}
},
"email": "longxiang.gao@deakin.edu.au"
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {
"postCode": "3800",
"region": "VIC",
"country": "Australia"
}
},
"email": "shirui.pan@monash.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Scholarly Document Processing (SDP) workshop is to encourage more efforts on natural language understanding of scientific task. It contains three shared tasks and we participate in the LongSumm shared task. In this paper, we describe our text summarization system, SciSummPip, inspired by SummPip (Zhao et al., 2020) that is an unsupervised text summarization system for multi-document in news domain. Our SciSummPip includes a transformer-based language model SciBERT (Beltagy et al., 2019) for contextual sentence representation, content selection with PageRank (Page et al., 1999), sentence graph construction with both deep and linguistic information, sentence graph clustering and withingraph summary generation. Our work differs from previous method in that content selection and a summary length constraint is applied to adapt to the scientific domain. The experiment results on both training dataset and blind test dataset show the effectiveness of our method, and we empirically verify the robustness of modules used in SciSummPip with BERTScore (Zhang et al., 2019a).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The Scholarly Document Processing (SDP) workshop is to encourage more efforts on natural language understanding of scientific task. It contains three shared tasks and we participate in the LongSumm shared task. In this paper, we describe our text summarization system, SciSummPip, inspired by SummPip (Zhao et al., 2020) that is an unsupervised text summarization system for multi-document in news domain. Our SciSummPip includes a transformer-based language model SciBERT (Beltagy et al., 2019) for contextual sentence representation, content selection with PageRank (Page et al., 1999), sentence graph construction with both deep and linguistic information, sentence graph clustering and withingraph summary generation. Our work differs from previous method in that content selection and a summary length constraint is applied to adapt to the scientific domain. The experiment results on both training dataset and blind test dataset show the effectiveness of our method, and we empirically verify the robustness of modules used in SciSummPip with BERTScore (Zhang et al., 2019a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text summarization aims at automatically generating a fluent and coherent summary that mainly contains the salient information from the source document(s). Two main categories are typically involved in the text summarization task, one is extractive approach (Luo et al., 2019; Xu and Durrett, 2019) which directly extracts salient sentences from the input text as the summary, and the other is abstractive approach (Sutskever et al., 2014; See et al., 2017; Sharma et al., 2019) which imitates human behaviour to produce new sentences based on the extracted information from the given document.",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "(Luo et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 277,
"end": 298,
"text": "Xu and Durrett, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 415,
"end": 439,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF24"
},
{
"start": 440,
"end": 457,
"text": "See et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 458,
"end": 478,
"text": "Sharma et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to meet the requirements of modern data-driven methods, several large datasets have been presented. The majority of those datasets are for generic domain, but few available corpora from other task-specific domains. Most of existing state-the-art summarization systems (Liu and Lapata, 2019; Zhou et al., 2020; Wang et al., 2020) target news or simple documents, and they are less adequate for summarizing scientific work due to the length and complexity. Those summarization systems cannot provide sufficient information conveyed in the scientific paper.",
"cite_spans": [
{
"start": 277,
"end": 299,
"text": "(Liu and Lapata, 2019;",
"ref_id": "BIBREF13"
},
{
"start": 300,
"end": 318,
"text": "Zhou et al., 2020;",
"ref_id": "BIBREF34"
},
{
"start": 319,
"end": 337,
"text": "Wang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The general domain have been paid enough attention, whereas the attention in scientific domain is far from enough. To address this point, the Scholarly Document Processing (SDP) workshop (Chandrasekaran et al., 2020) is held to accelerate scientific discovery in research community, they appeal to researchers for designing a summarization system that can generate a relatively long summary for scientific work.",
"cite_spans": [
{
"start": 187,
"end": 216,
"text": "(Chandrasekaran et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the release of Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2018) , much research has been carried out on involving them in their system. Liu (2019) modified the input sequence embedding and built several summarizationspecific layers for extractive summarization. Similarly, Liu and Lapata (2019) present a novel document-level encoder based on BERT (Devlin et al., 2018) for both extractive summarization and abstractive summarization. In their model structure, the lower transformer represents adjacent sentences and the higher layer with self-attention mechanism represents the multi-sentence discourse. These works leverage the advantage of deep neural network, not taking into account the linguistic information. In contrast, Zhao et al. (2020) 1 construct semantic clusters and sentence graphs for multidocument summarization, which involves linguistic information and discourse markers. In this paper, we followed the framework of Zhao et al. (2020) to construct our own unsupervised text summarization system. However, our model is different from the previous work: we modify the pipeline structure of multi-document summarization in the field of news to the single-document summarizer for summarizing scholarly documents, and we introduce two new steps to control the length of generated summary and to remove irrelevant sentences.",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 65,
"end": 86,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 159,
"end": 169,
"text": "Liu (2019)",
"ref_id": "BIBREF12"
},
{
"start": 296,
"end": 317,
"text": "Liu and Lapata (2019)",
"ref_id": "BIBREF13"
},
{
"start": 371,
"end": 392,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 959,
"end": 977,
"text": "Zhao et al. (2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this work can be summarized in the following aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We highlight the importance of sentence embedding for scientific work. A variety of works focus on facilitating the process of obtaining sentence representation from a pretrained language model on generic domain, while less attention is paid on other taskspecific domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We compare the performances between PageRank (Page et al., 1999) and the Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) in the content selection module. To our knowledge, no previous work compares their performances on scientific long document summarization task with deep neural representation.",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF20"
},
{
"start": 108,
"end": 139,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We experimentally verify that the effectiveness of the proposed model. We achieve better ROUGE results than original model on both training dataset and blind test dataset. Besides, our model is also evaluated on the BERTScore metric (Zhang et al., 2019a) and the results indicate that our model is more robust to generate high quality summary.",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text Summarization System Most of recent text summarization systems leverage the advantages of deep neural networks, their encoderdecoder structures use either recurrent neural networks (Cheng and Lapata, 2016; Nallapati et al., 2016) or Transformer encoders (Zhang et al., 2019b; Khandelwal et al., 2019) . Benefit of the sequence-to-sequence structure, a great progress in both extractive and abstractive document summarization is achieved. Though abstractive summarization has more potentials to generate interpretations in a human-like fashion, it has been found that sometimes repeatedly produces the same phrase or sentence (Suzuki and Nagata, 2016) , which greatly reduces the comprehensibility and readability. In contrast, extractive summarization performs better in fluency aspect and it can grammatical and accurately represent the source text. One potential issue in extractive summarization is that not all of information from the extracted sentence is important, which leads more redundancy in the generated summary.",
"cite_spans": [
{
"start": 186,
"end": 210,
"text": "(Cheng and Lapata, 2016;",
"ref_id": "BIBREF5"
},
{
"start": 211,
"end": 234,
"text": "Nallapati et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 259,
"end": 280,
"text": "(Zhang et al., 2019b;",
"ref_id": "BIBREF32"
},
{
"start": 281,
"end": 305,
"text": "Khandelwal et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 630,
"end": 655,
"text": "(Suzuki and Nagata, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the work of Zhao et al. (2020) , they apply graph structure and consider the discourse relationship between sentences rather than using encoderdecoder structure, and text compression is implemented in the final stage to reduce the redundancy in the generated sentences. However, their model is designed for multi-document summarization in the news domain, we extend their SummPip to singledocument settings for scientific long articles.",
"cite_spans": [
{
"start": 15,
"end": 33,
"text": "Zhao et al. (2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sentence Embedding Method Term frequency-inverse document frequency (TF-IDF) is widely used in traditional NLP, but it cannot capture the semantic information and contextual relationship between sentences. Word2Vec (Mikolov et al., 2013) is used in SummPip (Zhao et al., 2020) to capture contextualized relationship, but this embedding method cannot solve the polysemous problem. More recently, BERT (Devlin et al., 2018) has achieved better performance in many NLP downstream tasks, but it is difficult to derive sentence embeddings. To solve this limitation, single sentences are passed to the BERT and two common ways to extract sentence representation are widely used: averaging the outputs and using the output of the [CLS] token (May et al., 2019; Zhang et al., 2019a) .",
"cite_spans": [
{
"start": 215,
"end": 237,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 257,
"end": 276,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF33"
},
{
"start": 400,
"end": 421,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 723,
"end": 728,
"text": "[CLS]",
"ref_id": null
},
{
"start": 735,
"end": 753,
"text": "(May et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 754,
"end": 774,
"text": "Zhang et al., 2019a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Xiao (2018) develops a repository, bert-as-aservice 2 , which accelerates the process of extracting token and sentence embeddings from BERT (Devlin et al., 2018) . Lately, in order to find a better way to derive semantically similar sentence from language models, Reimers and Gurevych (2019) present SBERT. However, above works help facilitate workload in generic domain rather than task-specific domain.",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 264,
"end": 291,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graph is an intuitive structure for utilizing the relation information between sentences. Some work (Mihalcea and Tarau, 2004; Erkan and Radev, 2004) ing methods. Inspired by PageRank algorithm (Page et al., 1999) , they consider the document as a graph where sentences are vertices and edges represent the relations between two sentences. Shortly thereafter, some researchers (Carbonell and Goldstein, 1998; Kurmi and Jain, 2014; Mao et al., 2020) involved a query-biased strategy, the Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) , in their summarizers. MMR tries to balance the relevance and diversity by controlling the trade-off parameter \u03bb. The first part of the formula controls query relevance and the second part controls diversity.",
"cite_spans": [
{
"start": 100,
"end": 126,
"text": "(Mihalcea and Tarau, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 127,
"end": 149,
"text": "Erkan and Radev, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 194,
"end": 213,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF20"
},
{
"start": 377,
"end": 408,
"text": "(Carbonell and Goldstein, 1998;",
"ref_id": "BIBREF3"
},
{
"start": 409,
"end": 430,
"text": "Kurmi and Jain, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 431,
"end": 448,
"text": "Mao et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 520,
"end": 551,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content Selection",
"sec_num": null
},
{
"text": "M M R = argmax S i \u2208C \u03bbSim 1 (S i , Q) \u2212 (1 \u2212 \u03bb) argmax S j \u2208S Sim 2 (S i , S j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Selection",
"sec_num": null
},
{
"text": "Where C is the set of candidate sentences, S is the set of extracted sentences, Q is the query embedding, S i , S j are sentence embeddings of candidate sentences i and j, respectively. Sim indicates the cosine similarity between two embeddings. Though this approach have been proved that it outperforms generic summarization approaches in the information retrieval task, to our knowledge, there is no previous work compared it with PageRank algorithm on scientific long document summarization task. Our work incorporates deep neural representations into both PageRank algorithm and MMR strategy and shows the comparison between these two methods in the field of scientific work for both extractive and abstractive summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Selection",
"sec_num": null
},
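As an illustration (not the authors' released code), the greedy selection loop implied by the MMR formula above can be sketched as follows, assuming query and sentence embeddings are unit-normalized NumPy vectors; all names are illustrative:

```python
import numpy as np

def mmr_select(query_vec, sent_vecs, k, lam=0.5):
    """Greedy MMR: trade query relevance against redundancy (lambda = lam)."""
    relevance = sent_vecs @ query_vec        # Sim1(S_i, Q), cosine on unit vectors
    pairwise = sent_vecs @ sent_vecs.T       # Sim2(S_i, S_j)
    selected, candidates = [], list(range(len(sent_vecs)))
    while candidates and len(selected) < k:
        if selected:
            # Redundancy of each candidate w.r.t. the already-extracted set S.
            redundancy = pairwise[np.ix_(candidates, selected)].max(axis=1)
        else:
            redundancy = np.zeros(len(candidates))
        scores = lam * relevance[candidates] - (1 - lam) * redundancy
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
    return selected                          # indices of chosen sentences
```

Setting lam close to 1.0 yields a purely relevance-driven ranking, while lower values penalize sentences similar to those already selected.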
{
"text": "The training dataset provided by the LongSumm shared task consists of 2236 scientific papers, of which 1705 are for extractive method and 531 are for abstractive method. The reference extractive summaries are generated by TalkSumm (Lev et al., 2019 ) that extracts sentences appeared in associated conference videos, while the abstractive summaries are collected from blogs written by researchers.",
"cite_spans": [
{
"start": 231,
"end": 248,
"text": "(Lev et al., 2019",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Pre-processing",
"sec_num": "3"
},
{
"text": "We download the training corpus from the given URLs (for abstractive) and the script (for extractive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download paper",
"sec_num": null
},
{
"text": "Paper Parsing All of papers are parsed from PDF form into JSON structure by using Science-Parse 3 . It outputs a JSON file for each PDF, which contains the title, abstract text, metadata, and the text of each section in the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download paper",
"sec_num": null
},
{
"text": "We concatenate each section text as the paper text. Then sentences are segmented by using the NLTK library, and each sentence is tokenized as well. Table 1 reports the result of the statistics analysis for both training dataset and test dataset, and we can see that the number of sentences in some reference summaries is far less than required length of generated summary, 600 words, which may lead a bias in the evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Text processing",
"sec_num": null
},
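As an illustration of this pre-processing step, a minimal sketch assuming the Science-Parse output has been loaded as a dict whose sections list carries a "text" field (file name and key names are illustrative, not the authors' code):

```python
import json
import nltk

nltk.download("punkt", quiet=True)           # NLTK tokenizer models

with open("paper.json") as f:                # illustrative path
    paper = json.load(f)

# Concatenate each section's text into one document string.
text = " ".join(sec["text"] for sec in paper.get("sections", []))

sentences = nltk.sent_tokenize(text)         # sentence segmentation
tokens = [nltk.word_tokenize(s) for s in sentences]
print(len(sentences), sum(len(t) for t in tokens))  # sentence / word counts
```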
{
"text": "We adopt the SummPip (Zhao et al., 2020 ) as our baseline model, and we modify the pipeline architecture for summarizing scholarly documents. Two new steps are introduce for adapting scientific domain, one is to remove irrelevant sentences and the other is to control the length of generated summary. In the following subsections, we will specify each component in the SciSummPip.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "(Zhao et al., 2020",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "4"
},
{
"text": "Pretrained language model In this paper, we apply a publicly available large-scale language model, SciBERT (Beltagy et al., 2019) , which is pretrained based on BERT (Devlin et al., 2018) and extends the idea of word embeddings by learning contextual representations from large-scale scientific corpora. This is implemented in Pytorch using Transformers established by Wolf et al. (2019) 4 .",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 166,
"end": 187,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 369,
"end": 387,
"text": "Wolf et al. (2019)",
"ref_id": "BIBREF28"
},
{
"start": 388,
"end": 389,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Method",
"sec_num": "4.1"
},
{
"text": "Sentence embedding Using more accurate sentence embeddings can improve the performance of summarization system in language understanding. In SciSummPip, we average the output of SciBERT from the second layer to the last layer. In addition, we also experiment with other embedding methods and the the results show that this is a more accurate way to represent scientific sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Method",
"sec_num": "4.1"
},
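A minimal sketch of deriving such a sentence embedding with the HuggingFace Transformers API: hidden states from the second transformer layer up to the last are averaged and then mean-pooled over tokens. The exact pooling in SciSummPip may differ; this only illustrates the layer-averaging idea.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def embed_sentence(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # hidden_states: embedding-layer output plus one tensor per transformer layer
        hidden = model(**inputs).hidden_states
    layers = torch.stack(hidden[2:]).mean(dim=0)  # average layers 2..last
    return layers.mean(dim=1).squeeze(0)          # mean-pool tokens -> (768,)

vec = embed_sentence("We summarize scientific papers without supervision.")
print(vec.shape)
```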
{
"text": "Content selection Not all of sentences should be involved in the summary, so we include content selection step before constructing sentence graph. We build a matrix to store the similarity between each two sentences, then PageRank (Page et al., 1999) algorithm is implemented to rank all of sentences. Sentences with lower score will be deleted from the candidate list, here we introduce a new step to control the ratio of removed sentences.",
"cite_spans": [
{
"start": 231,
"end": 250,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Graph Construction",
"sec_num": "4.2"
},
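A minimal sketch of this selection step using networkx, assuming unit-normalized sentence embeddings; the cutoff ratio of 0.25 follows the implementation details reported later, while the function and variable names are illustrative:

```python
import numpy as np
import networkx as nx

def select_content(sent_vecs, cutoff_ratio=0.25):
    """Rank sentences with PageRank and drop the lowest-ranked fraction."""
    sims = sent_vecs @ sent_vecs.T          # pairwise cosine-similarity matrix
    np.fill_diagonal(sims, 0.0)             # no self-loops
    graph = nx.from_numpy_array(sims)       # weighted sentence graph
    scores = nx.pagerank(graph, weight="weight")
    ranked = sorted(scores, key=scores.get, reverse=True)
    kept = ranked[: int(len(ranked) * (1 - cutoff_ratio))]
    return sorted(kept)                     # restore document order
```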
{
"text": "We construct the sentence graph, where each node represents a sentence, and nodes are connected if they meet the linguistic requirements. To identify this structure, we borrow the components from the previous work (Zhao et al., 2020) . Specifically, this pipeline consists of discovering deverbal noun reference, finding the same entity continuation, recognizing discourse markers, and calculating sentence similarity by taking the cosine similarity.",
"cite_spans": [
{
"start": 214,
"end": 233,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": null
},
{
"text": "Spectral clustering After identifying pairwise sentence connection, we involve a new step for determining the number of clusters. This is to control the length of generated summary so that the summary varies with the length of the original paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Generation",
"sec_num": "4.3"
},
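A minimal sketch of this step with scikit-learn, where the number of clusters scales with the number of retained sentences via the extended ratio (set to 0.3 in the authors' pipeline); parameter choices here are illustrative:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_sentences(adjacency, extended_ratio=0.3):
    """Cluster a sentence adjacency matrix; #clusters tracks document length."""
    n_clusters = max(1, int(adjacency.shape[0] * extended_ratio))
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(adjacency)
    return [np.where(labels == c)[0] for c in range(n_clusters)]
```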
{
"text": "Multi-sentences compression This module (Boudin and Morin, 2013) is to generate a single summary sentence from each sentence cluster. Sentences with similar semantic information will be compressed by building a word graph. Considering the key phrases and discourse structure, so that the reconstructed sentence will have higher score. Select the sentence with the highest score as the summary sentence, and then combine all 5 Experiment Setup",
"cite_spans": [
{
"start": 40,
"end": 64,
"text": "(Boudin and Morin, 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Generation",
"sec_num": "4.3"
},
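A heavily simplified sketch of the word-graph idea behind multi-sentence compression: frequent word-to-word transitions across a cluster's sentences become cheap edges, and the lightest start-to-end path is taken as the compressed sentence. Boudin and Morin (2013) additionally use POS tags and keyphrase-based reranking, which this sketch omits.

```python
from collections import Counter
import networkx as nx

def compress(cluster_sentences):
    """Return the lightest <start>-to-<end> path over a word graph."""
    counts = Counter()
    for s in cluster_sentences:
        path = ["<start>"] + s.lower().split() + ["<end>"]
        counts.update(zip(path, path[1:]))
    g = nx.DiGraph()
    for (a, b), c in counts.items():
        g.add_edge(a, b, weight=1.0 / c)     # shared transitions are cheaper
    words = nx.shortest_path(g, "<start>", "<end>", weight="weight")
    return " ".join(words[1:-1])
```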
{
"text": "Extractive summarization Task We use SciB-ERT for sentence embedding in our pipeline, so for extractive text summarization task we directly use Scibert-summarizer 5 with the fixed length range (from 60 to 600 words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.1"
},
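As an illustration, a minimal sketch of running the bert-extractive-summarizer package with SciBERT as the custom encoder; the custom_model/custom_tokenizer arguments and length parameters follow that package's documented interface, but the exact configuration used for the shared-task runs is not spelled out in the paper:

```python
from summarizer import Summarizer          # pip install bert-extractive-summarizer
from transformers import AutoConfig, AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
config = AutoConfig.from_pretrained(name, output_hidden_states=True)
summarizer = Summarizer(
    custom_model=AutoModel.from_pretrained(name, config=config),
    custom_tokenizer=AutoTokenizer.from_pretrained(name),
)

paper_text = "..."                         # full paper text from pre-processing
summary = summarizer(paper_text, min_length=60, max_length=600)
print(summary)
```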
{
"text": "We implement our pipeline, SciSummPip, in abstractive summarization task, and we compare the performances of PageRank algorithm and of MMR strategy in the content selection module. For PageRank algorithm, we set a cutoff ratio that is a new introduced parameter for removing irrelevant sentences and the empirical results show that setting it as 0.25 achieves better performance. For the MMR strategy, we set 0.2, 0.5, 0.8 for the trade-off parameter in the experiment, respectively. To control the generated summary length, we introduce another new parameter, extended ratio, to modify the number of clusters based on the number of ranking sentences. In our pipeline,we set it as 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive summarization Task",
"sec_num": null
},
{
"text": "For extractive task, we compare our model with the following unsupervised summarization models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "TextRank (Barrios et al., 2016) TextRank (Mihalcea and Tarau, 2004) applies a variation of PageRank algorithm (Page et al., 1999 ) over a graph-based structure, and it produces a list of ranked elements in the graph without the need of a training corpus. TextRank implemented in this paper is produced by Barrios et al. (2016) , they change the similarity function to Okapi BM25 so that the performance is better than the original tex-tRank model. We set the output summary with the fixed length 600 words.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Barrios et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 110,
"end": 128,
"text": "(Page et al., 1999",
"ref_id": "BIBREF20"
},
{
"start": 305,
"end": 326,
"text": "Barrios et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "LexRank (Erkan and Radev, 2004) Similar with textRank (Mihalcea and Tarau, 2004) , LexRank also applies PageRank algorithm and leverages a graph structure for summarization. Differently, textRank calculate the similarity based on the number of words two sentences have in common, while LexRank uses cosine similarity of TF-IDF vectors. Table 2 : ROUGE scores reported on the training dataset and the blind test dataset. Best results are in boldface. The reference extractive summary and abstractive summary are generated by TalkSumm (Lev et al., 2019) and collected from online blogs, respectively. M M R Sci indicates we implement MMR algorithm with sentence embeddings derived from SciBERT (Beltagy et al., 2019) . SciSummP ip P R and SciSummP ip M M R are our model with different content selection modules, and the number follow the MMR is the setting for trade-off parameter \u03bb. As SummPip cannot effectively run on large scale corpora of long document, we add content selection module and shown as SummPip + P R .",
"cite_spans": [
{
"start": 8,
"end": 31,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 54,
"end": 80,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 533,
"end": 551,
"text": "(Lev et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 692,
"end": 714,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 336,
"end": 343,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "MMR (Carbonell and Goldstein, 1998) MMR is a query-biased summarization approach, it tries to balance the relevance and diversity by controlling the trade-off parameter \u03bb. In the previous works, the similarity usually calculate based on TF-IDF, but in our implementation we use sentence embeddings derived from the output of SciBERT (Beltagy et al., 2019) . In addition, we set the document title as the query and the fixed length of generated summary is set as 600 words. For abstractive task, we apply different sentence embedding methods in SciSummPip:",
"cite_spans": [
{
"start": 4,
"end": 35,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 333,
"end": 355,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 SciBERT (Beltagy et al., 2019) : We implement two common strategies for sentence embeddings derived from SciBERT model: averaging the output from the second to the last layer and using [CLS] token embedding.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 SummPip (Zhao et al., 2020) : We use the same embedding method with the original pipeline to compare the performance.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 SBERT (Reimers and Gurevych, 2019) : This is a modification of the BERT network using siamese and triplet networks in order to find semantically similar sentences in vector space. Their empirical results indicate that their method is better than those two common embedding strategies, so we incorporate it into SciSummPip as a comparison.",
"cite_spans": [
{
"start": 8,
"end": 36,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "6 Evaluation and Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "5.2"
},
{
"text": "Extractive summaries The training dataset for extractive method consists of 1705 papers, of which one paper cannot be parsed. Thus, we evaluate 1704 papers with the ROUGE metric (Lin and Hovy, 2003) in our experiments. As displayed in Table 2 , the Scibert-summarizer achieves better ROUGE scores than all other compared systems. We implement MMR algorithm with sentence embedding derived from averaging SciBERT (Beltagy et al., 2019) output, and we can see it performs better than LexRank (Erkan and Radev, 2004) but worse than the textRank model (Barrios et al., 2016) with the Okapi BM25 similarity function. Therefore, we can verify that PageRank ranking algorithm performers better than MMR strategy in extractive task.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF11"
},
{
"start": 412,
"end": 434,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 490,
"end": 513,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 548,
"end": 570,
"text": "(Barrios et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment result on training dataset",
"sec_num": "6.1"
},
{
"text": "Abstractive summaries For abstractive experiments, we collect 530 summaries in total as one paper cannot be parsed by Science-parse. We implement SciSummPip with different parameter settings to find out the best one. The number of words in each sentence is set from 15 to 29, then we observe that the summary with 26 words in each sentence achieves the best performance. We incorporate PageRank algorithm (Page et al., 1999) and MMR algorithm (Carbonell and Goldstein, 1998) into SciSummPip content selection module, respectively. As displayed in Table 2 , it is not surprising to see SciSummPip with PageRank algorithm outperforms all of settings for SciSummPip with MMR algorithm, because the performance of textRank is better than that of MMR in the extractive task.",
"cite_spans": [
{
"start": 405,
"end": 424,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF20"
},
{
"start": 443,
"end": 474,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 547,
"end": 555,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment result on training dataset",
"sec_num": "6.1"
},
{
"text": "The blind test dataset consists of 22 scientific papers 6 . It does not declare the blind test data is for extractive summarizer or abstractive summarizer, so we implement both Scibert-summarizer and SciSummPip on it. Comparing with the SummPip (Zhao et al., 2020) , the experiment results verify that our new pipeline architecture significantly improve the performance. In addition, we try different number of words generated in each sentence and we find that setting it closes to the median value of that in scientific papers would gain higher score. Besides, although extractive model gains the highest ROUGE score, we still can see our SciSummPip is competitive. ",
"cite_spans": [
{
"start": 245,
"end": 264,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment result on test dataset",
"sec_num": "6.2"
},
{
"text": "To find out a more accurate method for representing scientific sentences, we incorporate different embedding strategies into SciSummPip. Performances reported in Table 3 and Table 4 indicate that our model ranks highest with averaging the output of SciBERT (Beltagy et al., 2019) method. SBERT (Reimers and Gurevych, 2019) shows competitive performance even though it is designed for generic domain. In fact, utilizing SBERT significantly reduce the workload of extracting sentence embedding, but it is not sufficient enough for representing scientific sentence.",
"cite_spans": [
{
"start": 257,
"end": 279,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 294,
"end": 322,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 174,
"end": 181,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Different Sentence Embedding Methods",
"sec_num": "6.3"
},
{
"text": "We evaluate models on BERTScore (Zhang et al., 2019a) , an automatic evaluation metric for text generation, to investigate the ability of writing abstractive summary. BERTScore calculates a similarity score for each token in the candidate sentence with each token in the reference sentence by leveraging contextual embeddings. As can be seen in Table 5 , SciSummPip achieves highest precision and F1-score while SBERT gains the highest recall. This proves that the summary generated by our model is more informative and representative. Since BERTScore utilizes Bert (Devlin et al., 2018) to calculate similarity score, the max length of input sequence is 512 tokens, which limits the performance of relatively long summary. We further investigate the distribution of F1score from BERTScore evaluation. As shown in figure 1, although these models achieve similar performance, the F1-score distribution of SciSummPip obviously more stable than others. SciSummPip achieve the highest frequency in the range of 0.80-0.82, which means near 140 generated summaries gain around 0.81 F1-score. Therefore, we can say that our model is more robust for summarizing scientific work in abstractive task. Table 5 . X-axis indicates data range of F1-score and Y-axis indicates the frequency of the data in each bin. In order to ensure the bin data range for each distribution is same, we set the data range of each bin as 0.005 so that the parameter, bins, is set as int(data range of F 1 \u2212 score/0.005).",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF31"
},
{
"start": 566,
"end": 587,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1191,
"end": 1199,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "BERTScore Evaluation",
"sec_num": "6.4"
},
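BERTScore can be computed with the bert-score package; a minimal sketch with placeholder candidate and reference summaries (the evaluation settings used in the paper are not stated beyond the metric itself):

```python
from bert_score import score  # pip install bert-score

candidates = ["the pipeline clusters sentences and compresses each cluster."]
references = ["the paper presents an unsupervised summarization pipeline."]

# P, R, F1 are tensors with one entry per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"P={P.mean():.4f} R={R.mean():.4f} F1={F1.mean():.4f}")
```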
{
"text": "Extractive Reference Summary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERTScore Evaluation",
"sec_num": "6.4"
},
{
"text": "The analysis of emotions in texts is an important task in NLP. Traditional studies treat this task as a pipeline of two separated sub-tasks: emotion classification and emotion cause detection. The former identifies the category of an emotion and the latter detects the cause of an emotion. This separated framework makes each sub-task more flexible to deal with, but it neglects the relevance between the two sub-tasks. In this paper, we use the human-labeled emotion corpus provided by Cheng et al. (2017) as our experimental data (namely Cheng emotion corpus). Cheng emotion corpus can be considered as a collection of subtweets. For each emotion in a subtweet, all emotion keywords expressing the emotion are selected, and then the class and the cause of the emotion are annotated. (...)",
"cite_spans": [
{
"start": 487,
"end": 506,
"text": "Cheng et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERTScore Evaluation",
"sec_num": "6.4"
},
{
"text": "The analysis of emotions in texts is an important task in NLP. Cheng emotion corpus can be considered as a collection of subtweets. Given an instance which is a pair of <an emotion keyword, a clause in the subtweet>, ECause assigns a binary label to the instance to indicates the presence of a causal relation. The input text of an ECause instance also has three sequences of words: the emotion keyword (i.e. EmoKW), the current clause (i.e. CauseCL) and the context between EmoKW and CauseCL. The BiLSTM layer focuses on the extraction of sequence features, and the attention layer focuses on the learning of word importance (weights). (...) Table 6 : Example of the generated extractive summary compared with reference summary that is generated by TalkSumm (Lev et al., 2019) . Text in the same color indicates the content they describe is the same. Due to the length constraint, we omit part of the generated summary and shown as (...).",
"cite_spans": [
{
"start": 759,
"end": 777,
"text": "(Lev et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scibert-summarizer:",
"sec_num": null
},
{
"text": "We further manually inspect the generated summary to explore if our model can capture the salient information from given document. Table 6 and Table 7 display an example of generated summary compared with the corresponding reference summary in the training dataset. The abstractive ref-",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Analysis",
"sec_num": "6.5"
},
{
"text": "Abstractive Reference Summary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Analysis",
"sec_num": "6.5"
},
{
"text": "The paper proposes a two-stage synthesis network that can perform transfer learning for the task of machine comprehension. The problem is the following: We have a domain DS for which we have labelled dataset of question-answer pairs and another domain DT for which we do not have any labelled dataset. We use the data for domain DS to train SynNet and use that to generate synthetic questionanswer pairs for domain DT. Now we can train a machine comprehension model M on DS and finetune using the synthetic data for DT. SynNet Works in two stages: Answer Synthesis -Given a text paragraph, generate an answer. (...) After the word vector, append a '1' if the word was part of the candidate answer else append a '0'. Feed to a Bi-LSTM network (encoder-decoder) where the decoder conditions on the representation generated by the encoder as well as the question tokens generated so far. (...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Analysis",
"sec_num": "6.5"
},
{
"text": "the ability to quickly use a mc model trained on one domain to answer questions over paragraphs from another with no annotated data. recent work generated synthetic data generated questions leads to improved performance, we use a model where the answer synthesis and question types. we generate the answer first because answers are usually key semantic concepts, while questions can transfer a mc model trained on another domain. when we ensemble a bidaf model fs we use the two-stage synnet to generate data tuples to directly boost performance boost. (...) however, unlike machine translation , for tasks like mc, we need to synthesize both the question and answers given the context paragraph. (...) the first stage of the model, an answer synthesis module , uses a Bi-directional LSTM to predict iob tags on the input paragraph, which mark out key semantic concepts that are likely answers.(...) Table 7 : Example of the generated abstractive summary compared with reference summary that is collected from researcher's blog. Text in the same color indicates the content they describe is similar. Due to the length constraint, we omit part of the generated summary and shown as (...).",
"cite_spans": [],
"ref_spans": [
{
"start": 900,
"end": 907,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "SciSummPip:",
"sec_num": null
},
{
"text": "erence summary is collected from the online blog written by the researcher, so it is more difficult to capture the similar description in the generated summary. However, As shown in table 7, our model successfully write some similar context in the final output. Notwithstanding, we have to say the readability and grammatically of the generated summary still need to be improved. For blind test dataset, we also inspect the extractive summary and abstractive summary for the same paper. We find that the Scibert-summarizer tends to extract the sentence appeared in the early part of the paper, and the generated summary usually lack of logicality and consistency. In contrast, the summary produced by SciSummPip is more logical and contains more salient information about the methodology and the experiment. Although Scibert-summarizer gains higher ROUGE score on the blind test dataset, the summary generated by our model is more consistent with the purpose of the LongSumm Shared Task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SciSummPip:",
"sec_num": null
},
{
"text": "In this paper, we have presented the modified unsupervised pipeline architecture, SciSummPip, that leverages a transformer-based language model for summarizing scientific papers. We add content selection module and two steps to remove irrelevant sentences and to control the length of generated summary. After that, the linguistic knowledge will be incorporated into the process of multi-sentences compression for summarizing scientific work. The experiment results of automatic evaluation prove that our new pipeline significantly improves the overall performance on both training and blind test dataset. Besides, through manual inspection we find that our model indeed capture the salient information from the given source document. However, we have to admit that the readability of generated summary needs to be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Limitation",
"sec_num": "7"
},
{
"text": "We incorporated deep neural representation into both MMR (Carbonell and Goldstein, 1998) strategy and PageRank (Page et al., 1999) algorithm. Even though MMR strategy performs better in information retrieval task, we empirically verified that it is not sufficient for our model to summarize scientific work. MMR is a query-biased approach and we chose the title as query in our implementation, thus the potential reason for worse performance may be the query we chose is not effective enough.",
"cite_spans": [
{
"start": 57,
"end": 88,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 111,
"end": 130,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Limitation",
"sec_num": "7"
},
{
"text": "To investigate a sentence embedding method for sufficiently summarizing scholarly document, we compared the performances among several embed-ding strategies and we also evaluated their performances on both ROUGE metric and BERTScore metric. Although averaging the output of SciBERT (Beltagy et al., 2019) achieves better performance, the workload of using it to extract sentence embeddings is heavier than that of directly using SBERT (Reimers and Gurevych, 2019) . There is enough work for generic domain while the attention paid for task-specific domain is far from enough, therefore we appeal to researchers for making more efforts on task-specific domain in their further research.",
"cite_spans": [
{
"start": 282,
"end": 304,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 435,
"end": 463,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Limitation",
"sec_num": "7"
},
{
"text": "As the future, we will evaluate our pipeline on larger scientific datasets to show the effectiveness and robustness, and we also would like to conduct a analysis on the faithfulness and the level of abstraction for the generated summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "8"
},
{
"text": "https://github.com/mingzi151/SummPip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/hanxiao/bert-as-service/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/allenai/science-parse",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers reconstructed summary sentences as the generated summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "bert-extractive-summarizer: https://pypi.org/project/bertextractive-summarizer/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Test dataset: https://github.com/guyfe/LongSumm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewer(s) for helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Variations of the similarity function of textrank for automated summarization",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Barrios",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Argerich",
"suffix": ""
},
{
"first": "Rosa",
"middle": [],
"last": "Wachenchauzer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.03606"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Barrios, Federico L\u00f3pez, Luis Argerich, and Rosa Wachenchauzer. 2016. Variations of the simi- larity function of textrank for automated summariza- tion. arXiv preprint arXiv:1602.03606.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scibert: Pretrained contextualized embeddings for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10676"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Keyphrase extraction for n-best reranking in multi-sentence compression",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Boudin",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Morin",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Boudin and Emmanuel Morin. 2013. Keyphrase extraction for n-best reranking in multi-sentence compression.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "335--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering doc- uments and producing summaries. In Proceedings of the 21st annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 335-336.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview and insights from scientific document summarization shared tasks 2020: Cl-scisumm, laysumm and longsumm",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Maroli Krishnayya Chandrasekaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Feigenblat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Eduard",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Anita",
"middle": [
"De"
],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Waard",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Scholarly Document Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maroli Krishnayya Chandrasekaran, Guy Feigen- blat, Hovy. Eduard, Anirudh Ravichander, Michal. Shmueli-Scheuer, and Anita De Waard. 2020. Overview and insights from scientific document summarization shared tasks 2020: Cl-scisumm, lay- summ and longsumm. In In Proceedings of the First Workshop on Scholarly Document Processing (SDP 2020).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural summarization by extracting sentences and words",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.07252"
]
},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural sum- marization by extracting sentences and words. arXiv preprint arXiv:1603.07252.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of artificial intelligence research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence re- search, 22:457-479.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sample efficient text summarization using a single pre-trained transformer",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.08836"
]
},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, and Lukasz Kaiser. 2019. Sample efficient text sum- marization using a single pre-trained transformer. arXiv preprint arXiv:1905.08836.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Text summarization using enhanced mmr technique",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Kurmi",
"suffix": ""
},
{
"first": "Pranita",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Computer Communication and Informatics",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Kurmi and Pranita Jain. 2014. Text summariza- tion using enhanced mmr technique. In 2014 Inter- national Conference on Computer Communication and Informatics, pages 1-5. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Lev",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Achiya",
"middle": [],
"last": "Jerbi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Konopnicki",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01351"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks. arXiv preprint arXiv:1906.01351.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "150--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Auto- matic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of the 2003 Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 150-157.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fine-tune bert for extractive summarization",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10318"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Liu. 2019. Fine-tune bert for extractive summa- rization. arXiv preprint arXiv:1903.10318.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text summarization with pretrained encoders",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.08345"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2019. Text summa- rization with pretrained encoders. arXiv preprint arXiv:1908.08345.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reading like her: Human reading inspired extractive summarization",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Feiyang",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3024--3034",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, and Qing He. 2019. Reading like her: Human reading inspired extractive summarization. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3024-3034.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-document summarization with maximal marginal relevance-guided reinforcement learning",
"authors": [
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yanru",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Yiqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.00117"
]
},
"num": null,
"urls": [],
"raw_text": "Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, and Jiawei Han. 2020. Multi-document summarization with maximal marginal relevance-guided reinforce- ment learning. arXiv preprint arXiv:2010.00117.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10561"
]
},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On mea- suring social biases in sentence encoders. arXiv preprint arXiv:1903.10561.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Abstractive text summarization using sequence-to-sequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.06023"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summariza- tion using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The pagerank citation ranking: Bringing order to the web",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. Technical report, Stanford InfoLab.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04368"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Man- ning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An entity-driven framework for abstractive summarization",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Luyang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02059"
]
},
"num": null,
"urls": [],
"raw_text": "Eva Sharma, Luyang Huang, Zhe Hu, and Lu Wang. 2019. An entity-driven framework for abstractive summarization. arXiv preprint arXiv:1909.02059.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cutting-off redundant repeating generations for neural abstractive summarization",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.00138"
]
},
"num": null,
"urls": [],
"raw_text": "Jun Suzuki and Masaaki Nagata. 2016. Cutting-off re- dundant repeating generations for neural abstractive summarization. arXiv preprint arXiv:1701.00138.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization",
"authors": [
{
"first": "Danqing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.12393"
]
},
"num": null,
"urls": [],
"raw_text": "Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document sum- marization. arXiv preprint arXiv:2004.12393.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "bert-as-service",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Xiao. 2018. bert-as-service. https://github. com/hanxiao/bert-as-service.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural extractive text summarization with syntactic compression",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00863"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu and Greg Durrett. 2019. Neural extrac- tive text summarization with syntactic compression. arXiv preprint arXiv:1902.00863.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019a. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Hibert: Document level pre-training of hierarchical bidirectional transformers for document summarization",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.06566"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Furu Wei, and Ming Zhou. 2019b. Hibert: Document level pre-training of hierarchical bidirectional transformers for document summariza- tion. arXiv preprint arXiv:1905.06566.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Summpip: Unsupervised multidocument summarization with sentence graph compression",
"authors": [
{
"first": "Jinming",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Longxiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1949--1952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinming Zhao, Ming Liu, Longxiang Gao, Yuan Jin, Lan Du, He Zhao, He Zhang, and Gholamreza Haffari. 2020. Summpip: Unsupervised multi- document summarization with sentence graph com- pression. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1949-1952.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "At which level should we extract? an empirical study on extractive document summarization",
"authors": [
{
"first": "Qingyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.02664"
]
},
"num": null,
"urls": [],
"raw_text": "Qingyu Zhou, Furu Wei, and Ming Zhou. 2020. At which level should we extract? an empirical study on extractive document summarization. arXiv preprint arXiv:2004.02664.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "The histogram distribution of F1-score evaluated by BERTScore metric for each model reported in",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Elementary data statistics for the LongSumm shared task of the Scholarly Document Processing @ EMNLP 2020. Sci P and Ref S represent scientific paper and reference summary, respectively."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"2\">: ROUGE F1 scores for SciSummPip with dif-</td></tr><tr><td colspan=\"2\">ferent sentence embedding methods. Special token em-</td></tr><tr><td colspan=\"2\">bedding method is extracting [CLS] token embedding</td></tr><tr><td colspan=\"2\">from SciBERT (Beltagy et al., 2019) output.</td></tr><tr><td>Sentence Embedding</td><td>R1 R R2 R RL R</td></tr><tr><td colspan=\"2\">Avg. SciBERT embeddings 43.09 9.83 17.26</td></tr><tr><td>Special token embedding</td><td>39.99 8.75 16.13</td></tr><tr><td>Word2Vec</td><td>32.73 7.27 13.83</td></tr><tr><td>SBERT</td><td>41.53 9.56 16.73</td></tr></table>",
"text": ""
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "BERTScore reported on abstractive training dataset to investigate text generation ability of our model. SBERT means we use use SBERT sentence embedding method in SciSummPip."
}
}
}
}