{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:36:13.469617Z"
},
"title": "LongSumm 2021: Session based automatic summarization model for scientific document",
"authors": [
{
"first": "Senci",
"middle": [],
"last": "Ying",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NetEase",
"location": {
"country": "China"
}
},
"email": "yingsenci@corp.netease.com"
},
{
"first": "Yanzhao",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NetEase",
"location": {
"country": "China"
}
},
"email": "zhengyanzhao@corp.netease.com"
},
{
"first": "Wuhe",
"middle": [],
"last": "Zou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NetEase",
"location": {
"country": "China"
}
},
"email": "zouwuhe@corp.netease.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. Such a task can usually be solved with a language model, but an important problem is that models like BERT are limited by memory and cannot handle a long input such as a full document; generating a long output is also hard. In this paper, we propose a session-based automatic summarization model (SBAS) that uses a session and ensemble mechanism to generate long summaries. Our model achieves the best performance in the LongSumm task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. Such a task can usually be solved with a language model, but an important problem is that models like BERT are limited by memory and cannot handle a long input such as a full document; generating a long output is also hard. In this paper, we propose a session-based automatic summarization model (SBAS) that uses a session and ensemble mechanism to generate long summaries. Our model achieves the best performance in the LongSumm task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most document summarization tasks focus on generating a short summary that keeps the core idea of the original document. For long scientific papers, a short abstract is not enough to cover all the salient information. Researchers often summarize scientific articles by writing a blog post, which requires specialized knowledge and a deep understanding of the scientific domain. LongSumm, a shared task of SDP 2021 (https://sdproc.org/2021/sharedtasks.html), opts to leverage blog posts created by researchers that summarize scientific articles, as well as extractive summaries based on video talks from associated conferences (Lev et al., 2019), to address the problem mentioned above.",
"cite_spans": [
{
"start": 627,
"end": 645,
"text": "(Lev et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous methods divide the document by section, use an extractive or abstractive model to predict a summary for each part, and combine the results into the final summary of the document. Section-based methods may drop important information that spans sections, and using only one type of model for prediction cannot exploit the advantages of different models. Building on these models and solutions, we propose a session-based ensemble method, illustrated in Figure 1. We split the task into four steps: session generation, extraction, abstraction, and merging the results at the end. First, we split a document into sessions of a certain size and use a ROUGE metric to match the ground truth (sentences from the given document's summary). Then we train two different types of models. One is abstraction-based: specifically, we use BIGBIRD (Zaheer et al., 2020), whose sparse attention mechanism reduces the quadratic dependency of attention to linear, and PEGASUS (Zhang et al., 2020), a pretrained model specially designed for summarization. The other is extraction-based: we test the performance of TextRank (Mihalcea and Tarau, 2004; Xu et al., 2019), DGCNN (Dilate Gated Convolutional Neural Network) (Su, 2018), and BERTSUMM (Liu, 2019). In the end, for each type of model, we generate the summary with the best-performing one and use an ensemble method to merge the summaries together. The results show that our method is effective and beats the state-of-the-art models on this task.",
"cite_spans": [
{
"start": 920,
"end": 941,
"text": "(Zaheer et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 1035,
"end": 1055,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 1196,
"end": 1222,
"text": "(Mihalcea and Tarau, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 1223,
"end": 1239,
"text": "Xu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1291,
"end": 1301,
"text": "(Su, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1315,
"end": 1326,
"text": "(Liu, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic summarization is commonly divided into extraction-based and abstraction-based summarization. An extraction-based model selects sentences and words from the original article, via semantic analysis and sentence-importance analysis, to form the abstract of the article. Typical models include the TextRank (Mihalcea and Tarau, 2004; Xu et al., 2019) algorithm, which is based on sentence importance, and extraction methods based on pre-trained models (Liu, 2019). Abstracts obtained by extraction models better reflect the focus of the article, but because the extracted sentences are scattered across different parts of the article, the coherence of the abstract remains a challenge. Abstraction-based models follow the seq2seq structure, with pre-trained models used to achieve better generation quality, such as BART and T5 (Raffel et al., 2019). Recently, PEGASUS (Zhang et al., 2020), a pre-trained model released by Google, designed its pre-training objective specifically for the summarization task and achieved state-of-the-art performance on all 12 downstream datasets.",
"cite_spans": [
{
"start": 351,
"end": 377,
"text": "(Mihalcea and Tarau, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 378,
"end": 394,
"text": "Xu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 494,
"end": 505,
"text": "(Liu, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 900,
"end": 921,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 942,
"end": 962,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This task focuses on the problem of long summaries. The input and output length of traditional models is limited by memory and time consumption, yet this task requires the model to summarize scientific papers and generate very long summaries. To solve this problem, most previous solutions are section-based (Roy et al., 2020): they divide scientific papers into sections, generate an abstract for each section, and finally combine them to get the final result. Recently, Google's new model BIGBIRD (Zaheer et al., 2020), which uses a sparse attention mechanism to let the model fit long text, has become suitable for this task scenario.",
"cite_spans": [
{
"start": 348,
"end": 365,
"text": "Roy et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 538,
"end": 559,
"text": "(Zaheer et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pre-trained models play a significant role in the field of automatic summarization, but due to their huge number of parameters, most of them can only be used for short-text tasks. For long articles, there are two common workarounds: one is to truncate the long article directly; the other is to predict summaries section by section. This paper proposes a session-based text segmentation method and uses an ensemble of an extraction model and an abstraction model to generate the final summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Limited by computational power, many methods choose to truncate long articles directly, which leaves the model unable to perceive the later content of the article, so the generated summary reflects only part of the input text. Others divide the article into sections, but this also raises problems: the length and content of sections differ between articles, and a section-based division may not reflect the relationship between text and abstract well. This paper proposes a session-based segmentation method, which divides the article into sessions of a selected size, predicts a summary for each session, and selects the most appropriate window size for this task by adjusting the session size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Generation",
"sec_num": "3.1"
},
{
"text": "The specific data processing steps are as follows: (1) First, select an appropriate session size (2048 words) and a buffer (128 words), which keeps the last part of the previous session as context for the current session. (2) For the generation models, the reference summary is split into sentences, and each session is assigned its corresponding summary sentences according to the ROUGE metric. To make the model predict summaries that are as long as possible, a greedy matching rule is used to allocate the summary sentences to the sessions: we first filter sentences with a threshold of 0.7 on the ROUGE score between the session and the summary sentences, then pick sentences according to their scores until the length limit we set (256 words by default) is met.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Generation",
"sec_num": "3.1"
},
{
"text": "Although this may cause different sessions to predict the same summary, we think that duplicate sentences can be detected in later data processing, and it is more important for the trained model to generate long sentences. (3) For the extraction model, we only need to match each session with its corresponding summary sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Session Generation",
"sec_num": "3.1"
},
{
"text": "The training data contain around 700 abstractive summaries from different domains of CS, including ML, NLP, AI, vision, storage, etc.; these abstractive summaries are blog posts created by NLP and ML researchers. Traditional generation models are mainly based on the classical Transformer structure. To solve the problem of long text input, we use the sparse attention structure BIGBIRD (Zaheer et al., 2020), recently proposed by Google, and fine-tune its two open-source pre-trained models:",
"cite_spans": [
{
"start": 411,
"end": 432,
"text": "(Zaheer et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstraction-based Model",
"sec_num": "3.2"
},
{
"text": "(1) RoBERTa: a BERT-style model trained with dynamic masking and without the next-sentence prediction loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstraction-based Model",
"sec_num": "3.2"
},
{
"text": "(2) PEGASUS (Zhang et al., 2020): a Transformer model pre-trained with gap-sentence generation.",
"cite_spans": [
{
"start": 12,
"end": 32,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstraction-based Model",
"sec_num": "3.2"
},
{
"text": "Both models used in this paper are pre-trained on arXiv datasets, so they have a strong ability to generate abstracts for scientific papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstraction-based Model",
"sec_num": "3.2"
},
{
"text": "The extractive data contain 1705 extractive summaries, which are based on video talks from associated conferences (Lev et al., 2019). We tried three different extraction models to select important sentences from the documents.",
"cite_spans": [
{
"start": 110,
"end": 128,
"text": "(Lev et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "(1) TextRank (Mihalcea and Tarau, 2004): We simply use the TextRank algorithm to pick out the most important sentences from the documents, limiting the number of sentences extracted.",
"cite_spans": [
{
"start": 13,
"end": 39,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "(2) DGCNN-Extraction (Su, 2018): DGCNN is a 1D-CNN structure that combines two convolution variants: dilated convolution (Gehring et al., 2017) and gated convolution (Dauphin et al., 2017). The advantage of the DGCNN-Extraction model is that it processes the information of every sentence in the text at the same time and identifies important sentences from context. We train the model as follows:",
"cite_spans": [
{
"start": 21,
"end": 31,
"text": "(Su, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 131,
"end": 153,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "1. We use NLTK to split the original paper into sentences and label each sentence according to the gold extractive summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "2. Encode each sentence with the pre-trained RoBERTa-Large model, take the output of the last hidden layer as the feature representation, and convert the feature matrix to a fixed-size vector by average pooling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "3. TRAINING: Feed the obtained sentence vectors into the DGCNN-Extraction model (Figure 2) and perform binary classification on each sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 90,
"text": "(Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extraction-based Model",
"sec_num": "3.3"
},
{
"text": "Take the sigmoid output of the model as the importance score of each sentence; according to these scores, we extract the corresponding sentences from the paper as the extractive summary, with the total length of the summary limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFERENCE:",
"sec_num": "4."
},
{
"text": "(3) BERTSUMM (Liu, 2019): BERTSUMM is a BERT-based model designed for the extractive summarization task. Unlike with the DGCNN-Extraction model, because of the input-length limit of BERT we have to divide each paper into sections and treat each section as an independent sample; as a result, we get 17720 sections in total. Following the practice in the BERTSUMM paper, we insert a [CLS] token before each sentence and a [SEP] token after it, and the [CLS] token is used as a symbol to aggregate the features of its sentence. In each section we label the [CLS] tokens of sentences in the ground truth as 1 and the others as 0. We split the data into training and validation sets and train the model on the training data. Unfortunately, the F1-score on the validation data only peaked at 0.35. We think this is because the approach abandons the information between sections, and the assumption that sections are independent does not hold.",
"cite_spans": [
{
"start": 13,
"end": 24,
"text": "(Liu, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INFERENCE:",
"sec_num": "4."
},
{
"text": "According to the performance of these three models on the validation set, we choose the DGCNN-Extraction model as the baseline extraction model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFERENCE:",
"sec_num": "4."
},
{
"text": "Abstraction and extraction models have their own advantages and disadvantages. The advantage of an abstraction model is that it can produce expressions different from the original text and better summarize it, and the generated summary is more fluent than an extracted one. Its disadvantage is that the generated content cannot be controlled, and there is no guarantee that the model covers all the key points of the original text. An extraction model, by contrast, captures most of the important information directly through the scores of the original sentences. Therefore, this paper uses an ensemble method to reorganize the summaries predicted by the abstraction model and the extraction model so as to further improve their accuracy. The specific implementation is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Method",
"sec_num": "3.4"
},
{
"text": "1. Self drop: since there are overlapping texts between sessions, the results predicted by the model may contain repeated text. We first split the predicted summary into sentences and judge sentence similarity by the ROUGE metric. Sentences whose similarity exceeds a certain threshold (ROUGE-1 F1 + ROUGE-2 F1 > 0.8) are judged to be repeated; the longest one (we assume that a longer sentence carries more information) is kept as the most representative sentence, and the rest are dropped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Method",
"sec_num": "3.4"
},
{
"text": "2. Sentence reorder: reorder the abstracted and extracted sentences according to the session. For each session we predict summaries with both the abstraction and the extraction model, and we order them as: sess_1: abs_{1,1}, ..., abs_{1,n_1}, ext_{1,1}, ..., ext_{1,m_1}; sess_2: abs_{2,1}, ..., abs_{2,n_2}, ext_{2,1}, ..., ext_{2,m_2}; ...; sess_m. Because a sentence predicted by the abstraction model is usually a summarizing sentence, we put it before the extracted sentences of the same session.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Method",
"sec_num": "3.4"
},
{
"text": "3. Recall: we filter the combined summaries again and recall the most useful sentences for the final result. To do this, we use the TextRank algorithm and drop sentences whose scores are below 0.9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Method",
"sec_num": "3.4"
},
{
"text": "After these steps, the predictions from the different models are well cleaned and merged, and the most important sentences are selected from the candidate summaries to form the final result. The experiments show that the ensemble method brings a significant improvement over a single model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Method",
"sec_num": "3.4"
},
{
"text": "We extract the text from the PDF of each paper using Science Parse (https://github.com/allenai/science-parse). There is a lot of dirty text in the data, which makes the model hard to converge during training, so we clean the text as follows: (1) replace URL links in the text with [url]; (2) remove special characters from the text, keeping only some common symbols; (3) merge broken words and remove words that are not in the word list.",
"cite_spans": [
{
"start": 286,
"end": 291,
"text": "[url]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We split the text of each paper into sessions; by testing, the best session size is 1024 words. The buffer size is 128 words, which we think is enough to keep the context. Each ground-truth sentence is set as part of the target summary of one of the sessions, according to the location of the most similar sentence in the original paper. We use NLTK to count the words of a session. For the pre-trained model, all input sessions are truncated to a maximum of 1024 words, and their target summaries are truncated to a maximum of 128 words. Based on the test results, the best generation model is built as follows: the model is fine-tuned from the pegasus-arxiv pre-trained model released by Google, which has about 570 million parameters, for 20 epochs with a learning rate of 2e-5. The batch size is 8, and the model is trained on four V100 (32G) GPUs for about 20 hours. For the DGCNN-Extraction model, all input papers are truncated to a maximum of 400 sentences (1024-d vectors), and 7 DGCNN layers (with dilation rates 1, 2, 4, 8, 16, 1, 1) are stacked. We compile the model with the Adam optimizer (learning rate 0.001) and train it for 20 epochs on the training set with a batch size of 32. DGCNN is a lightweight model that takes only 30 minutes to train.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Following the method mentioned above, we ensemble the summaries obtained from the best generation model and the best extraction model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We test three different models on the test set: (1) SBAS_extract: the model that only includes the DGCNN-Extraction model for summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": "5"
},
{
"text": "(2) SBAS_abstract: the model that uses PEGASUS as the base abstractive model to generate the summary. (3) SBAS_ensemble: the ensemble of SBAS_extract and SBAS_abstract. We compare the final test scores on all metrics with those of other teams on the leaderboard in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "5"
},
{
"text": "The results show that both the SBAS_abstract and SBAS_extract models are competitive. For SBAS_abstract, the recall score is much lower than the F1-score; this might be because the summaries generated by SBAS_abstract are shorter than the ground truth. We limit the length of the summary extracted by SBAS_extract to 900 words and obtain an excellent result compared with other teams. The result of SBAS_ensemble is far superior to the other models; we believe this is because our ensemble method not only removes the redundant sentences in the combined summary but also lets the output of SBAS_extract supplement the result of SBAS_abstract well. We sampled some of the summaries for manual evaluation and found that the summaries generated by our method contain highly readable sentences and cover much of the important information of the paper, but the transitions from sentence to sentence are not coherent and the fluency of the summary is insufficient. We will try to improve the fluency of the summaries in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": "5"
},
{
"text": "Pre-trained models such as BERT and GPT are clearly effective in all NLP fields, but they cannot deal with long text due to their huge number of parameters and computation. In this paper, we propose a session-based ensemble model for the LongSumm task. In our method, the document is first segmented into sessions, with some context semantics preserved. Then the labels corresponding to each session are matched by a specific algorithm to generate a new dataset. The extraction and abstraction models are trained on the new dataset, and the final summary is obtained by merging the results of the different models through the ensemble method. The proposed method considers the context of the text as much as possible while limiting memory growth, so that the summary predicted by the model is more coherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In addition, a method for merging two different types of summarization models is proposed for the first time: the predictions of the different models are deduplicated and recombined, bringing the results closer to the real summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our model achieved the best performance on all metrics of this task, but there is still room for improvement. The current approach compresses the input and output to make the task fit the model, whereas the better design would make the model fit the task. One of the biggest problems is how to reduce the resource consumption of Transformer-structured models. The BIGBIRD model proposed by Google alleviates this problem through its sparse attention mechanism, but in our tests, because the decoding part of the model still uses full attention, BigBird does not solve the problem of long text output, and it is difficult to directly generate a complete long summary from a scientific document in this task. Therefore, future research can focus on how to decode longer text, so that language models can adapt to more NLP scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language modeling with gated convolutional networks",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Yann N Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2017,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "933--941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated con- volutional networks. In International conference on machine learning, pages 933-941. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning, pages 1243-1252. PMLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Lev",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Achiya",
"middle": [],
"last": "Jerbi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Konopnicki",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01351"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks. arXiv preprint arXiv:1906.01351.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic scientific document summarization",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yafei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Siya",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Xingyuan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Scholarly Document Processing",
"volume": "2020",
"issue": "",
"pages": "225--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Li, Yang Xie, Wei Liu, Yinan Liu, Yafei Jiang, Siya Qi, and Xingyuan Li. 2020. Cist@ cl-scisumm 2020, longsumm 2020: Automatic scientific docu- ment summarization. In Proceedings of the First Workshop on Scholarly Document Processing, pages 225-234.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fine-tune bert for extractive summarization",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10318"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Liu. 2019. Fine-tune bert for extractive summa- rization. arXiv preprint arXiv:1903.10318.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scientific document summarization for laysumm'20 and longsumm'20",
"authors": [
{
"first": "Sayar Ghosh",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Pinnaparaju",
"suffix": ""
},
{
"first": "Risubh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sayar Ghosh Roy, Nikhil Pinnaparaju, Risubh Jain, Manish Gupta, and Vasudeva Varma. 2020. Scientific document summarization for laysumm'20 and longsumm'20.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dgcnn: a reading comprehension model based on cnn",
"authors": [
{
"first": "Jianlin",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianlin Su. 2018. Dgcnn: a reading comprehension model based on cnn.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discourse-aware neural extractive text summarization",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.14142"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Discourse-aware neural extractive text summarization. arXiv preprint arXiv:1910.14142.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Big bird: Transformers for longer sequences",
"authors": [
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Guru",
"middle": [],
"last": "Guruganesh",
"suffix": ""
},
{
"first": "Avinava",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Ainslie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Ontanon",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Ravula",
"suffix": ""
},
{
"first": "Qifan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.14062"
]
},
"num": null,
"urls": [],
"raw_text": "Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. arXiv preprint arXiv:2007.14062.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "SBAS: a session based automatic summarization model"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "DGCNN-Extraction model structure"
},
"TABREF0": {
"type_str": "table",
"text": "Method | rouge1-f | rouge1-r | rouge2-f | rouge2-r | rougeL-f | rougeL-r",
"html": null,
"num": null,
"content": "<table><tr><td>BART</td><td>0.1921</td><td>0.1122</td><td>0.0533</td><td>0.0310</td><td>0.1062</td><td>0.0620</td></tr><tr><td>Sroberta</td><td>0.4621</td><td>0.4377</td><td>0.1280</td><td>0.1212</td><td>0.1701</td><td>0.1610</td></tr><tr><td>Sharingan</td><td>0.5031</td><td>0.5164</td><td>0.1706</td><td>0.1744</td><td>0.2114</td><td>0.2162</td></tr><tr><td>Summaformers</td><td>0.4938</td><td>0.4390</td><td>0.1686</td><td>0.2498</td><td>0.2138</td><td>0.1898</td></tr><tr><td>CNLP-NITS</td><td>0.5096</td><td>0.5234</td><td>0.1538</td><td>0.1581</td><td>0.1951</td><td>0.2008</td></tr><tr><td>MTP</td><td>0.4858</td><td>0.4919</td><td>0.1330</td><td>0.1348</td><td>0.1697</td><td>0.1714</td></tr><tr><td>SBAS abstract</td><td>0.5080</td><td>0.4755</td><td>0.1740</td><td>0.1634</td><td>0.2156</td><td>0.2016</td></tr><tr><td>SBAS extract</td><td>0.5275</td><td>0.5415</td><td>0.1711</td><td>0.1747</td><td>0.2209</td><td>0.2262</td></tr><tr><td>SBAS ensemble</td><td>0.5507</td><td>0.5660</td><td>0.1945</td><td>0.1998</td><td>0.2295</td><td>0.2357</td></tr><tr><td colspan=\"7\">Table 1: Result for Long Scientific Document Summarization 2021</td></tr></table>"
}
}
}
}