| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:36:08.878927Z" |
| }, |
| "title": "Overview and Insights from the Shared Tasks at Scholarly Document Processing 2020: CL-SciSumm, LaySumm and LongSumm", |
| "authors": [ |
| { |
| "first": "Muthu", |
| "middle": [ |
| "Kumar" |
| ], |
| "last": "Chandrasekaran", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Feigenblat", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "guyf@il.ibm.com" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "hovy@cmu.edu" |
| }, |
| { |
| "first": "Abhilasha", |
| "middle": [], |
| "last": "Ravichander", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Shmueli-Scheuer", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Anita", |
| "middle": [], |
| "last": "De Waard", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present the results of three Shared Tasks held at the Scholarly Document Processing Workshop at EMNLP2020: CL-SciSumm, LaySumm and LongSumm. We report on each of the tasks, which received 18 submissions in total, with some submissions addressing two or three of the tasks. In summary, the quality and quantity of the submissions show that there is ample interest in scholarly document summarization, and the state of the art in this domain is at a midway point between being an impossible task and one that is fully resolved.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present the results of three Shared Tasks held at the Scholarly Document Processing Workshop at EMNLP2020: CL-SciSumm, LaySumm and LongSumm. We report on each of the tasks, which received 18 submissions in total, with some submissions addressing two or three of the tasks. In summary, the quality and quantity of the submissions show that there is ample interest in scholarly document summarization, and the state of the art in this domain is at a midway point between being an impossible task and one that is fully resolved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Scientific documents constitute a rich field for different tasks such as Reference String Parsing, Citation Intent Classification, Summarization and more. The constantly increasing number of scientific publications raises additional issues such as making these publications accessible to non-expert readers, or, on the other hand, to experts who are interested in a deeper understanding of the paper without reading a paper in full.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For this year's Scholarly Document Processing workshop (Chandrasekaran et al., 2020) at EMNLP 2020, we proposed three tasks: CL-SciSumm, Lay-Summ and LongSumm to improve the state of the art for different aspects of scientific document summarization.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 84, |
| "text": "(Chandrasekaran et al., 2020)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The CL-SciSumm task was introduced in 2014 and aims to explore the summarization of scientific research in the domain of computational linguistics research. It encourages the incorporation of new kinds of information in automatic scientific paper summarization, such as the facets of research information being summarized in the research paper. CL-SciSumm also encourages the use of citing mini-summaries written in other papers, by other scholars, when they refer to the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "LaySumm (Lay Summarization) addresses the issue of making research results available to a larger audience by automatically generating 'Lay Summaries', or summaries that explain the science contained within the paper in laymen's terms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Finally, the LongSumm (Long Scientific Document Summarization) task focuses on generating long summaries of scientific text. It is fundamentally different than generating short summaries that mostly aim at teasing the reader. The LongSumm task strives to learn how to cover the salient information conveyed in a given scientific document, taking into account the characteristics and the structure of the text. The motivation for LongSumm was first demonstrated by the IBM Science Summarizer system, (Erera et al., 2019 ) that retrieves and creates long summaries of scientific documents 1 . While Erera et al. (2019) studied some use-cases and proposed a summarization approach with some human evaluation, the authors stressed the need of a large dataset that will unleash the research in this domain. LongSumm aims at filling this gap by providing large dataset of long summaries which are based on blogs written by Machine Learning and NLP experts.", |
| "cite_spans": [ |
| { |
| "start": 499, |
| "end": 518, |
| "text": "(Erera et al., 2019", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 597, |
| "end": 616, |
| "text": "Erera et al. (2019)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we present the tasks, datasets, description of the participating systems, and provide their results and insights from shared tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The CL-SciSumm Shared Task was launched in 2014 as a pilot task aimed at bringing together the summarization community to address challenges in scientific communication summarization. Over time, the Shared Task has spurred the creation of new resources (e.g., ), tools and evaluation frameworks. As a consequence of this wide interest, CL-SciSumm 2020 is jointly organised with the inaugural editions of two other Scientific Summarization shared tasks, all of which were held as part of SDP 2020 workshop at EMNLP 2 ) (Chandrasekaran et al., 2020) A pilot CL-SciSumm task was conducted at TAC 2014, as part of the larger BioMedSumm Task 3 . In 2016, a second CL-Scisumm Shared Task (Jaidka et al., 2018) was held as part of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL) workshop at the Joint Conference on Digital Libraries (JCDL 2016). From 2017 (Jaidka et al., 2017 (Jaidka et al., , 2019 through 2019 CL-SciSumm was colocated with BIRNDL at the annual ACM Conference on Research and Development in Information Retrieval (ACM SIGIR 2017 .", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 547, |
| "text": "(Chandrasekaran et al., 2020)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 682, |
| "end": 703, |
| "text": "(Jaidka et al., 2018)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 930, |
| "end": 950, |
| "text": "(Jaidka et al., 2017", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 951, |
| "end": 973, |
| "text": "(Jaidka et al., , 2019", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1084, |
| "end": 1121, |
| "text": "Information Retrieval (ACM SIGIR 2017", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In this section we provide the results and insights from CL-SciSumm 2020.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We built the CL-SciSumm corpus by randomly sampling research papers (Reference papers, RPs) from the ACL Anthology corpus and then downloading the citing papers (CPs) for those which had at least ten citations. The prepared dataset then comprised annotated citing sentences for a research paper, mapped to the sentences in the RP which they referenced. Summaries of the RP were also included.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "The CL-SciSumm 2020 corpus consisted of 40 annotated RPs and their CPs. These are the same as described in our overview paper in CL-SciSumm 2019 and 2018. The test set was blind. We reused the blind test we used from CL-SciSumm 2018 and 2019 since we want to have a comparable evaluation CL-SciSumm 2020 systems. After 3 iterations, we now release the gold labels for the 2018 test-set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "For details of the general procedure followed to construct the CL-SciSumm corpus, and changes made to the procedure in CL-SciSumm-2016, please see (Jaidka et al., 2018) . In 2017, we made revisions to the corpus to remove citances from passing citations. These are described in (Jaidka et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 168, |
| "text": "(Jaidka et al., 2018)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 278, |
| "end": 299, |
| "text": "(Jaidka et al., 2017)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Annotation. Given each RP and its associated CPs, the annotation group was instructed to find citations to the RP in each CP. Specifically, the citation text, citation marker, reference text, and discourse facet were identified for each citation of the RP found in the CP. The corpus has 40 annotated RPs, exclusive of 1000 auto-annotated RPs added in CL-SciSumm 2019. For CL-SciSumm-20 we encourage participants to use out-of-domain data (i.e., scientific document corpora from papers outside of the ACL anthology corpora; e.g., BIGPATENT (Sharma et al., 2019) ) to bootstrap training using transfer learning. From 2019 onward, Task 2, training data (summaries) has been augmented with the SciSummNet corpus .", |
| "cite_spans": [ |
| { |
| "start": 540, |
| "end": 561, |
| "text": "(Sharma et al., 2019)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "CL-SciSumm defined two serially dependent tasks that participants could attempt, given a canonical training and testing set of papers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Given: A topic consists of a Reference Paper (RP) and ten or more Citing Papers (CPs) that all contain citations to the RP. In each CP, the text spans (i.e., citances) have been identified that pertain to a particular citation to the RP. Additionally, the dataset provides three types of summaries for each RP:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "\u2022 the abstract, written by the authors of the research paper. \u2022 the community summary, collated from the reference spans of its citances. \u2022 a human-written summary, written by the annotators of the CL-SciSumm annotation effort. Task 1A: For each citance, identify the spans of text (cited text spans) in the RP that most accurately reflect the citance. These are of the granularity of a sentence fragment, a full sentence, or several consecutive sentences (no more than 5). Task 1B: For each cited text span, identify what facet of the paper it belongs to, from a predefined set of facets. Task 2: Finally, generate a structured summary of the RP from the cited text spans of the RP. The length of the summary should not exceed 250 words. This was an optional bonus task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "An automatic evaluation script was used to measure system performance for Task 1A, in terms of the sentence ID overlaps between the sentences identified in system output, versus the gold standard created by human annotators. The raw number of overlapping sentences were used to calculate the precision, recall and F 1 score for each system. We followed the approach in most SemEval tasks in reporting the overall system performance as its micro-averaged performance over all topics in the blind test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
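The micro-averaged evaluation described above can be sketched in a few lines. This is an illustrative reconstruction, not the official evaluation script; the function name and the topic-to-sentence-ID data shape are assumptions.

```python
def micro_prf(system, gold):
    """system, gold: dicts mapping a topic ID to the set of selected sentence IDs.

    Illustrative sketch of micro-averaged precision/recall/F1 over all topics;
    not the official CL-SciSumm evaluation script.
    """
    tp = fp = fn = 0
    for topic, gold_ids in gold.items():
        sys_ids = system.get(topic, set())
        tp += len(sys_ids & gold_ids)  # sentence IDs shared with the gold standard
        fp += len(sys_ids - gold_ids)  # selected but not in gold
        fn += len(gold_ids - sys_ids)  # in gold but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: micro_prf({"T1": {1, 2, 3}}, {"T1": {2, 3, 4}}) -> (2/3, 2/3, 2/3)
```

Pooling the counts across all topics before computing the ratios is what makes the average "micro": topics with more gold sentences contribute proportionally more.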
| { |
| "text": "Additionally, we calculated lexical overlaps in terms of the ROUGE-2 scores (Lin, 2004) between the system output and the human annotated gold standard reference spans.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 87, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "We have been reporting ROUGE score since CL-SciSumm-17, for Tasks 1a and Task 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "Task 1B was evaluated as a proportion of the correctly classified discourse facets by the system, contingent on the expected response of Task 1A.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "As it is a multi-label classification task, we report classification performance in terms of precision, recall and F 1 scores averaged over the 4 classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "Task 2 was optional, and also evaluated using the ROUGE-2 between the system output and three types of gold standard summaries of the research paper: the reference paper's abstract, a community summary, and a human summary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "We provisioned the evaluation scripts and goldtest-set CL-SciSumm Github repository 4 . For transparency we published all the system runs submitted by the participants. The participants then ran the evaluation and reported the results back to us. We collate and publish these as the CL-SciSumm'20 official result.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "Following teams submitted systems for evaluation for Task 1a and 1b. Their systems are described in their cited systems papers: NJUST , CIST (Li et al., 2020) , AUTH , CiteQA (Umapathy et al., 2020) , IIITBH-IITP (Reddy et al., 2020), IITP-AI-NLP-ML (Mishra et al., 2020), MLU (Huang and Krylova, 2020), MLUHW (Boltze et al., 2020), UniHD (Aumiller et al., 2020) , NLP-PINGAN-TECH (Chai et al., 2020) Following teams submitted systems for evaluation on Task 2 also which is an optional bonus task: AUTH , CIST (Li et al., 2020) , IIITBH-IITP (Reddy et al., 2020), IITP-AI-NLP-ML (Mishra et al., 2020) Official evaluation results on these systems is presented in the next section. 4 github.com/WING-NUS/scisumm-corpus", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 158, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 175, |
| "end": 198, |
| "text": "(Umapathy et al., 2020)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 339, |
| "end": 362, |
| "text": "(Aumiller et al., 2020)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 381, |
| "end": 400, |
| "text": "(Chai et al., 2020)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 510, |
| "end": 527, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 579, |
| "end": 600, |
| "text": "(Mishra et al., 2020)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 680, |
| "end": 681, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems Overview", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Out of the 11 participants systems, 8 were able complete the final evaluation correctly. We have omitted the remaining 3 teams from our official listings in Tables 1 and 2 with evaluations on the blind test set. However, their systems and evaluations on the development set are published in their respective system papers. Although we allowed teams to submit an unlimited number of runs since this is an offline evaluation on a blind test set, we only tabulate the results from the top 5 runs when a large of runs are submitted.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 157, |
| "end": 171, |
| "text": "Tables 1 and 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Task 1a. (Table 1 )NLP-PINGAN-TECH (Chai et al., 2020) achieve the best result on Task 1a when evaluated using sentence overlaps and ngram overlaps using ROUGE SU4. All top 5 of their runs outperforms other systems. Runs from UniHD's system are a close second.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 54, |
| "text": "(Chai et al., 2020)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "(Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Task 1b. ( Table 2) We note that the runs that perform the best on Task1a are not the same that top performance in Task 1b though Task 1b is evaluated conditioned on Task 1a. CMU (Umapathy et al., 2020) 's and CIST (Li et al., 2020) 's systems do consistently well on this task and are the top two performers respectively. We note that UniHD's systems, intersection 2 field and intersection 3 field do well on both Task 1a and 1b though they do not top the rankings on either task.", |
| "cite_spans": [ |
| { |
| "start": 179, |
| "end": 202, |
| "text": "(Umapathy et al., 2020)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 215, |
| "end": 232, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 19, |
| "text": "Table 2)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Task 2. Four of the eleven teams also participated in the bonus summarization task. On the summarization task AUTH does well when evaluated against both abstract and human written summaries. They score 0.41 on ROUGE-2 on Abstracts which is comparable to the state-of-the-art for general summarization. However, their system does not do well on community summaries, which is dependant on Task 1a. IIITBH-IITP (Reddy et al., 2020)'s systems consistently perform better than the rest on community summaries. CIST (Li et al., 2020) 's systems are second and are comparable to the top performing system in this category. Notably CIST's runs do well on both human and community summaries and second only to AUTH on abstracts. This type of systems are the intended goal of the CL-SciSumm shared task.", |
| "cite_spans": [ |
| { |
| "start": 510, |
| "end": 527, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "To improve public understanding of science, researchers are increasingly asked by funders and publishers to outline the scope of their research, described in scientific research articles, by writing a summary for a lay audience. We call this a Lay Summary: a text of about 70-100 words intended for a non-technical audience that explains, succinctly and without using technical jargon, the overall scope, goal, and potential impact expressed in a scientific paper. The Lay Summarization task provides data for and evaluates automaticallyproduced Lay Summaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Overview", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The corpus comprised 572 author-generated lay summaries from a multidisciplinary collection of journals in Materials Science, Archaeology, Hepatology and Artificial intelligence, together with their corresponding abstracts and full text articles, provided by Elsevier. A small sample dataset can be found on the GitHub repository 5 ). A training corpus of 37 full-text papers and abstracts was 5 https://github.com/WING-NUS/ scisumm-corpus/blob/master/README_ Laysumm.md#sample-dataset made available to enable evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "The Lay Summary Task requires systems to generate a lay summary, given a full-text paper and its abstract. This summary should be representative of the content, comprehensible, and interesting to a lay audience. In addition to their results, system builders were asked to provide an automatically generated lay summary of their own systemdescription paper. The task was run on CodaLabs 6 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "We measured summary quality using the ROUGE measure (Lin, 2004) . We used the Py-Rouge 0.1.3 package, which is built on the ROUGE 1.5.5 toolkit with its standard parameters setting 7 . We report both Recall and F-Measure for ROUGE-1, ROUGE-2, and ROUGE-L. The evaluation results were displayed on a public leaderboard on Co- dalab 8 . In addition, a number of automatically generated lay summaries underwent human evaluation by science journalists and communicators for comprehensiveness, legibility, and interest.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 63, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 331, |
| "end": 332, |
| "text": "8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "We received eight submissions. We briefly describe the approaches taken by the participating teams: AUTH (Gidiotis et al., 2020) -The authors use a summarization method utilizing PEGASUS to compress and rewrite the abstract of a given article to generate a lay summary. The PEGASUS model is fine-tuned to generate lay summaries, using the article abstract as input and the lay summary as the reference for training the summarization model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems Overview", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Dimsum (Tiezheng Yu and Fung, 2020) -The system generates a summary by using a joint extractive and abstractive summarization approach, based on the intuition that lay summaries are grounded in sentences that occur within the scientific document. The abstractive summaries are converted to extractive labels, by selecting sentences that maximize the rouge score with the reference summary. The BART encoder (Lewis et al., 2020) is then used to make sentence representations and the model is trained with both extractive and abstractive summarization objectives. Seungwon (Kim, 2020) -The system built by the team from Georgia Tech primarily uses the PEGA-SUS model to generate lay summaries, combining this with a BERT-based extractive summarization model. After generating a lay summary using PEGASUS, if the generated summary is shorter than a specified length, the extractive model is used to identify candidate sentences in the document that can be included in the summary. Sentences are only included in the summary by the extractive model if they are judged sufficiently readable, according to a sentence readability metric defined by the authors. IIITBH-IITP (Reddy et al., 2020) -The authors use an extractive sentence classification method. They develop an unsupervised approach, selecting sentences from the document using variants of the maximum marginal relevance (MMR) metric. Summaformers (Roy et al., 2020) -This system utilizes the BART model (Lewis et al., 2020) to generate summaries. BART is trained on the CNN/Dailymail summarization dataset (See et al., 2017) and fine-tuned on the Laysumm corpus. IITP-AI-NLP-ML (Mishra et al., 2020) This method uses a standard encoder-decoder framework for abstractive summarization. The system is based on BERT fine-tuned on the CNN/Dailymail dataset (Liu and Lapata, 2019a) , with a decoder consisting of six transformer layers. DUCS (Chaturvedi et al., 2020) This system uses a two-stage pipeline. 
In the first phase, extractive summarization is performed, and relevant sentences are selected from the introduction, discussion and conclusion of the article. The abstract, and the extracted sentences from the introduction, discussion and conclusion are summarized using the BART model (Lewis et al., 2020) , and the summaries are concatenated.", |
| "cite_spans": [ |
| { |
| "start": 407, |
| "end": 427, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1459, |
| "end": 1479, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1562, |
| "end": 1580, |
| "text": "(See et al., 2017)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1809, |
| "end": 1832, |
| "text": "(Liu and Lapata, 2019a)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 2245, |
| "end": 2265, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems Overview", |
| "sec_num": "3.2" |
| }, |
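Several of the extractive approaches above, notably IIITBH-IITP's, rely on maximal marginal relevance: greedily picking sentences that are relevant to a query while penalizing redundancy with sentences already chosen. A minimal sketch of generic MMR selection follows; the Jaccard similarity, the lambda value, and the function names are illustrative choices, not any team's actual implementation.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two sentences (illustrative choice)."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def mmr_select(sentences, query, k=3, lam=0.7):
    """Greedy MMR: balance relevance to the query against redundancy with
    already-selected sentences. lam=1.0 is pure relevance ranking."""
    selected = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        def score(s):
            relevance = jaccard(s, query)
            redundancy = max((jaccard(s, t) for t in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

The redundancy term is what distinguishes MMR from plain top-k relevance ranking: a highly relevant sentence is skipped if it mostly repeats one already in the summary.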
| { |
| "text": "Taking these metrics into account, the top 3 systems are: #1 Seungwon Kim, #2 HYTZ, and #3 Summaformers. Next to the formal ROUGE scores, a subset of documents was evaluated by a team of domain experts. Gratifyingly, this human assessment confirmed this order of the results. Overall, the majority of submitted Lay Summaries was easy to read, though in some cases there were odd errors (e.g., inserted ellipses). The winning systems all produced legible and accessible summaries. Four of the papers complied with the request that the systems generate a Lay Summary of their own paper, using their own tools. This helps both to explain the concept of a Lay Summary and offers insights into the output of the software; hopefully it also helps explain this work to a non-specialised audience. For examples, please see the Lay Summary Submissions elsewhere in this Anthology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A comparison of Lay Summaries against typical paper abstracts (Technical Summaries) reveals several systematic differences. These include:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u2022 Lexical specialization: This category includes both domain-based terminological difference (e.g., \"renal\" vs \"kidney\" failure, \"high-octane\" vs \"powerful\" gasoline) and conceptual specificity / specialization (e.g., \"bubblesort\" vs \"sorting\", \"kNN\" vs \"clustering\"). Used at even the same level of specificity, the expert uses domainspecialist words. It is well known that experts' Basic Level categories (in the sense of Prototype Theory) (Rosch, 1973) is one level lower/more specific than normal speakers' categories. \u2022 Syntactic complexity: This includes morecomplex descriptive NPs vs simpler NPs across more sentences, and longer and deeper sentence parse trees vs shorter and more straightforward ones. Generally an expert author's abstract has no direct verb forms and no personal pronouns, while the lay summary has nothing but. Direct quotes typically make a lay summary read like journalism. \u2022 Epistemic complexity: Expert text includes more (and more-precise) hedging vs simper, more absolutist claims, and fewer evaluative interjections (\"surprising\", \"lovely\", \"elegant\"). \u2022 Content detail: Generally a lay content is more general, wider-ranging, and includes a historically longer but much shallower historical overview compared to the Related Work section of an expert text. Typically there are more examples in the lay text and the examples employ out-of-domain scenarios/entities. \u2022 Author presence: In lay summaries there is generally more explicit 'author foregrounding', leading to the personalization of the knowledge source. The opposite in expert summaries has been argued as suggesting there statement of known facts, a tactic that scientists often use. As described in the previous section, only a few systems implemented some of these strategies explicitly. Generally the hope was that the training data will allow a sufficiently powerful machine learning model to learn what to do by itself. 
The results do not really bear out this hope. We believe there is some very interesting and fruitful analysis to be done in order to create machine-learning models that are sufficiently rich to produce truly interesting and readable Lay Summaries.", |
| "cite_spans": [ |
| { |
| "start": 442, |
| "end": 455, |
| "text": "(Rosch, 1973)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Existing work on scientific document summarization focuses on generating short, abstract-like summaries. While this might be appropriate when summarizing news articles, such summaries cannot cover all the salient information conveyed in a scientific paper. Writing longer summaries requires deep understanding and domain expertise, as can be found in research blogs. To address this point, the LongSumm task opted to leverage blog posts created by researchers in the NLP and Machine learning communities that summarize scientific articles and use these posts as reference summaries (Boni et al., 2020) . The task is, given a scientific document, generate a 600 words summary.", |
| "cite_spans": [ |
| { |
| "start": 582, |
| "end": 601, |
| "text": "(Boni et al., 2020)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Overview", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The corpus for this task includes a training set that consists of 1705 extractive summaries, and 531 abstractive summaries of NLP and Machine Learning scientific papers. The extractive summaries are based on video talks from associated conferences , and contain up to 30 sentences. The abstractive summaries are blog posts created by NLP and ML researchers, with length varied between 100-1500 words, an average of 779 (\u00b1460) words, and an average of 31 (\u00b118) sentences in a summary. In addition, we created a (blind) test set of 22 abstractive summaries for eval- uating the submissions. The corpus can be found on LongSumm GitHub repository 9 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "We measured summarization quality using the ROUGE measure (Lin, 2004) . The evaluation script utilizes the rouge-score 10 python package which is designed to replicate results from the original perl package with its standard parameters. We report both Recall and F-Measure of ROUGE-1, ROUGE-2, and ROUGE-L. The evaluation was executed on a public leaderboard 11 , forked from EvalAI (Yadav et al., 2019) , an open-source AI challenge hosting platform. In addition, 6 randomly selected summaries are selected from the top performing systems, to undergo human evaluation. The evaluation focuses on informativeness and readability.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 69, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 383, |
| "end": 403, |
| "text": "(Yadav et al., 2019)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.1.2" |
| }, |
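The ROUGE scoring described above can be made concrete with a short sketch. This is not the rouge-score package used in the official evaluation (which also applies stemming and computes ROUGE-L); it is a simplified n-gram overlap computation, for intuition only, with hypothetical function and variable names.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Simplified ROUGE-N: recall, precision and F-measure over n-gram
    overlap between a candidate and a reference token list. Illustrative
    only; the official evaluation uses the rouge-score package."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())          # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"recall": recall, "precision": precision, "f1": f1}

# Four of five unigrams match in both directions.
scores = rouge_n("the model extracts salient sentences".split(),
                 "the model selects salient sentences".split())
```

Averaging such ROUGE-1, ROUGE-2 and ROUGE-L scores per system is how the task ranking below is produced.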
| { |
| "text": "Nine systems participated in the task, with a total of 100 submissions. We will briefly describe eight of them, that submitted a research report describing their approach. ARTU (El-Ebshihy et al., 2020) -The system generates an extractive summary which is based on the papers' abstract. Each sentence from the abstract becomes a query to an index that contains all papers' paragraphs. For each abstract sentence, a cluster that contains the top retrieved paragraphs is created. The final set of sentences is chosen based on the sentences LexRank value, their discourse (based on the section they belong to), and the size of the cluster. AUTH (Gidiotis et al., 2020) -The authors propose an extractive summarization method that utilizes DANCER, a divide and conquer approach for long document summarization. DANCER (Gidiotis and Tsoumakas, 2020) helps to select key sections in the document to be summarized separately, for that each sentence in the article is classified to a section type. Then using PEGASUS based Transformer they are combined together to form an complete article summary. CIST BUPT (Li et al., 2020) -The system supports both an extractive and abstractive summaries using deep-learning architectures. For extractive summaries, they used RNN to compress and represent a sentence, and build a sentences relation graphs which are fed into the Graph Convolutional Network (GCN), and Graph Attention Network (GAN) to create a summary. For abstractive summaries, they used the gap-sentence method in (Zhang et al., 2015) to combine and transform all the data, and then T5 (Raffel et al., 2019) , a transformer-liked pre-trained to fine-tune and generation. GUIR (Sotudeh et al., 2020 ) -A summarization method that utilizes BERT summarizer (Liu and Lapata, 2019b) . The idea is based on multi-task learning heuristic, in which two tasks are optimized. The first is a binary classification task, for sentence selection. 
The second is section prediction, in which the model predicts section labels associated with input sentences. The extractive network is then trained to optimize both tasks. The authors also propose an abstractive summarizer based on BART (Lewis et al., 2020) (Lloyd, 1982) and DBScan (Ester et al., 1996) ). Then, each cluster is ranked based on its centrality. Finally, salient sentences are selected from each cluster, taking into account cluster score, until the desired length of the summary. Monash-Summ (Ju et al., 2020)-The system, inspired by SummPip (Zhao et al., 2020) , proposes an unsupervised approach that leveraging linguistic knowledge to construct sentence graph. The graph nodes, which represent sentences, are further clustered. This enables the control of the summary length. Finally, for each cluster they considered the key phrases and discourse and created an abstractive sentence. Summaformers (Roy et al., 2020) -To handle long documents, each section was allocated with a budget based on its contribution in the training data. Each section was summarized separately, using SummaRuNNer (Nallapati et al., 2017) , a neural extractive summarizer. Table 4 reports the results of the 9 participating systems, 8 of them submitted a research report describing their system 12 . In order to compare between the systems we considered an average score of ROUGE-1, ROUGE-2, and ROUGE-L. Although some of the systems developed an abstractive variant, the highest ROUGE scores were obtained by leveraging extractive summarization techniques. The only system that reported abstrative summarization results, in the official leaderbaord, is Monash-Summ. Most of the systems except ARTU and IITP-AI-NLP-ML employ supervised learning approaches. The system that achieved the highest ROUGE average score is GUIR, with their multi-task learning heuristic. Second best is Summaformers, with about 3% lower ROUGE score. 
In addition, we randomly selected 5 summaries from the top-3 ranked systems, namely: GUIR, Summaformers and IIITBH-IITP, to be evaluated by experts. We asked them to rank the systems w.r.t coverage, and readability. For coverage, we asked to take into account how well the summary contains important, informative information con-veyed in the text. For Readability, we asked to take into account fluency, coherence and grammatical correctness. From coverage perspective, all experts reported that GUIR summaries outperform the other systems, where the main issue with Summaformers and IIITBH-IITP is that they mainly cover the introduction and related works sections. From readability perspective, the experts pointed out on several issues such as out of context formulas and reference to tables and figures, sentences are not sorted by the paper discourse, and footnotes that are clearly not relevant such as URLs, author's information, etc.", |
| "cite_spans": [ |
| { |
| "start": 814, |
| "end": 844, |
| "text": "(Gidiotis and Tsoumakas, 2020)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1513, |
| "end": 1533, |
| "text": "(Zhang et al., 2015)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1585, |
| "end": 1606, |
| "text": "(Raffel et al., 2019)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1675, |
| "end": 1696, |
| "text": "(Sotudeh et al., 2020", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1753, |
| "end": 1776, |
| "text": "(Liu and Lapata, 2019b)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 2170, |
| "end": 2190, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 2191, |
| "end": 2204, |
| "text": "(Lloyd, 1982)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 2216, |
| "end": 2236, |
| "text": "(Ester et al., 1996)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 2491, |
| "end": 2510, |
| "text": "(Zhao et al., 2020)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 3043, |
| "end": 3067, |
| "text": "(Nallapati et al., 2017)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 3102, |
| "end": 3109, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Systems Overview", |
| "sec_num": "4.2" |
| }, |
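The cluster-then-select pattern used by several of the systems above (cluster sentences, rank clusters by centrality, pick salient sentences until the length budget is met) can be sketched minimally. This is a hedged illustration, not any participant's implementation: it uses bag-of-words cosine similarity and a greedy single-pass clustering in place of the stronger components (k-means, DBSCAN, sentence graphs) the systems actually used, and all names here are hypothetical.

```python
from collections import Counter
import math

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_and_select(sentences, sim_threshold=0.3, budget=600):
    """Greedy single-pass clustering, then centrality-based selection
    of one sentence per cluster until the word budget is exhausted."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    clusters = []  # each cluster is a list of sentence indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= sim_threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    summary, used = [], 0
    # rank clusters by size (a crude centrality proxy), then take the
    # most central sentence of each until the word budget is reached
    for c in sorted(clusters, key=len, reverse=True):
        best = max(c, key=lambda i: sum(cosine(vecs[i], vecs[j]) for j in c))
        words = len(sentences[best].split())
        if used + words > budget:
            break
        summary.append(sentences[best])
        used += words
    return summary
```

The budget cutoff is what ties such pipelines to the LongSumm 600-word target.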
| { |
| "text": "Scientific documents can be characterized as long, structured, utilizing technical language (i.e., formulas, tables, definitions, etc.). Analyzing the summaries and reports of the participated systems shows that most of them considered the structure of the document while generating summaries, by utilizing sections and document discourse. From a language perspective, some systems utilized language models that were pre-trained on scientific corpora. However, we believe that more efforts should be focused on handling mathematical definitions, formulas, tables, and the text surrounding them. For example, it is not clear whether these entities should be treated differently than narrative text and whether they should be considered as atomic units that should not be compressed further.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.4" |
| }, |
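One concrete way systems exploited document structure was to allocate the summary word budget across sections in proportion to how much each section contributed to reference summaries in the training data, as in the Summaformers submission. A minimal sketch, with hypothetical names and weights (Summaformers derives its weights from the corpus):

```python
def allocate_budget(section_weights, total_budget=600):
    """Split a total summary word budget across sections in proportion
    to each section's observed contribution weight. Illustrative only;
    the weights and section names here are made up."""
    total = sum(section_weights.values())
    return {s: round(total_budget * w / total)
            for s, w in section_weights.items()}

# e.g. methods contribute half of reference-summary content
budgets = allocate_budget({"introduction": 30, "method": 50, "results": 20})
```

Each section is then summarized independently under its own budget, and the pieces are concatenated.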
| { |
| "text": "Moreover, readability should play an important role in algorithmic design. Due to the nature of scientific documents and LongSumm length requirement, we believe this is even more challenging compared to traditional summarization tasks. This should have gotten more attention by the participating systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Finally, it was surprising to see that most evaluated systems are extractive and not abstractive. In the future we plan to extend this corpus, with the hope that LongSumm will help foster further research in this domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The First Scholarly Document Processing workshop (Chandrasekaran et al., 2020) comprise three summarization tasks, that each aimed to improve the state-of-the-art of scientific document summarization. In total, we received 18 submissions that addressed one or more of these tasks. It was a useful exercise to compare and contrast each of these summarization tasks, since they allowed researchers to explore their systems in different contexts, on different corpora, and for different audiences. Overall, what this efforts has shown is that the state of the art of summarizing scientific documents is neither in its nascency, nor a fully solved problem. We are interested in expanding task-based efforts in scholarly document summarization in future workshops, and investigating how scholarly documents differ or are similar to other texts. We are interested in collaborating with others in the NLP and AI-communities to investigate to what degree new technologies can be utilized and developed, to allow for a future where some of the work of tracking the scientific literature can be supported by machines. While CL-SciSumm has run for 6 editions and with the 2020 edition now set up two standard benchmark evaluation datasets for citation based summarization intended for use by researchers to aid in scientific discovery (breadth), LongSumm and LaySumm are inaugural tasks towards building systems that to improve understanding and dissemination of papers (depth).", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 78, |
| "text": "(Chandrasekaran et al., 2020)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://ibm.biz/sciencesum", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://2020.emnlp.org/ 3 http://www.nist.gov/tac/2014", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://competitions.codalab.org/ competitions/25516", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/guyfe/LongSumm 10 https://pypi.org/project/rouge-score/ 11 https://aieval.draco.res.ibm.com/ challenge/39/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our analysis ignores Wing since they did not submit a system report as required", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "CL-SciSumm would like to Microsoft Research Asia who funded the development of Cl-SciSumm corpus and the shared tasks from 2016 through 2018. We also thank Vasudeva Varma and colleagues at IIIT-Hyderabad, India and University of Hyderabad for their efforts in convening and organizing our annotation workshops in 2016-17. We acknowledge the advice of Min-Yen Kan, Hoa Dang, NIST, Lucy Vanderwende and Anita de Waard from the pilot stage of this task. We would also like to thank Rahul Jha and Dragomir Radev for sharing their software. We are grateful to Kevin B. Cohen and colleagues for their support, and for sharing their annotation schema and tools which have been indispensable for all six editions of CL-SciSumm.The LongSumm task organizers would like to thank the blog authors Shagun Sodhani, Patrick Emami, Adrian Colyer, Alexander Jung, Joseph Paul Cohen, Hugo Larochelle, Elvis Saravia and to ShortScience.org who generously allowed them to share the content as part of the LongSumm dataset.The LaySumm task organizers thank Darin McBeath at Elsevier who compiled the test and training data and Ilaria Meliconi, Virgina Prada Lopez and Victor Croes at Elsevier, who acted as domain experts for spot checking the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "UniHD@CL-SciSumm20: Citation Extraction as Search", |
| "authors": [ |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Aumiller", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Satya Almasian", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Hausner", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gertz", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dennis Aumiller, Satya Almasian, Philip Hausner, and Michael Gertz. 2020. UniHD@CL-SciSumm20: Ci- tation Extraction as Search. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A study of human summaries of scientific articles", |
| "authors": [ |
| { |
| "first": "Odellia", |
| "middle": [], |
| "last": "Boni", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Feigenblat", |
| "suffix": "" |
| }, |
| { |
| "first": "Doron", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Haggai", |
| "middle": [], |
| "last": "Roitman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Konopnicki", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Odellia Boni, Guy Feigenblat, Doron Cohen, Haggai Roitman, and David Konopnicki. 2020. A study of human summaries of scientific articles.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "NLP-PINGAN-TECH@CLSciSumm-20", |
| "authors": [ |
| { |
| "first": "Ling", |
| "middle": [], |
| "last": "Chai", |
| "suffix": "" |
| }, |
| { |
| "first": "Guizhen", |
| "middle": [], |
| "last": "Fu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Ni", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ling Chai, Guizhen Fu, and Yuan Ni. 2020. NLP- PINGAN-TECH@CLSciSumm-20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Overview of the first workshop on scholarly document processing (sdp)", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "K" |
| ], |
| "last": "Chandrasekaran", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Feigenblat", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ghosal", |
| "suffix": "" |
| }, |
| { |
| "first": "Hovy", |
| "middle": [ |
| "E P" |
| ], |
| "last": "Mayr", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Shmueli-Scheuer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "De Waard", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the First Workshop on Scholarly Document Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. K. Chandrasekaran, G. Feigenblat, D. Freitag, T. Ghosal, Hovy. E., Mayr. P., M. Shmueli-Scheuer, and A De Waard. 2020. Overview of the first work- shop on scholarly document processing (sdp). In Proceedings of the First Workshop on Scholarly Doc- ument Processing (SDP 2020).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Overview and results: CL-scisumm shared task", |
| "authors": [ |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Muthu Kumar Chandrasekaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Dayne", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1907.09854" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Muthu Kumar Chandrasekaran, Michihiro Yasunaga, Dragomir Radev, Dayne Freitag, and Min-Yen Kan. 2019. Overview and results: CL-scisumm shared task 2019. arXiv preprint arXiv:1907.09854.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Divide and conquer: From complexity to simplicity for lay summarization", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rochana Chaturvedi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Saachi", |
| "suffix": "" |
| }, |
| { |
| "first": "Anurag", |
| "middle": [], |
| "last": "Jaspreet Singh Dhani", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankush", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Neha", |
| "middle": [], |
| "last": "Khanna", |
| "suffix": "" |
| }, |
| { |
| "first": "Swagata", |
| "middle": [], |
| "last": "Tomar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alka", |
| "middle": [], |
| "last": "Duari", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasudha", |
| "middle": [], |
| "last": "Khurana", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bhatnagar", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the First Workshop on Scholarly Document Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "344--355", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rochana Chaturvedi, Saachi ., Jaspreet Singh Dhani, Anurag Joshi, Ankush Khanna, Neha Tomar, Swa- gata Duari, Alka Khurana, and Vasudha Bhatnagar. 2020. Divide and conquer: From complexity to sim- plicity for lay summarization. In Proceedings of the First Workshop on Scholarly Document Processing, pages 344-355, Online. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Annisa Maulida Ningtyas, Linda Andersson, Florina Piroi, and Andreas Rauber", |
| "authors": [ |
| { |
| "first": "Alaa", |
| "middle": [], |
| "last": "El-Ebshihy", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alaa El-Ebshihy, Annisa Maulida Ningtyas, Linda An- dersson, Florina Piroi, and Andreas Rauber. 2020.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Wien and Artificial Researcher@ Long-Summ 20", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Artu / Tu", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "SDP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ARTU / TU Wien and Artificial Researcher@ Long- Summ 20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A summarization system for scientific documents", |
| "authors": [ |
| { |
| "first": "Shai", |
| "middle": [], |
| "last": "Erera", |
| "suffix": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Shmueli-Scheuer", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Feigenblat", |
| "suffix": "" |
| }, |
| { |
| "first": "Ora", |
| "middle": [ |
| "Peled" |
| ], |
| "last": "Nakash", |
| "suffix": "" |
| }, |
| { |
| "first": "Odellia", |
| "middle": [], |
| "last": "Boni", |
| "suffix": "" |
| }, |
| { |
| "first": "Haggai", |
| "middle": [], |
| "last": "Roitman", |
| "suffix": "" |
| }, |
| { |
| "first": "Doron", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Bar", |
| "middle": [], |
| "last": "Weiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Yosi", |
| "middle": [], |
| "last": "Mass", |
| "suffix": "" |
| }, |
| { |
| "first": "Or", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lev", |
| "suffix": "" |
| }, |
| { |
| "first": "Achiya", |
| "middle": [], |
| "last": "Jerbi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Herzig", |
| "suffix": "" |
| }, |
| { |
| "first": "Yufang", |
| "middle": [], |
| "last": "Hou", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Jochim", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Gleize", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesca", |
| "middle": [], |
| "last": "Bonin", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesca", |
| "middle": [], |
| "last": "Bonin", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Konopnicki", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shai Erera, Michal Shmueli-Scheuer, Guy Feigenblat, Ora Peled Nakash, Odellia Boni, Haggai Roitman, Doron Cohen, Bar Weiner, Yosi Mass, Or Rivlin, Guy Lev, Achiya Jerbi, Jonathan Herzig, Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Francesca Bonin, and David Konopnicki. 2019. A summarization system for scientific docu- ments. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP): Sys- tem Demonstrations.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Ester", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans-Peter", |
| "middle": [], |
| "last": "Kriegel", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Sander", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaowei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD'96", |
| "volume": "", |
| "issue": "", |
| "pages": "226--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xi- aowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Min- ing, KDD'96, page 226-231. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Stefanos Dimitrios Stefanidis, and Grigorios Tsoumakas", |
| "authors": [ |
| { |
| "first": "Alexios", |
| "middle": [], |
| "last": "Gidiotis", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexios Gidiotis, Stefanos Dimitrios Stefanidis, and Grigorios Tsoumakas. 2020.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "CL-LaySumm20, LongSumm20", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Auth@cl-Scisumm20", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "SDP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "AUTH@CL- SciSumm20, CL-LaySumm20, LongSumm20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A divide-and-conquer approach to the summarization of academic articles", |
| "authors": [ |
| { |
| "first": "Alexios", |
| "middle": [], |
| "last": "Gidiotis", |
| "suffix": "" |
| }, |
| { |
| "first": "Grigorios", |
| "middle": [], |
| "last": "Tsoumakas", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2004.06190" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexios Gidiotis and Grigorios Tsoumakas. 2020. A divide-and-conquer approach to the summa- rization of academic articles. arXiv preprint arXiv:2004.06190.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Team MLU@CL-SciSumm20: Methods for Computational Linguistics Scientific Citation Linkage", |
| "authors": [ |
| { |
| "first": "Rong", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kseniia", |
| "middle": [], |
| "last": "Krylova", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rong Huang and Kseniia Krylova. 2020. Team MLU@CL-SciSumm20: Methods for Computa- tional Linguistics Scientific Citation Linkage. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The cl-scisumm shared task 2017: Results and key insights", |
| "authors": [ |
| { |
| "first": "Kokil", |
| "middle": [], |
| "last": "Jaidka", |
| "suffix": "" |
| }, |
| { |
| "first": "Muthu", |
| "middle": [], |
| "last": "Kumar Chandrasekaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Devanshu", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "BIRNDL@ SIGIR", |
| "volume": "2002", |
| "issue": "", |
| "pages": "1--15", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kokil Jaidka, Muthu Kumar Chandrasekaran, Devan- shu Jain, and Min-Yen Kan. 2017. The cl-scisumm shared task 2017: Results and key insights. In BIRNDL@ SIGIR (2), volume 2002, pages 1-15. CEUR.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Insights from cl-scisumm 2016: the faceted scientific document summarization shared task", |
| "authors": [ |
| { |
| "first": "Kokil", |
| "middle": [], |
| "last": "Jaidka", |
| "suffix": "" |
| }, |
| { |
| "first": "Muthu", |
| "middle": [], |
| "last": "Kumar Chandrasekaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Sajal", |
| "middle": [], |
| "last": "Rustagi", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Journal on Digital Libraries", |
| "volume": "19", |
| "issue": "2-3", |
| "pages": "163--171", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2018. Insights from cl-scisumm 2016: the faceted scientific document summarization shared task. International Journal on Digital Libraries, 19(2-3):163-171.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The cl-scisumm shared task 2018: Results and key insights", |
| "authors": [ |
| { |
| "first": "Kokil", |
| "middle": [], |
| "last": "Jaidka", |
| "suffix": "" |
| }, |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Muthu", |
| "middle": [], |
| "last": "Kumar Chandrasekaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1909.00764" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kokil Jaidka, Michihiro Yasunaga, Muthu Ku- mar Chandrasekaran, Dragomir Radev, and Min- Yen Kan. 2019. The cl-scisumm shared task 2018: Results and key insights. arXiv preprint arXiv:1909.00764.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Longxiang Gao, and Shirui Pan. 2020. Monash-Summ@LongSumm", |
| "authors": [ |
| { |
| "first": "Jiaxin", |
| "middle": [], |
| "last": "Ju", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiaxin Ju, Ming Liu, Longxiang Gao, and Shirui Pan. 2020. Monash-Summ@LongSumm 20", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline", |
| "authors": [], |
| "year": null, |
| "venue": "SDP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Using Pre-Trained Transformer for a better Lay Summarization", |
| "authors": [ |
| { |
| "first": "Seungwon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "SDP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seungwon Kim. 2020. Using Pre-Trained Transformer for a better Lay Summarization. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks", |
| "authors": [ |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lev", |
| "suffix": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Shmueli-Scheuer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Herzig", |
| "suffix": "" |
| }, |
| { |
| "first": "Achiya", |
| "middle": [], |
| "last": "Jerbi", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Konopnicki", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", |
| "volume": "1", |
| "issue": "", |
| "pages": "2125--2131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- summ: A dataset and scalable annotation method for scientific paper summarization based on confer- ence talks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 2125-2131.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdelrahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7871--7880", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "LongSumm 2020: Automatic Scientific Document Summarization", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yafei", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Siya", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Xingyuan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "20", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lei Li, Yang Xie, Wei Liu, Yinan Liu, Yafei Jiang, Siya Qi, and Xingyuan Li. 2020. CIST@CLSciSumm- 20, LongSumm 2020: Automatic Scientific Docu- ment Summarization. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Rouge: A package for automatic evaluation of summaries", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Text summarization branches out: Proceedings of the ACL-04 workshop", |
| "volume": "8", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Text summarization with pretrained encoders", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "3730--3740", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1387" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu and Mirella Lapata. 2019a. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Text summarization with pretrained encoders", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1908.08345" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu and Mirella Lapata. 2019b. Text summa- rization with pretrained encoders. arXiv preprint arXiv:1908.08345.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Least squares quantization in pcm", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "P" |
| ], |
| "last": "Lloyd", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "IEEE Trans. Inf. Theory", |
| "volume": "28", |
| "issue": "", |
| "pages": "129--136", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. P. Lloyd. 1982. Least squares quantization in pcm. IEEE Trans. Inf. Theory, 28:129-136.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Sriparna Saha, and Pushpak Bhattacharyya. 2020. IITP-AI-NLP-ML@CLSciSumm-20, CL-LaySumm 2020", |
| "authors": [ |
| { |
| "first": "Santosh Kumar", |
| "middle": [], |
| "last": "Mishra", |
| "suffix": "" |
| }, |
| { |
| "first": "Kundarapu", |
| "middle": [], |
| "last": "Harshavardhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Naveen", |
| "middle": [], |
| "last": "Saini", |
| "suffix": "" |
| }, |
| { |
| "first": "Sriparna", |
| "middle": [], |
| "last": "Saha", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Santosh Kumar Mishra, Kundarapu Harshavardhan, Naveen Saini, Sriparna Saha, and Pushpak Bhat- tacharyya. 2020. IITP-AI-NLP-ML@CLSciSumm- 20, CL-LaySumm 2020, LongSumm 2020. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", |
| "authors": [ |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| }, |
| { |
| "first": "Feifei", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17", |
| "volume": "", |
| "issue": "", |
| "pages": "3075--3081", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Proceedings of the Thirty-First AAAI Con- ference on Artificial Intelligence, AAAI'17, page 3075-3081. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Exploring the limits of transfer learning with a unified text-to", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Sriparna Saha, and Pushpak Bhattacharyya. 2020. IIITBH-IITP@CL-SciSumm20, CL-LaySumm20, LongSumm20", |
| "authors": [ |
| { |
| "first": "Saichethan Miriyala", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Naveen", |
| "middle": [], |
| "last": "Saini", |
| "suffix": "" |
| }, |
| { |
| "first": "Sriparna", |
| "middle": [], |
| "last": "Saha", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saichethan Miriyala Reddy, Naveen Sainiand Naveen Saini, Sriparna Saha, and Pushpak Bhattacharyya. 2020. IIITBH-IITP@CL-SciSumm20, CL- LaySumm20, LongSumm20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Natural categories", |
| "authors": [ |
| { |
| "first": "Eleanor", |
| "middle": [ |
| "H" |
| ], |
| "last": "Rosch", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Cognitive Psychology", |
| "volume": "4", |
| "issue": "3", |
| "pages": "328--350", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/0010-0285(73)90017-0" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eleanor H. Rosch. 1973. Natural categories. Cognitive Psychology, 4(3):328 -350.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Information Retrieval and Extraction Lab, IIIT-H @ Lay-Summ 20, LongSumm 20", |
| "authors": [ |
| { |
| "first": "Sayar Ghosh", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikhil", |
| "middle": [], |
| "last": "Pinnaparaju", |
| "suffix": "" |
| }, |
| { |
| "first": "Risubh", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Manish", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasudeva", |
| "middle": [], |
| "last": "Varma", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "SDP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sayar Ghosh Roy, Nikhil Pinnaparaju, Risubh Jain, Manish Gupta, and Vasudeva Varma. 2020. Infor- mation Retrieval and Extraction Lab, IIIT-H @ Lay- Summ 20, LongSumm 20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Get to the point: Summarization with pointergenerator networks", |
| "authors": [ |
| { |
| "first": "Abigail", |
| "middle": [], |
| "last": "See", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1073--1083", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1099" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Bigpatent: A large-scale dataset for abstractive and coherent summarization", |
| "authors": [ |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "" |
| }, |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Lu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1906.03741" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "GUIR @ LongSumm 2020: Learning to Generate Long Summaries from Scientific Documents", |
| "authors": [ |
| { |
| "first": "Sajad", |
| "middle": [], |
| "last": "Sotudeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Arman", |
| "middle": [], |
| "last": "Cohan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nazli", |
| "middle": [], |
| "last": "Goharian", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sajad Sotudeh, Arman Cohan, and Nazli Goharian. 2020. GUIR @ LongSumm 2020: Learning to Gen- erate Long Summaries from Scientific Documents. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "CiteQA@CL-SciSumm20", |
| "authors": [ |
| { |
| "first": "Anjana", |
| "middle": [], |
| "last": "Umapathy", |
| "suffix": "" |
| }, |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Radhakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kinjal", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Rahul", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anjana Umapathy, Karthik Radhakrishnan, Kinjal Jain, and Rahul Singh. 2020. CiteQA@CL-SciSumm20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Evalai: Towards better evaluation systems for ai agents", |
| "authors": [ |
| { |
| "first": "Deshraj", |
| "middle": [], |
| "last": "Yadav", |
| "suffix": "" |
| }, |
| { |
| "first": "Rishabh", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Harsh", |
| "middle": [], |
| "last": "Agrawal", |
| "suffix": "" |
| }, |
| { |
| "first": "Prithvijit", |
| "middle": [], |
| "last": "Chattopadhyay", |
| "suffix": "" |
| }, |
| { |
| "first": "Taranjeet", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Akash", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiv Baran", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi- jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. Evalai: Towards better evaluation systems for ai agents.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks", |
| "authors": [ |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Jungo", |
| "middle": [], |
| "last": "Kasai", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "R" |
| ], |
| "last": "Fabbri", |
| "suffix": "" |
| }, |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [ |
| "R" |
| ], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
| "volume": "33", |
| "issue": "", |
| "pages": "7386--7393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexan- der R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large an- notated corpus and content-impact models for scien- tific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7386-7393.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "IR&TM-NJUST @ CLSciSumm-20", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lifan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruping", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohu", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shutain", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengzhi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "2020", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Zhang, Lifan Liu, Ruping Wang, Shaohu Hu, Shutain Ma, and Chengzhi Zhang. 2020. IR&TM- NJUST @ CLSciSumm-20. In SDP 2020.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", |
| "authors": [ |
| { |
| "first": "Jingqing", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yao", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Saleh", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter J", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1912.08777" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Character-level convolutional networks for text classification", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Proceedings of the 28th International Conference on Neural Information Processing Sys- tems -Volume 1, NIPS'15.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Summpip: Unsupervised multidocument summarization with sentence graph compression", |
| "authors": [ |
| { |
| "first": "Jinming", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Longxiang", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "Lan", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "He", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "He", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Gholamreza", |
| "middle": [], |
| "last": "Haffari", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event", |
| "volume": "", |
| "issue": "", |
| "pages": "1949--1952", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinming Zhao, Ming Liu, Longxiang Gao, Yuan Jin, Lan Du, He Zhao, He Zhang, and Gholamreza Haffari. 2020. Summpip: Unsupervised multi- document summarization with sentence graph com- pression. In Proceedings of the 43rd International ACM SIGIR conference on research and develop- ment in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1949-1952.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "CL-SciSumm systems' performance in Task 1A and 1B, ordered by their F 1 -scores for sentence overlap on Task 1A, Task 1B separately. Each system's rank by their performance on ROUGE on Task 1A is shown in parentheses." |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "content": "<table><tr><td>System</td><td colspan=\"6\">Rouge1-F1 Rouge1-Recall Rouge2-F1 Rouge2-Recall RougeL-F1 RougeL-Recall</td></tr><tr><td>HYTZ</td><td>0.4600</td><td>0.5013</td><td>0.2070</td><td>0.2223</td><td>0.2876</td><td>0.3104</td></tr><tr><td>seungwonkim</td><td>0.4596</td><td>0.4810</td><td>0.2146</td><td>0.2237</td><td>0.2977</td><td>0.3105</td></tr><tr><td>Summaformers</td><td>0.4594</td><td>0.4911</td><td>0.1902</td><td>0.2026</td><td>0.2744</td><td>0.2923</td></tr><tr><td>AUTH</td><td>0.4456</td><td>0.4298</td><td>0.1936</td><td>0.1860</td><td>0.2772</td><td>0.2673</td></tr><tr><td>DUCS</td><td>0.4253</td><td>0.5159</td><td>0.1748</td><td>0.2102</td><td>0.2526</td><td>0.3055</td></tr><tr><td>IIITBH-IITP</td><td>0.4048</td><td>0.5414</td><td>0.1690</td><td>0.2253</td><td>0.2244</td><td>0.3019</td></tr><tr><td colspan=\"2\">Harita ramesh babu 0.3524</td><td>0.3865</td><td>0.1110</td><td>0.1232</td><td>0.1995</td><td>0.2188</td></tr><tr><td>IITP-AI-NLP-ML</td><td>0.3132</td><td>0.3705</td><td>0.0631</td><td>0.0746</td><td>0.1662</td><td>0.1973</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "ROUGE Recall and F-Measure evaluation on LaySumm test set" |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "content": "<table><tr><td>System</td><td/><td colspan=\"2\">F-Measure</td><td/><td>Recall</td><td/><td>F-Measure average Methodology</td></tr><tr><td/><td>R-1</td><td>R-2</td><td>R-L</td><td>R-1</td><td>R-2</td><td>R-L</td><td>Supervised/ Unsupervised</td></tr><tr><td>GUIR</td><td colspan=\"6\">53.11 16.77 20.34 54.60 17.28 20.90</td><td>30.07</td><td>S</td></tr><tr><td>Wing</td><td colspan=\"6\">50.58 16.62 20.50 51.16 16.75 20.66</td><td>29.23</td><td>-</td></tr><tr><td>Summaformers</td><td colspan=\"6\">49.38 16.86 21.38 43.90 14.98 18.98</td><td>29.21</td><td>S</td></tr><tr><td>IIITBH-IITP</td><td colspan=\"6\">49.03 15.74 20.46 49.84 16.00 20.80</td><td>28.41</td><td>S</td></tr><tr><td>AUTH</td><td colspan=\"6\">50.11 15.37 19.59 46.93 14.23 18.18</td><td>28.36</td><td>S</td></tr><tr><td>CIST BUPT</td><td colspan=\"6\">48.99 15.06 20.13 49.74 15.22 20.39</td><td>28.06</td><td>S</td></tr><tr><td>ARTU</td><td colspan=\"6\">48.03 14.76 18.04 46.78 14.28 17.43</td><td>26.94</td><td>U</td></tr><tr><td colspan=\"7\">IITP-AI-NLP-ML 46.46 14.61 19.58 47.43 14.86 19.95</td><td>26.88</td><td>U</td></tr><tr><td>Monash-Summ</td><td colspan=\"6\">49.16 12.80 18.31 49.35 12.76 18.33</td><td>26.76</td><td>S</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "ROUGE F-Measure and Recall evaluation on the official LongSumm test set. In addition, for each reported result, the Methodology columns indicate whether a reported result employs a Supervised or Unsupervised summarization technique." |
| } |
| } |
| } |
| } |