{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:23:54.723031Z"
},
"title": "End-to-end Training For Financial Report Summarization",
"authors": [
{
"first": "Moreno",
"middle": [],
"last": "La Quatra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Torino",
"location": {}
},
"email": ""
},
{
"first": "Luca",
"middle": [],
"last": "Cagliero",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Torino",
"location": {}
},
"email": "luca.cagliero@polito.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Quoted companies are required to periodically publish financial reports in textual form. The annual financial reports typically include detailed financial and business information, thus giving relevant insights into company outlooks. However, a manual exploration of these financial reports can be very time-consuming, since most of the available information is deemed non-informative or redundant by expert readers. Hence, increasing research interest has been devoted to automatically extracting domain-specific summaries, which include only the most relevant information. This paper describes the SumTO system architecture, which addresses the Shared Task of the Financial Narrative Summarisation (FNS) 2020 contest. The main task objective is to automatically extract the most informative, domain-specific textual content from English-language financial documents. The aim is to create a summary of each company report covering all the business-relevant key points. To address this goal, we propose an end-to-end training method relying on Deep NLP techniques. The idea behind the system is to exploit the syntactic overlap between input sentences and ground-truth summaries to fine-tune pre-trained BERT embedding models, thus making such models tailored to the specific context. The achieved results confirm the effectiveness of the proposed method, especially when the goal is to select relatively long text snippets.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Quoted companies are required to periodically publish financial reports in textual form. The annual financial reports typically include detailed financial and business information, thus giving relevant insights into company outlooks. However, a manual exploration of these financial reports can be very time-consuming, since most of the available information is deemed non-informative or redundant by expert readers. Hence, increasing research interest has been devoted to automatically extracting domain-specific summaries, which include only the most relevant information. This paper describes the SumTO system architecture, which addresses the Shared Task of the Financial Narrative Summarisation (FNS) 2020 contest. The main task objective is to automatically extract the most informative, domain-specific textual content from English-language financial documents. The aim is to create a summary of each company report covering all the business-relevant key points. To address this goal, we propose an end-to-end training method relying on Deep NLP techniques. The idea behind the system is to exploit the syntactic overlap between input sentences and ground-truth summaries to fine-tune pre-trained BERT embedding models, thus making such models tailored to the specific context. The achieved results confirm the effectiveness of the proposed method, especially when the goal is to select relatively long text snippets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Analyzing annual financial reports is the most established way to assess the financial health of companies. For example, rating agencies, banks, and hedge funds rely on the information extracted from domain-specific reports to assign ratings, grant loans, and drive investment strategies (Piotroski, 2000). Unfortunately, the content of the released financial reports is highly redundant, as it typically includes contextual and technical information that is only marginally relevant to domain experts. The Shared Task of the Financial Narrative Summarisation (FNS) research challenge (El-Haj et al., 2020) aims to address this issue by fostering innovative research on the problem of automatically extracting domain-specific summaries from annual financial reports.",
"cite_spans": [
{
"start": 297,
"end": 314,
"text": "(Piotroski, 2000)",
"ref_id": "BIBREF23"
},
{
"start": 589,
"end": 610,
"text": "(El-Haj et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The algorithms designed for automatic text summarization can be partitioned into two main classes: extractive approaches and abstractive ones. While extractive approaches pick existing text snippets (e.g., sentences, phrases, keywords) directly from the source text, abstractive methods generate new content based on the analysis of the input documents. The summarization process can be either supervised, when a portion of the document content has already been annotated by human experts as relevant or not, or unsupervised, when no a priori knowledge is given. The FNS shared task promotes the study, development, and testing of automated sentence-based summarization techniques tailored to the financial domain. To support the extraction of relevant sentences from annual financial reports, it provides researchers with a large set of human-annotated data (El-Haj, 2019). Therefore, the present work describes a supervised, extractive, sentence-based approach to address the FNS Shared Task.",
"cite_spans": [
{
"start": 842,
"end": 856,
"text": "(El-Haj, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extractive summarization methods have found application in several domains, such as the summarization of news articles (e.g., (See et al., 2017; Cagliero et al., 2019; Krishnan et al., 2019)), scientific papers (e.g., (Cohan and Goharian, 2018; Collins et al., 2017)), and product reviews (e.g., (Ganesan et al., 2010)). Wide-ranging overviews of the state-of-the-art works on text summarization can be found in (Widyassari et al., 2020; El-Kassas et al., 2020). Using Machine Learning techniques to summarize documents entails (i) extracting relevant text features at the sentence level and (ii) feeding the extracted features to a supervised model to produce a sentence rank (El-Kassas et al., 2020). To address the former step, latent text representations based on Deep Learning models have proven to be very effective in generating relevant text features (Chen and Nguyen, 2019; Kobayashi et al., 2015). However, pre-trained deep NLP models need to be tailored to the specific context under analysis, e.g., medical data (Lee et al., 2020; Huang et al., 2019) or patent-related areas (Lee and Hsiang, 2019). Previous works that use deep language models in the financial domain focused on the sentiment analysis task (Yang et al., 2020). To the best of our knowledge, this is the first attempt to fine-tune pre-trained deep NLP models in order to enhance the quality of financial report summarization.",
"cite_spans": [
{
"start": 128,
"end": 146,
"text": "(See et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 147,
"end": 169,
"text": "Cagliero et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 192,
"text": "Krishnan et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 221,
"end": 246,
"text": "Cohan and Goharian, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 247,
"end": 268,
"text": "Collins et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 298,
"end": 320,
"text": "(Ganesan et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 415,
"end": 440,
"text": "(Widyassari et al., 2020;",
"ref_id": null
},
{
"start": 441,
"end": 464,
"text": "El-Kassas et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 681,
"end": 705,
"text": "(El-Kassas et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 863,
"end": 886,
"text": "(Chen and Nguyen, 2019;",
"ref_id": "BIBREF3"
},
{
"start": 887,
"end": 910,
"text": "Kobayashi et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1027,
"end": 1045,
"text": "(Lee et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 1046,
"end": 1065,
"text": "Huang et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1090,
"end": 1112,
"text": "(Lee and Hsiang, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1224,
"end": 1243,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 overviews the architecture of the proposed summarizer. Sections 3, 4, and 5 separately describe each phase of the summarization process. Section 6 summarizes the outcomes of the evaluation step. Finally, Section 7 draws conclusions and envisions future research steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Summarizer based on end-TO-end training (SumTO) consists of a three-phase process, which is depicted in Figure 1. It comprises (i) a preprocessing phase, which transforms the raw textual documents and annotates their content at the sentence level; (ii) a training step, which extracts relevant concepts and relationships according to two established deep language models, i.e., BERT (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019); and (iii) an evaluation step, which rates the sentences of each test document according to the fine-tuned models trained at the previous step and produces a per-document summary consisting of the top-ranked sentences.",
"cite_spans": [
{
"start": 385,
"end": 406,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 422,
"end": 441,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The SumTO System",
"sec_num": "2"
},
{
"text": "The text file containing the annual report of the company, divided into sections...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annual Report",
"sec_num": null
},
{
"text": "The fine-tuned model is able to provide better contextual representations for domain-specific vocabulary. The end-to-end process aims at training the model to identify relevant topics in the financial domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing",
"sec_num": null
},
{
"text": "The data collection provided by the organizers of the FNS 2020 Shared Task includes (i) the training set, consisting of 3,000 annual reports and 9,873 golden summaries (3.29 summaries per report, on average), (ii) the evaluation set, consisting of 363 annual reports and 1,250 golden summaries (3.44 summaries per report, on average), and (iii) the test set, consisting of 500 annual reports and 1,673 blind golden summaries (3.34 summaries per report, on average). Table 1 (System configuration settings) reports the pre-trained model and parameter settings of each submitted run: 1pe uses distilbert-base-cased with 1 epoch, 2pe uses distilbert-base-cased with 2 epochs, and 3pe uses bert-base-cased with 1 epoch, all with batch size 32 and learning rate 2e-5. The size of the training data enables the use of deep Natural Language Processing models (Kobayashi et al., 2015). The textual content of the reports in the training, evaluation, and test sets is transformed by applying the following data preparation steps.",
"cite_spans": [
{
"start": 875,
"end": 899,
"text": "(Kobayashi et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection and preprocessing",
"sec_num": "3"
},
{
"text": "1. Text cleaning: the source text, parsed from PDF documents, usually contains small parsing errors (e.g., a single word that spans multiple lines is split into two different tokens). By employing ad hoc regular expressions, the original content of each report is re-assembled into a single textual document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and preprocessing",
"sec_num": "3"
},
{
"text": "2. Sentence splitting: the text stream is split into sentences by using the PunktSentenceTokenizer provided by the Natural Language ToolKit (Loper and Bird, 2002) library.",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and preprocessing",
"sec_num": "3"
},
{
"text": "3. Data annotation: the sentences of the reports in the training and evaluation sets are annotated with the corresponding relevance score. The score indicates the similarity of each sentence with the content of the human-annotated summaries. It is computed by maximizing the syntactic overlap (i.e., the Rouge-2 precision value (Lin, 2004)) with respect to all the given summaries 1 .",
"cite_spans": [
{
"start": 323,
"end": 334,
"text": "(Lin, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and preprocessing",
"sec_num": "3"
},
{
"text": "A regression model is trained on the sentences of the training documents to predict the previously assigned sentence label (i.e., the Rouge-2 precision score). The idea is to optimize the sentence relevance score according to the provided human annotations by fine-tuning the pre-trained BERT model (Devlin et al., 2019). The overall architecture is trained using the mean squared error loss and the AdamW optimizer (Loshchilov and Hutter, 2017) for faster convergence. Table 1 reports the settings for each system run. We generated three different fine-tuned models, hereafter denoted as 1pe, 2pe, and 3pe. The best performing model (i.e., 3pe) and the code to apply the summarization algorithm are available on GitHub 2 .",
"cite_spans": [
{
"start": 312,
"end": 333,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 477,
"end": 484,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training phase of the Deep language model",
"sec_num": "4"
},
{
"text": "For each test document, the summarizer evaluates and ranks the corresponding sentences according to the fine-tuned model. Specifically, the sentences of the input report are forward-passed through the trained model and sorted by decreasing predicted Rouge-2 precision score. The ranked list is post-processed by removing (i) duplicate sentences, (ii) sentences containing more than 50% uppercase characters, (iii) sentences containing more than 50% non-alphabetic characters, and (iv) sentences shorter than 5 words. The text snippets are selected from the post-processed pool according to their assigned score until the summary length requirement (up to 1,000 words) is met. The output summary is generated by concatenating the selected sentences in order of decreasing relevance score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation phase",
"sec_num": "5"
},
{
"text": "The output summaries submitted to the FNS 2020 Shared Task contest were evaluated by the shared task organizers. To evaluate the system outputs provided by the participants, they exploited the JRouge package 3 , a lightweight, multilingual tool implementing the Rouge metrics (Lin, 2004). Summaries were evaluated using the Rouge-1, Rouge-2, Rouge-SU4, and Rouge-L metrics. Beyond the systems proposed by the contest participants, the following baseline methods were considered: (i) TextRank (Mihalcea and Tarau, 2004), (ii) LexRank (Erkan and Radev, 2004), (iii) POLY (Litvak and Vanetik, 2013), and (iv) a topline algorithm, i.e., MUSE (Litvak et al., 2010). Table 2 summarizes the F1-score results achieved by our submitted runs. The scores of the best performing model (3pe) are reported in bold.",
"cite_spans": [
{
"start": 285,
"end": 296,
"text": "(Lin, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 507,
"end": 533,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF22"
},
{
"start": 549,
"end": 572,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 586,
"end": 612,
"text": "(Litvak and Vanetik, 2013)",
"ref_id": "BIBREF18"
},
{
"start": 656,
"end": 677,
"text": "(Litvak et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 680,
"end": 687,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The SumTO system achieved fairly good results in terms of Rouge-L (i.e., the longest common subsequence with the ground truth), because our system tends to prefer relatively longer sentences. For the same reason, Rouge-1 performance is on average worse than that achieved for Rouge-2 and Rouge-SU4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The models were trained on a machine equipped with an Intel Xeon Gold 5115 CPU, an NVIDIA Tesla V100 16GB GPU, and 512GB of RAM. Using this configuration, the fine-tuning of the BERT model (on the full training set) took on average 36 hours per epoch, whereas for DistilBERT each epoch took less than 20 hours. During the evaluation phase, the summarization of a single annual report took around 30 seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational requirements and execution times",
"sec_num": "6.1"
},
{
"text": "This paper described an extractive approach to the summarization of textual financial reports. The proposed approach relies on the fine-tuning of a BERT deep language model. The goal is to deeply tailor the Deep NLP model to the specific context under analysis. The system runs were submitted to the FNS 2020 Shared Task, achieving fairly high performance in terms of the Rouge-L score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future research steps",
"sec_num": "7"
},
{
"text": "Our future research agenda will cover the following aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future research steps",
"sec_num": "7"
},
{
"text": "Pruning of redundant information: The current summarization architecture is not able to prune content that is redundant with respect to the previously selected sentences during the sentence evaluation phase. We plan to extend the system by embedding an ad hoc redundancy penalty score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future research steps",
"sec_num": "7"
},
{
"text": "Deeper model contextualization: The results have confirmed the effectiveness of the BERT architecture in supporting text summarization. We aim to explore the applicability of larger and deeper neural language models in order to better capture the semantic meaning of the analyzed sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future research steps",
"sec_num": "7"
},
{
"text": "1 Each report may be annotated with multiple summaries provided by different experts. 2 https://github.com/MorenoLaQuatra/SumTO_financial_summarization 3 https://bitbucket.org/nocgod/jrouge/wiki/Home",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research leading to these results is supported by the SmartData@PoliTO center for Big Data technologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Extracting highlights of scientific articles: A supervised summarization approach",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Cagliero",
"suffix": ""
},
{
"first": "Moreno",
"middle": [],
"last": "La Quatra",
"suffix": ""
}
],
"year": 2020,
"venue": "Expert Systems with Applications",
"volume": "160",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Cagliero and Moreno La Quatra. 2020. Extracting highlights of scientific articles: A supervised summariza- tion approach. Expert Systems with Applications, 160:113659.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Elsa: A multilingual document summarization algorithm based on frequent itemsets and latent semantic analysis",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Cagliero",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Garza",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Baralis",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Trans. Inf. Syst",
"volume": "37",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Cagliero, Paolo Garza, and Elena Baralis. 2019. Elsa: A multilingual document summarization algorithm based on frequent itemsets and latent semantic analysis. ACM Trans. Inf. Syst., 37(2), January.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combining machine learning and natural language processing for language-specific, multi-lingual, and cross-lingual text summarization: A wide-ranging overview",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Cagliero",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Garza",
"suffix": ""
},
{
"first": "Moreno",
"middle": [],
"last": "La Quatra",
"suffix": ""
}
],
"year": 2020,
"venue": "Trends and Applications of Text Summarization Techniques",
"volume": "",
"issue": "",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Cagliero, Paolo Garza, and Moreno La Quatra. 2020. Combining machine learning and natural language processing for language-specific, multi-lingual, and cross-lingual text summarization: A wide-ranging overview. In Trends and Applications of Text Summarization Techniques, pages 1-31. IGI Global.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sentence selective neural extractive summarization with reinforcement learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 11th International Conference on Knowledge and Systems Engineering (KSE)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Chen and M. L. Nguyen. 2019. Sentence selective neural extractive summarization with reinforcement learning. In 2019 11th International Conference on Knowledge and Systems Engineering (KSE), pages 1-5.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scientific document summarization via citation contextualization and scientific discourse",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal on Digital Libraries",
"volume": "19",
"issue": "2-3",
"pages": "287--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan and Nazli Goharian. 2018. Scientific document summarization via citation contextualization and scientific discourse. International Journal on Digital Libraries, 19(2-3):287-303.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A supervised approach to extractive summarisation of scientific papers",
"authors": [
{
"first": "Ed",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "195--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ed Collins, Isabelle Augenstein, and Sebastian Riedel. 2017. A supervised approach to extractive summarisation of scientific papers. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 195-205, Vancouver, Canada, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Financial Narrative Summarisation Shared Task (FNS 2020)",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "El-Haj",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "AbuRa'ed",
"suffix": ""
},
{
"first": "Nikiforos",
"middle": [],
"last": "Pittaras",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Giannakopoulos",
"suffix": ""
}
],
"year": 2020,
"venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud El-Haj, Ahmed AbuRa'ed, Nikiforos Pittaras, and George Giannakopoulos. 2020. The Financial Nar- rative Summarisation Shared Task (FNS 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020, Barcelona, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multiling 2019: Financial narrative summarisation",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "El-Haj",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources",
"volume": "",
"issue": "",
"pages": "6--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud El-Haj. 2019. Multiling 2019: Financial narrative summarisation. In Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources, pages 6-10.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic text summarization: A comprehensive survey",
"authors": [
{
"first": "Wafaa",
"middle": [
"S"
],
"last": "El-Kassas",
"suffix": ""
},
{
"first": "Cherif",
"middle": [
"R"
],
"last": "Salama",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"A"
],
"last": "Rafea",
"suffix": ""
},
{
"first": "Hoda",
"middle": [
"K"
],
"last": "Mohamed",
"suffix": ""
}
],
"year": 2020,
"venue": "Expert Systems with Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wafaa S El-Kassas, Cherif R Salama, Ahmed A Rafea, and Hoda K Mohamed. 2020. Automatic text summariza- tion: A comprehensive survey. Expert Systems with Applications, page 113679.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of artificial intelligence research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summa- rization. Journal of artificial intelligence research, 22:457-479.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions",
"authors": [
{
"first": "Kavita",
"middle": [],
"last": "Ganesan",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "340--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive sum- marization of highly redundant opinions. In Proceedings of the 23rd international conference on computational linguistics, pages 340-348. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jaan",
"middle": [],
"last": "Altosaar",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.05342"
]
},
"num": null,
"urls": [],
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Summarization based on embedding distributions",
"authors": [
{
"first": "Hayato",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Masaki",
"middle": [],
"last": "Noguchi",
"suffix": ""
},
{
"first": "Taichi",
"middle": [],
"last": "Yatsuka",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hayato Kobayashi, Masaki Noguchi, and Taichi Yatsuka. 2015. Summarization based on embedding distributions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1984- 1989, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A supervised approach for extractive text summarization using minimal robust features",
"authors": [
{
"first": "D",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bharathy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Anagha",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Venugopalan",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Intelligent Computing and Control Systems (ICCS)",
"volume": "",
"issue": "",
"pages": "521--527",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Krishnan, P. Bharathy, Anagha, and M. Venugopalan. 2019. A supervised approach for extractive text summa- rization using minimal robust features. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS), pages 521-527.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Patentbert: Patent classification with fine-tuning a pre-trained bert model",
"authors": [
{
"first": "Jieh-Sheng",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jieh",
"middle": [],
"last": "Hsiang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02124"
]
},
"num": null,
"urls": [],
"raw_text": "Jieh-Sheng Lee and Jieh Hsiang. 2019. Patentbert: Patent classification with fine-tuning a pre-trained bert model. arXiv preprint arXiv:1906.02124.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mining the gaps: Towards polynomial summarization",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Vanetik",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "655--660",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak and Natalia Vanetik. 2013. Mining the gaps: Towards polynomial summarization. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 655-660.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A new approach to improving multilingual summarization using a genetic algorithm",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Last",
"suffix": ""
},
{
"first": "Menahem",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "927--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summarization using a genetic algorithm. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 927-936.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Value investing: The use of historical financial statement information to separate winners from losers",
"authors": [
{
"first": "Joseph",
"middle": [
"D"
],
"last": "Piotroski",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Accounting Research",
"volume": "38",
"issue": "",
"pages": "1--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph D. Piotroski. 2000. Value investing: The use of historical financial statement information to separate winners from losers. Journal of Accounting Research, 38:1-41.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. CoRR, abs/1704.04368.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Review of automatic text summarization techniques & methods",
"authors": [
{
"first": "Adhika Pramita",
"middle": [],
"last": "Widyassari",
"suffix": ""
},
{
"first": "Supriadi",
"middle": [],
"last": "Rustad",
"suffix": ""
},
{
"first": "Guruh",
"middle": [
"Fajar"
],
"last": "Shidik",
"suffix": ""
},
{
"first": "Edi",
"middle": [],
"last": "Noersasongko",
"suffix": ""
},
{
"first": "Abdul",
"middle": [],
"last": "Syukur",
"suffix": ""
},
{
"first": "Affandy",
"middle": [],
"last": "Affandy",
"suffix": ""
},
{
"first": "De Rosal Ignatius Moses",
"middle": [],
"last": "Setiadi",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of King Saud University - Computer and Information Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adhika Pramita Widyassari, Supriadi Rustad, Guruh Fajar Shidik, Edi Noersasongko, Abdul Syukur, Affandy Affandy, and De Rosal Ignatius Moses Setiadi. 2020. Review of automatic text summarization techniques & methods. Journal of King Saud University - Computer and Information Sciences.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Finbert: A pretrained language model for financial communications",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"Christopher"
],
"last": "Siy UY",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Mark Christopher Siy UY, and Allen Huang. 2020. Finbert: A pretrained language model for financial communications.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Outline of the proposed method",
"type_str": "figure"
}
}
}
}