{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:17.262511Z"
},
"title": "Assessing the Quality of Human-Generated Summaries with Weakly Supervised Learning",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Olsen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTNU -Norwegian University of Science and Technology Trondheim",
"location": {
"country": "Norway"
}
},
"email": ""
},
{
"first": "Arild",
"middle": [
"Brandrud"
],
"last": "Naess",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTNU -Norwegian University of Science and Technology Trondheim",
"location": {
"country": "Norway"
}
},
"email": "arild.naess@ntnu.no"
},
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": "",
"affiliation": {},
"email": "plison@nr.no"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores how to automatically measure the quality of human-generated summaries, based on a Norwegian corpus of real estate condition reports and their corresponding summaries. The proposed approach proceeds in two steps. First, the real estate reports and their associated summaries are automatically labelled using a set of heuristic rules gathered from human experts and aggregated using weak supervision. The aggregated labels are then employed to learn a neural model that takes a document and its summary as inputs and outputs a score reflecting the predicted quality of the summary. The neural model maps the document and its summary to a shared \"summary content space\" and computes the cosine similarity between the two document embeddings to predict the final summary quality score. The best performance is achieved by a CNN-based model with an accuracy (measured against the aggregated labels obtained via weak supervision) of 89.5%, compared to 72.6% for the best unsupervised model. Manual inspection of examples indicate that the weak supervision labels do capture important indicators of summary quality, but the correlation of those labels with human judgements remains to be validated. Our models of summary quality predict that approximately 30% of the real estate reports in the corpus have a summary of poor quality.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores how to automatically measure the quality of human-generated summaries, based on a Norwegian corpus of real estate condition reports and their corresponding summaries. The proposed approach proceeds in two steps. First, the real estate reports and their associated summaries are automatically labelled using a set of heuristic rules gathered from human experts and aggregated using weak supervision. The aggregated labels are then employed to learn a neural model that takes a document and its summary as inputs and outputs a score reflecting the predicted quality of the summary. The neural model maps the document and its summary to a shared \"summary content space\" and computes the cosine similarity between the two document embeddings to predict the final summary quality score. The best performance is achieved by a CNN-based model with an accuracy (measured against the aggregated labels obtained via weak supervision) of 89.5%, compared to 72.6% for the best unsupervised model. Manual inspection of examples indicate that the weak supervision labels do capture important indicators of summary quality, but the correlation of those labels with human judgements remains to be validated. Our models of summary quality predict that approximately 30% of the real estate reports in the corpus have a summary of poor quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many types of reports incorporate humangenerated summaries that seek to highlight the most important pieces of information described in the full document. This is notably the case for real estate condition reports, which are long, technical reports presenting the current condition (as it is known to the seller) of a property for sale, including the general state of each room, known damages and defects, and key technical aspects such as the heating, plumbing, electricity and roof. Despite the rich amount of information contained in these real estate reports, several surveys have shown that many buyers of real estate do not read the full documents but rather concentrate on the summaries (Sandberg, 2017) . However, professionals regard the quality of these summaries as varying greatly, from good to very poor. Actors in the real estate market have suggested that this information deficit may play an important role in the reported 10% of Norwegian real estate transactions ending in conflict (Huseiernes Landsforbund, 2017) .",
"cite_spans": [
{
"start": 694,
"end": 710,
"text": "(Sandberg, 2017)",
"ref_id": "BIBREF40"
},
{
"start": 1000,
"end": 1031,
"text": "(Huseiernes Landsforbund, 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we explore ways of automatically measuring the quality of such summaries, using a corpus of 96 534 real estate condition reports and their corresponding summaries. Although there exists a substantial body of work on summary evaluation (Lloret et al., 2018) , previous work has largely focused on automatically generated summaries, often by comparing those generated summaries to reference summaries written by humans. The automated evaluation of human-generated summaries, however, has received little attention so far. This paper presents an approach to automatically evaluate the quality of human-generated summaries when no manually labelled data is available. Instead, we rely on a set of heuristic rules provided by domain experts to automatically annotate a dataset of summaries (each coupled to their full-length document) with quality indicators. Those annotations are subsequently aggregated into a single, unified annotation layer using weak supervision (Ratner et al., 2017 , based on a generative model that takes into account the varying coverage and accuracy of the heuristic rules.",
"cite_spans": [
{
"start": 248,
"end": 269,
"text": "(Lloret et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 977,
"end": 997,
"text": "(Ratner et al., 2017",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although one could in theory directly use the labels obtained through weak supervision as quality indicators for the summaries, such an approach has a number of limitations. Most importantly, heuristic rules are only triggered under certain conditions, and may therefore \"abstain\" from providing a quality score on some summaries. For instance, we may have a rule stating that, if the full report describes a major defect or damage in the bathroom, then a summary that fails to mention this defect should be labelled as being of poor quality. This rule will only label summaries that meet this specific condition, and abstain from generating a prediction in all other cases. Some heuristic rules may also depend on the availability of external data sources that are not available at prediction time. For instance, one can exploit the fact that an insurance claim has been raised on the real estate as an indicator that the summary may have omitted to mention some important defects or damages. Needless to say, this heuristic can only be applied on historical data, and not on new summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address those shortcomings, we use the aggregated labels obtained via weak supervision as a stepping stone to train a neural model whose task is to assess the quality of a summary in respect to its full-length document. The neural model embeds both the document and its summary into a dedicated semantic space (referred to as the summary content space) and computes the final quality score using cosine similarity. As real estate condition reports are often long documents (10 pages or more), we conduct experiments with models based not only on embeddings of entire documents, but also on embeddings of sections, sentences and words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper makes three contributions: 1. A framework to automatically (a) associate summaries with quality indicators based on expert-written rules, and (b) aggregate those indicators using weak supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. A neural model that predicts the summary quality by embedding both the document and its corresponding summary into a common summary content space, and then computing the similarity between the two vectors. The neural model is trained using the weakly supervised labels as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. An evaluation of this approach on a large corpus of Norwegian real estate condition reports and their associated summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As detailed in Section 4, this weak supervision approach is able to outperform unsupervised methods based on Latent Semantic Analysis (Deerwester et al., 1990) or Doc2Vec embeddings (Le and Mikolov, 2014) -by a large margin. Although the approach is evaluated on a specific corpus of real estate reports, the proposed methodology can be applied to any type of summaries, provided human experts are able to specify heuristics to assess the summary quality in the target domain.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF9"
},
{
"start": 182,
"end": 204,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Summary evaluation has so far been mostly studied in relation to the task of automatic text summarization, i.e., the automated generation of summaries conditioned on the full document (Rush et al., 2015; Cheng and Lapata, 2016; Gambhir and Gupta, 2017; Cao et al., 2018; Fernandes et al., 2019) . However, few papers have investigated how to evaluate the quality of human-generated summaries such as the short summaries associated with real estate condition reports. Lloret et al. (2018) provide an overview of evaluation metrics for text summarization, focusing on three quality criteria: readability, non-redundancy and content coverage. Although readability and non-redundancy are important criteria to evaluate automatic text summarization systems, they are less relevant for assessing human-generated summaries written by professionals. The criteria of content coverage is, however, relevant in both contexts, and will be the main focus of this paper.",
"cite_spans": [
{
"start": 184,
"end": 203,
"text": "(Rush et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 204,
"end": 227,
"text": "Cheng and Lapata, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 228,
"end": 252,
"text": "Gambhir and Gupta, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 270,
"text": "Cao et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 271,
"end": 294,
"text": "Fernandes et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 467,
"end": 487,
"text": "Lloret et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "Metrics for summary evaluation can be divided in three overarching groups (Cabrera-Diego and Torres-Moreno, 2018; Ermakova et al., 2019):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "1. Manual evaluation based on human judgments, where participants fill questionnaires to rate the summary quality according to a number of criteria (Nenkova and Passonneau, 2004; .",
"cite_spans": [
{
"start": 148,
"end": 178,
"text": "(Nenkova and Passonneau, 2004;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "2. Automatic evaluation from overlap-measures with reference summaries written by human experts (Lin, 2004; Conroy and Dang, 2008; Giannakopoulos, 2013; Zhang et al., 2020) . One popular metric based on this idea is ROUGE (Lin, 2004) , which is computed from the proportion of n-grams that are observed in both the generated output and the reference summaries.",
"cite_spans": [
{
"start": 96,
"end": 107,
"text": "(Lin, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 108,
"end": 130,
"text": "Conroy and Dang, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 131,
"end": 152,
"text": "Giannakopoulos, 2013;",
"ref_id": "BIBREF15"
},
{
"start": 153,
"end": 172,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF45"
},
{
"start": 222,
"end": 233,
"text": "(Lin, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "3. Automatic evaluation without reference summaries, typically using measures of divergence between the generated summary and the source document (Torres-Moreno et al., 2010; Louis and Nenkova, 2013; Cabrera-Diego and Torres-Moreno, 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "The evaluation method proposed in this paper fits into the last category, as we do not require the availability of reference summaries. However, contrary to divergence-based metrics, the summary quality is estimated here on the basis of heuristic rules provided by human experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary evaluation",
"sec_num": "2.1"
},
{
"text": "The proposed approach is also related to models of semantic similarity, as the purpose of our summary evaluation is to assess the extent to which the criteria of content coverage is satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.2"
},
{
"text": "There is a vast body of existing work on how to measure the semantic similarity between documents. This topic is also the focus of various benchmarks, such as the Microsoft Research Paraphrase (MSRP) corpus (Dolan et al., 2004) and the Semantic Textual Similarity (STS) benchmark (Cer et al., 2017) , both expressed as pairs of short documents. The ACL Anthology Network (Radev et al., 2009) is also used for measuring semantic similarity between articles in Liu et al. (2017) . Gong et al. (2019) investigates how to measure similarity between documents of varying sizes. Document similarity can be computed from topic models based on, e.g., Latent Dirichlet Allocation (Blei et al., 2003; Rus et al., 2013; Liu et al., 2017) , or through document embeddings (Le and Mikolov, 2014; Lau and Baldwin, 2016; Liu et al., 2017; Cer et al., 2017; Gong et al., 2019; Vrbanec and Me\u0161trovi\u0107, 2020) . Contextual word representations such as BERT, XLNet or GPT-3 (Devlin et al., 2018; Yang et al., 2019; Brown et al., 2020) , can also be used to derive document embeddings and have been shown to improve performance on document similarity benchmarks (Reimers and Gurevych, 2019; Li et al., 2020) , notably on the MSRP corpus and the STS benchmark.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Dolan et al., 2004)",
"ref_id": "BIBREF11"
},
{
"start": 280,
"end": 298,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 371,
"end": 391,
"text": "(Radev et al., 2009)",
"ref_id": "BIBREF30"
},
{
"start": 459,
"end": 476,
"text": "Liu et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 479,
"end": 497,
"text": "Gong et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 671,
"end": 690,
"text": "(Blei et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 691,
"end": 708,
"text": "Rus et al., 2013;",
"ref_id": "BIBREF36"
},
{
"start": 709,
"end": 726,
"text": "Liu et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 760,
"end": 782,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF21"
},
{
"start": 783,
"end": 805,
"text": "Lau and Baldwin, 2016;",
"ref_id": "BIBREF20"
},
{
"start": 806,
"end": 823,
"text": "Liu et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 824,
"end": 841,
"text": "Cer et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 842,
"end": 860,
"text": "Gong et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 861,
"end": 889,
"text": "Vrbanec and Me\u0161trovi\u0107, 2020)",
"ref_id": "BIBREF42"
},
{
"start": 953,
"end": 974,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 975,
"end": 993,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 994,
"end": 1013,
"text": "Brown et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1140,
"end": 1168,
"text": "(Reimers and Gurevych, 2019;",
"ref_id": "BIBREF34"
},
{
"start": 1169,
"end": 1185,
"text": "Li et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.2"
},
{
"text": "Of particular relevance to this paper is the text matching approach of Zhong et al. (2020) in which the source document and potential summaries are matched in a semantic space. Their approach is, however, optimised for the problem of extracting summaries, while our focus is on evaluating existing, human-generated summaries, using expertwritten rules as quality indicators.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "Zhong et al. (2020)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.2"
},
{
"text": "The key idea behind weak supervision is to label data points using a combination of weak (noisy) supervision signals instead of relying on a single gold standard. Those supervision signals are typically expressed as labeling functions, which may take the form of heuristic rules, lookups in external knowledge bases, machine learning models, or even annotations from crowd-workers. The result of those labeling functions are then aggregated using a generative model that estimates the accuracy (and possible correlations) of each function. Once aggregated, the (probabilistic) labels can be employed to train any type of machine learning model using supervised learning. One key benefit of weak supervision frameworks lies in their ability to inject expert knowledge to learn data-driven models in situations when data is scarce or non-existent (Hu et al., 2016; Wang and Poon, 2018) .",
"cite_spans": [
{
"start": 845,
"end": 862,
"text": "(Hu et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 863,
"end": 883,
"text": "Wang and Poon, 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weak supervision",
"sec_num": "2.3"
},
{
"text": "Weak supervision makes it possible to leverage external knowledge sources to automatically label data points instead of relying exclusively on handannotated data. An early application of this idea is distant supervision (Mintz et al., 2009; Ritter et al., 2013) , where knowledge bases are used to automatically label documents with specific categories. One popular approach for weak supervision is the Snorkel framework, which was first introduced by Ratner et al. 2016, and later expanded by Ratner et al. (2017) and .",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF28"
},
{
"start": 241,
"end": 261,
"text": "Ritter et al., 2013)",
"ref_id": "BIBREF35"
},
{
"start": 494,
"end": 514,
"text": "Ratner et al. (2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weak supervision",
"sec_num": "2.3"
},
{
"text": "Weak supervision frameworks have been applied to a number of NLP tasks, from named entity recognition to relation extraction and dialogue state tracking (Bach et al., 2019; Bringer et al., 2019; Lison et al., 2020; Safranchik et al., 2020) . There is, however, little work with weak supervision related to document similarity or summary quality evaluation.",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "(Bach et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 173,
"end": 194,
"text": "Bringer et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 195,
"end": 214,
"text": "Lison et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 215,
"end": 239,
"text": "Safranchik et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weak supervision",
"sec_num": "2.3"
},
{
"text": "The approach adopted in this paper is divided in two steps. We first define and apply a set of labeling functions to the dataset, allowing us to derive binary (good/bad) quality indicators on the summaries in relation to their full-length reports. Those quality indicators are then aggregated into a single, probabilistic measure of summary quality using weak supervision. The dataset and labeling functions are described in Sections 3.1 and 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Then, using those aggregated labels as targets, we learn a neural model that maps the reports and summaries to a common summary content space. The resulting embeddings should reflect only key semantic information that is relevant for measuring summary quality, so that it can be measured by the cosine similarity in this space. The neural architecture and associated document embedding methods are defined in Sections 3.3 and 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Assessing the summary quality using a neural model instead of relying directly on the quality indicators derived from the labeling functions has two major advantages. First, the neural model can generalise to all possible report/summary pairs, while aggregated labels may be absent for some summaries, as the rules are only triggered when specific conditions are met. Second, some labeling functions depend on external resources that may be unavailable at prediction time. For instance, one labeling function relies on whether the buyer has filed an insurance claim, which is a piece of information that is only available for historical data, and requires us to \"peek into the future\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "The corpus contains 96 534 real estate condition reports, each containing the following parts: i) Textual descriptions of various parts of the real estate (e.g., rooms) along with a textual assessment of their physical condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "ii) Condition degrees (\"tilstandsgrad\" or TG) for parts of the real estate, in the range 0-3, where 0 indicates perfect condition (for new buildings) and 3 a seriously deteriorated condition, due to a major damage or defect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "iii) Metadata for the real estate and the condition report -e.g., size, building year, the author of the report, date of assessment, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "iv) The summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "We consider (i) as constituting the full-length report, denoted r, while the summary text (iv) will be denoted s. The metadata (ii)-(iii) is used only by the weak supervision model. The average report length is 1287 words (standard deviation: \u00b1627 words), while the average summary length is 183 words (standard deviation: \u00b1138 words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "A collection of 22 labeling functions was specified in cooperation with domain experts. Each function has two possible output values, depending on whether it implies a bad summary, denoted by (\u2212 \u2212 \u2212) or a good summary, denoted by (+ + +). If the rule condition is not met, the rule abstains from suggesting an output (Ratner et al., 2017) . The full list of labeling functions is the following: For a given summary, let y be the unknown true label, with possible values \u22121 (bad) and 1 (good), and let \u03bb be the outputs of the labeling functions. By applying these to the real estate condition reports, a generative label model P \u00b5 (y | \u03bb) can be estimated in a fully unsupervised fashion, as described by . We then obtain labels y + = P \u00b5 (y = 1 | \u03bb) \u2208 [0, 1], indicating the probability that a given summary is good.",
"cite_spans": [
{
"start": 317,
"end": 338,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Functions",
"sec_num": "3.2"
},
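The aggregation step can be sketched as a naive-Bayes-style weighted vote over the labeling-function outputs. This is a simplified stand-in for the Snorkel generative label model used in the paper (which estimates the accuracies and correlations of the labeling functions from data); the accuracy values and function names below are hypothetical, for illustration only.

```python
import numpy as np

ABSTAIN, BAD, GOOD = 0, -1, 1

def aggregate(votes, accuracies):
    """Combine labeling-function votes into y+ = P(y = good | lambda).

    votes      : array (n_summaries, n_lfs) with entries in {ABSTAIN, BAD, GOOD}
    accuracies : per-LF accuracy estimates in (0.5, 1.0); in Snorkel these are
                 learned by the label model, here they are assumed given.
    """
    # Each non-abstaining LF contributes vote * log-odds of its accuracy;
    # abstentions (0) contribute nothing, as in the paper's setup.
    log_odds = np.log(accuracies / (1.0 - accuracies))
    score = (votes * log_odds).sum(axis=1)
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid -> probabilistic label y+

# Two summaries, three hypothetical labeling functions.
votes = np.array([[GOOD, GOOD, ABSTAIN],
                  [BAD,  ABSTAIN, BAD]])
acc = np.array([0.8, 0.7, 0.9])
y_plus = aggregate(votes, acc)  # probability that each summary is good
```

Summaries where most accurate functions vote (+) get y+ close to 1; abstaining functions simply drop out of the sum.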
{
"text": "Let R denote the set of all possible reports and summaries, and let Z be the summary content space. We define the summary quality model as a function q(r, s) comparing two document embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "q(r, s) = cos sim h(r), h(s) = cos sim(z r , z s ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "where h : R \u2192 Z is a learned mapping from texts (full reports or summaries) to vectors. The general architecture is illustrated in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
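The quality model can be illustrated as follows. The bag-of-words mapping h and its tiny vocabulary below are placeholders for the learned neural mapping of Section 3.4; only the overall shape q(r, s) = cos sim(h(r), h(s)) is taken from the paper.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def quality(h, report, summary):
    """q(r, s) = cos_sim(h(r), h(s)) for any text-to-vector mapping h."""
    return cos_sim(h(report), h(summary))

# Toy stand-in for the learned mapping h: counts over a fixed vocabulary.
VOCAB = ["roof", "damage", "bathroom", "kitchen", "new"]
def h(text):
    words = text.lower().split()
    return np.array([float(words.count(w)) for w in VOCAB])

q = quality(h, "damage to the roof and bathroom", "roof damage")
```

A summary that covers the report's content gets a score near 1; an unrelated one scores near 0 (or below, for a learned h that can produce negative similarities).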
{
"text": "The training objective for h should be such that a good (bad) summary should yield a high (low) cosine similarity. We also want our models to return quality scores distributed over the entire cosine domain [\u22121, 1], and we find that the standard crossentropy loss tends to push the values towards the edges. Instead, we use a variation of the cosine embedding loss function, given by l q(r, s), y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "= max 0, \u03c4 good \u2212 cos sim(z r , z s ) , y = 1 max 0, cos sim(z r , z s ) \u2212 \u03c4 bad , y = \u22121,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "where \u03c4 good and \u03c4 bad are thresholds on the quality scores of good/bad summaries. A loss of zero is obtained if good summaries have a quality score higher than \u03c4 good or if bad summaries have a quality score lower than \u03c4 bad . The model will thereby not perform better by pushing the quality of summaries above \u03c4 good or below \u03c4 bad , which encourages the model to return scores on a larger part of the cosine domain [\u22121, 1]. We find experimentally that \u03c4 good = 0.2 and \u03c4 bad = \u22120.2 result in models with an appropriate distribution of values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
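The thresholded loss is a one-sided hinge in each direction; a direct sketch:

```python
def summary_loss(q, y, tau_good=0.2, tau_bad=-0.2):
    """Hinged cosine-embedding loss (sketch of the paper's variant).

    q : cosine-similarity quality score in [-1, 1]
    y : label, 1 (good) or -1 (bad)
    The loss is zero once a good summary scores above tau_good, or a bad
    one scores below tau_bad, so the model gains nothing from pushing
    scores to the extremes of the cosine domain.
    """
    if y == 1:
        return max(0.0, tau_good - q)
    return max(0.0, q - tau_bad)
```

For example, a good summary scoring 0.5 already incurs zero loss, while one scoring 0.0 is penalised by 0.2.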
{
"text": "The weak supervision labels y + are expected to be noisy. We follow in using a noise-aware version of our loss function l q(r, s), y for training, which we define by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "l * q(r, s), y + = E y\u223cP\u00b5(y|\u03bb) l q(r, s), y = y + \u2022 l q(r, s), 1 + (1 \u2212 y + ) \u2022 l q(r, s), \u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
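Equation (1) is just the expectation of the hinged loss under the probabilistic label; a sketch (the base loss is restated here so the snippet is self-contained):

```python
def loss(q, y, tau_good=0.2, tau_bad=-0.2):
    """Base hinged loss l(q(r, s), y), restating the paper's definition."""
    return max(0.0, tau_good - q) if y == 1 else max(0.0, q - tau_bad)

def noise_aware_loss(q, y_plus, tau_good=0.2, tau_bad=-0.2):
    """Expected loss under the probabilistic label y+ = P_mu(y = 1 | lambda):
    y+ * l(q, 1) + (1 - y+) * l(q, -1), as in Equation (1)."""
    return (y_plus * loss(q, 1, tau_good, tau_bad)
            + (1.0 - y_plus) * loss(q, -1, tau_good, tau_bad))
```

A confident label (y+ near 0 or 1) reduces to the plain loss, while an uncertain label (y+ near 0.5) penalises scores on either side of the thresholds equally.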
{
"text": "( 1)Having defined the general model architecture and its training procedure, we now detail various solutions to express the mapping h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality Model",
"sec_num": "3.3"
},
{
"text": "We start with unsupervised baseline models, and experiment with both Latent Semantic Analysis (Deerwester et al., 1990) and Doc2vec (Le and Mikolov, 2014) , for their ability to easily embed arbitrarily long documents. We train LSA and Doc2vec on the training set (ignoring the quality labels, as those techniques are self-supervised). These models can be described in Figure 1 by removing the training and neural network components, and by using LSA or Doc2vec for the embeddings. We use a dimensionality of 500 for LSA and 100 for Doc2vec.",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF9"
},
{
"start": 132,
"end": 154,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 369,
"end": 377,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "LSA and Doc2vec",
"sec_num": "3.4.1"
},
{
"text": "Our first supervised model for h is a feed-forward network. We first embed the reports and summaries with LSA or Doc2vec (both of dimension 500) as described above, and add a feed-forward transformation of those vectors which is optimised on the basis of the embedding loss function. The network weights are shared for both the full report r and the summary s. The architecture becomes as illustrated in Figure 1 by inserting LSA or Doc2vec into the embedding component and a feed-forward network into the neural-network component. We refer to the resulting models as LSA+FFN and Doc2vec+FFN.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "FFN-based models",
"sec_num": "3.4.2"
},
{
"text": "We employ the ReLu activation function in all layers except the last, which is linear (i.e., has no activation function). By using only a single feedforward layer, this model architecture becomes equivalent to a linear transformation of the LSA or Doc2vec embeddings. We refer to the resulting models as LSA+LinTrans and Doc2vec+LinTrans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FFN-based models",
"sec_num": "3.4.2"
},
{
"text": "The hidden layers have 1000 units, and the final layer 100 units. LSA+FFN and Doc2vec+FFN respectively use two and three hidden layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FFN-based models",
"sec_num": "3.4.2"
},
{
"text": "The second model for the function h mapping reports and summaries to the summary content space is an LSTM network. LSTMs are commonly used over word embeddings, but this approach is hard to scale due to the length of real estate condition reports. Instead, we split the reports into sections, and summaries into sentences and use LSA or Doc2vec to embed each, giving a sequence of vectors for each report and summary, and train the LSTM on these. A final, fully connected linear layer is placed on the LSTM output. In Figure 1 the pre-processing component now includes the splitting of sections/sentences, the embedding component is LSA or Doc2vec, and the neural-network component is the LSTM. We refer to the resulting models as LSA+LSTM and Doc2vec+LSTM. We use a single, unidirectional LSTM layer with a cell dimensionality of 100, along with 100 units in the final dense layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 518,
"end": 526,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "LSTM-based models",
"sec_num": "3.4.3"
},
{
"text": "The final model for h is a convolutional neural network with word embeddings as inputs. Those word embeddings are estimated either by Word2vec (dimension: 100) or a neural embedding layer (dimension: 500), both trained on the training set of the corpus. We use 1D convolutions with window size \u2208 {2, 3, 5, 7, 10} and a number of filters equivalent to the word embedding dimension. We then apply a maximum pooling to obtain a single output vector, fed to a final, fully-connected linear layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional models",
"sec_num": "3.4.4"
},
{
"text": "One benefit of convolutional neural networks is their scalability when processing long documents. The convolutional model detects local text patterns that are especially predictive for the summary quality, thereby providing a good mapping to the summary content space. In Figure 1 the pre-processing component now includes tokenisation, the embedding component is the embedding layer or Word2vec, and the neural-network is the CNN. We refer to the resulting models as EmbLayer+CNN and Word2vec+CNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Convolutional models",
"sec_num": "3.4.4"
},
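The multi-window convolution with max pooling over time can be sketched in numpy as follows. This is an illustration, not the authors' code: how the outputs of the different window sizes are combined (concatenation below) is our assumption, and the random filters stand in for learned weights.

```python
import numpy as np

rng = np.random.default_rng(1)
EMB_DIM = 100                # Word2vec dimension used in the paper
WINDOWS = (2, 3, 5, 7, 10)   # convolution window sizes from the paper

# One filter bank per window size; the number of filters equals EMB_DIM.
filters = {w: rng.normal(scale=0.01, size=(w * EMB_DIM, EMB_DIM))
           for w in WINDOWS}

def conv_max_pool(tokens: np.ndarray) -> np.ndarray:
    """1D convolutions over word embeddings, then max pooling over time.

    tokens: (T, EMB_DIM) matrix of word embeddings for one document;
    returns a fixed-size vector regardless of document length T."""
    pooled = []
    for w, F in filters.items():
        # Each window of w consecutive embeddings is flattened and projected.
        windows = np.stack([tokens[i:i + w].ravel()
                            for i in range(len(tokens) - w + 1)])
        pooled.append((windows @ F).max(axis=0))  # max over positions
    return np.concatenate(pooled)

out = conv_max_pool(rng.normal(size=(20, EMB_DIM)))
assert out.shape == (len(WINDOWS) * EMB_DIM,)
```

Because the pooling collapses the time dimension, the cost of the model grows only linearly with document length, which is the scalability benefit noted above.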
{
"text": "Table 1 (No., label, Cov., Overlap, Conflict, Acc.): 1 (\u2212) 10.4 % 96.2 % 22.1 % 100 %; 2 (\u2212) 7.9 % 91.1 % 82.3 % 10.9 %; 3 (\u2212) 5.1 % 90.2 % 27.5 % 71.5 %; 4 (\u2212) 2.4 % 95.8 % 50.0 % 58.5 %; 5 (\u2212) 2.6 % 92.3 % 30.8 % 78.0 %; 6 (+) 36.9 % 76.4 % 46.1 % 74.9 %; 7 (+) 11.6 % 93.1 % 47.4 % 97.3 %; 8 (+) 25.1 % 83.7 % 46.2 % 82.0 %; 9 (\u2212) 7.6 % 84.2 % 22.4 % 73.5 %; 10 (\u2212) 5.1 % 90.2 % 45.1 % 60.8 %; 11 (\u2212) 8.1 % 82.7 % 34.6 % 72.9 %; 12 (\u2212) 11.8 % 92.4 % 42.4 % 73.4 %; 13 (\u2212) 10.7 % 93.5 % 26.2 % 100 %; 14 (\u2212) 1.8 % 83.3 % 55.6 % 47.9 %; 15 (\u2212) 1.6 % 93.8 % 43.8 % 71.5 %; 16 (\u2212) 10.8 % 88.9 % 48.1 % 57.9 %; 17 (\u2212) 10.0 % 91.0 % 37.0 % 100 %; 18 (\u2212) 5.4 % 85.2 % 48.1 % 58.9 %; 19 (\u2212) 3.4 % 76.5 % 14.7 % 83.4 %; 20 (+) 6.3 % 85.7 % 49.2 % 63.3 %; 21 (\u2212) 7.1 % 94.4 % 11.3 % 100 %; 22 (+) 6.2 % 91.9 % 48.4 % 100 %. Table 1: Analysis of the 22 labeling functions when applied to the real estate condition report corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No.",
"sec_num": null
},
{
"text": "Table 1 shows, for each labeling function, its coverage (as a percentage of the full corpus), the proportion of overlaps with at least one other labeling function, the proportion of conflicts with at least one other labeling function, and its accuracy estimated through the aggregated label model. The weak supervision model abstains from labeling 15.9% of the summaries, giving us a labeled dataset of M_lab = 81 195 samples. Figure 2 shows a histogram of the resulting probabilistic labels, y_m^+ = P_\u03bc(y_m = 1 | \u03bb_m) for m = 1, . . . , M_lab, where each y_m^+ is the probability of summary m being of high quality. We observe many summaries for which y_m^+ \u2248 0 or y_m^+ > 0.7. The labels are otherwise quite evenly distributed over the probability range [0, 1], and their average is 0.493, which indicates that the dataset is well balanced and does not require oversampling. We split the labeled dataset of 81 195 samples in the ratio 8:1:1, yielding a training set of 64 955 samples and validation and test sets of 8 120 samples each.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
},
{
"start": 428,
"end": 436,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "No.",
"sec_num": null
},
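The 8:1:1 split into training, validation, and test sets can be reproduced as follows. This is a minimal numpy sketch; the random seed and the simulated labels are our own, not the authors' setup.

```python
import numpy as np

M_LAB = 81_195                   # number of weakly labeled samples
rng = np.random.default_rng(42)  # assumed seed, for illustration only

# Simulated probabilistic labels y+ from the weak supervision label model.
y_plus = rng.uniform(size=M_LAB)

# 8:1:1 split; the paper reports 8 120 validation and test samples each.
n_val = n_test = 8_120
n_train = M_LAB - n_val - n_test  # 64 955, as in the paper

idx = rng.permutation(M_LAB)
train, val, test = (idx[:n_train],
                    idx[n_train:n_train + n_val],
                    idx[n_train + n_val:])

assert (len(train), len(val), len(test)) == (64_955, 8_120, 8_120)
```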
{
"text": "We evaluate our models against the weak supervision labels. The model performances on the test set are given in Table 2 , measured by the standard classification scores accuracy and F 1 , and for the supervised models also by the loss function given in (1). We train the models using the Adam optimizer with a learning rate of 1 \u00d7 10 \u22124 , reduced by a factor of 0.1 after one third of the epochs, and again after two thirds. We also employ a dropout of 0.2 in the hidden layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Model Performance",
"sec_num": "4.2"
},
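The learning-rate schedule described above can be written out as a small helper. This is a sketch; the paper does not state the total number of epochs, so `n_epochs` is left as a parameter.

```python
def learning_rate(epoch: int, n_epochs: int, base_lr: float = 1e-4) -> float:
    """Step schedule from the paper: base rate 1e-4, multiplied by 0.1
    after one third of the epochs and again after two thirds."""
    if epoch < n_epochs / 3:
        return base_lr
    if epoch < 2 * n_epochs / 3:
        return base_lr * 0.1
    return base_lr * 0.01

# With 30 epochs: 1e-4 for epochs 0-9, 1e-5 for 10-19, 1e-6 for 20-29.
assert learning_rate(0, 30) == 1e-4
assert learning_rate(10, 30) == 1e-4 * 0.1
assert learning_rate(29, 30) == 1e-4 * 0.01
```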
{
"text": "For the computation of accuracy and F 1 , the probabilistic labels y + and the quality measures q(r, s) are converted to binary labels; the threshold for y + is 0.5, while for q(r, s) the threshold is tuned on the validation set. We see that the supervised models outperform the unsupervised ones and that the model Word2vec+CNN achieves the best performance both in terms of accuracy and F 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Performance",
"sec_num": "4.2"
},
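Tuning the binarization threshold for q(r, s) on the validation set can be sketched as a simple grid search. The candidate grid below is our own assumption; the paper does not specify how the threshold search is performed.

```python
import numpy as np

def tune_threshold(q_scores, labels, candidates=np.linspace(-1, 1, 201)):
    """Pick the q(r, s) threshold maximizing accuracy against the
    binarized weak-supervision labels (y+ thresholded at 0.5)."""
    best_t, best_acc = None, -1.0
    for t in candidates:
        acc = np.mean((q_scores > t) == labels)
        if acc > best_acc:
            best_t, best_acc = float(t), float(acc)
    return best_t, best_acc

# Toy validation set: quality scores and binarized labels.
q = np.array([-0.8, -0.3, 0.1, 0.4, 0.9])
y = np.array([False, False, True, True, True])
t, acc = tune_threshold(q, y)
assert acc == 1.0  # a threshold between -0.3 and 0.1 separates the toy data
```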
{
"text": "It should be noted that the aggregated labels obtained with weak supervision only constitute a proxy for the ground truth. Although we expect them to provide good indications of the overall quality of the summaries in this domain, we cannot be certain of how well they correlate with human judgment, so our conclusions regarding the ability of various models to measure summary quality must remain somewhat tentative. Figure 3 illustrates the performance of four models by showing the distributions of quality measures for samples where the weak supervision label model is confident about the label. Summaries with y + \u2265 0.9 are shown in green and those with y + \u2264 0.1 are shown in red. We observe that all of these models are, to some degree, able to distinguish good summaries from bad ones. The unsupervised LSA baseline does, however, have much more overlap than the other models, which reflects the poorer performance in in that it pushes the quality measures just below \u03c4 bad = \u22120.2 or just above \u03c4 good = 0.2, instead of distributing them on the complete quality range [\u22121, 1]. This behavior effectively makes it a classifier rather than a model of quality measure. We observe the same behavior for the Doc2vec+LSTM model and FFN-based models. The LinTrans and CNN-based models, on the other hand, yield a good separation of good and bad summaries, while distributing them on a large portion of the quality range, which is the behavior we seek. Figure 4 illustrates the distribution of quality measures assigned to all of the M = 96 534 samples in the corpus by the Word2vec+CNN model. By comparing this histogram to the one in Figure 2 , we see that this model provides a more continuous quality measure than the labels aggregated from the labeling functions using weak supervision.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 426,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1452,
"end": 1460,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1635,
"end": 1643,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Performance",
"sec_num": "4.2"
},
{
"text": "When applied to the entire corpus of real estate condition reports and summaries, including the ones that the weak supervision model abstained from labeling, the Word2vec+CNN model finds that 35% of the summaries have a quality score q(r, s) below \u03c4 bad = \u22120.2, our chosen threshold for being of poor quality, while 33% are judged to be of high quality (i.e., q(r, s) > \u03c4 good = 0.2), while the remaining 31% are considered mediocre. The LSA+LinTrans model find 28% of the summaries to be of poor quality, and an average of the CNN and LinTrans models gives a proportion of poor summaries around 30%. If almost a third of the summaries of real estate condition reports are in fact of poor quality, this would bode ill for the real estate buyers that do not read the full reports.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Quality",
"sec_num": "4.3"
},
{
"text": "Three example summaries are included in Ap- pendix A. Their predicted quality measures using the weak supervision model, the LinTrans models and the CNN models are given in Table 3 . We see that all models agree that the first summary is of good quality, and that the second is relatively bad.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary Quality",
"sec_num": "4.3"
},
{
"text": "Since the first summary is quite thorough while the second is excessively short and rather uninformative, this is in line with our expectations. The third summary, however, is considered poor by the label model but quite good by the neural models. As this is also a quite thorough summary which captures the essence of its corresponding report, the supervised models appear, in this case, to outperform the labels they were trained on. We observed several such examples in the corpus, but without data from human judgments, we cannot ascertain to what extent the neural models are truly more reliable than the weak supervision labels. Table 3: Quality scores for the three example summaries given in the appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 630,
"end": 637,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary Quality",
"sec_num": "4.3"
},
{
"text": "This paper describes a novel approach to automatically assess the quality (focusing primarily on the criteria of content coverage) of human-generated summaries, using a corpus of real estate condition reports as a concrete example. The approach relies on the creation of document embeddings that are appropriate for measuring summary quality. This gives us a particular kind of semantic space (the summary content space) where summary quality can be measured by the cosine similarity between the report and its summary. Since we have no access to \"ground truth\" values for the summary quality, we obtain indirect quality indicators based on a set of 22 heuristic rules gathered from human experts. Those quality indi-cators are then aggregated into a single probability (of a summary being of high quality) using weak supervision. The aggregated probabilities are subsequently employed as targets for training neural models optimised for the task of predicting summary quality. Evaluation results show that the best neural model, based on a convolutional architecture, achieves an overall accuracy of 89.5% when measuring the model output against the aggregated labels, while the best unsupervised model (LSA) only achieves an accuracy of 72.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "An important limitation of the proposed method is the reliance on indirect indicators of summary quality (as expressed by the heuristic rules) instead of human judgments. A key research question for future work is thus to examine the correlations between the quality measures derived from the labeling functions and human judgments. While the heuristic rules do not capture all aspects that may influence the overall quality of a summary, our hypothesis (yet to be validated) is that they nevertheless correlate well with human judgments. An additional benefit of these heuristic rules is their explanatory power, making it possible to provide concrete, human-readable suggestions on how to improve a given summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Although not considered in this paper, the use of document embeddings relying on contextual word representations is another interesting research question that we wish to investigate in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snorkel DryBell: A case study in deploying weak supervision at industrial scale",
"authors": [
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Yintao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Haidong",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Souvik",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Houman",
"middle": [],
"last": "Alborzi",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Kuchhal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Malkin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "362--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen H. Bach, Daniel Rodriguez, Yintao Liu, Chong Luo, Haidong Shao, Cassandra Xia, Souvik Sen, Alex Ratner, Braden Hancock, Houman Al- borzi, Rahul Kuchhal, Chris R\u00e9, and Rob Malkin. 2019. Snorkel DryBell: A case study in deploying weak supervision at industrial scale. In Proceedings of the 2019 International Conference on Manage- ment of Data, pages 362-375. Association for Com- puting Machinery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Osprey: Weak supervision of imbalanced extraction problems without code",
"authors": [
{
"first": "Eran",
"middle": [],
"last": "Bringer",
"suffix": ""
},
{
"first": "Abraham",
"middle": [],
"last": "Israeli",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd International Workshop on Data Management for End-to-End Machine Learning",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eran Bringer, Abraham Israeli, Yoav Shoham, Alex Ratner, and Christopher R\u00e9. 2019. Osprey: Weak supervision of imbalanced extraction problems with- out code. In Proceedings of the 3rd International Workshop on Data Management for End-to-End Ma- chine Learning, pages 1-11. Association for Com- puting Machinery.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mc-Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
},
{
"first": "Sandhini",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Herbert-Voss",
"suffix": ""
},
{
"first": "Gretchen",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Ziegler",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Clemens",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hesse",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Sigler",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Litwin",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Gray",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Chess",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Berner",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "McCandlish",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learn- ers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Informa- tion Processing Systems 2020, NeurIPS 2020, De- cember 6-12, 2020, virtual.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Summtriver: A new trivergent model to evaluate summaries automatically without human references",
"authors": [
{
"first": "Luis",
"middle": [
"Adri\u00e1n"
],
"last": "Cabrera-Diego",
"suffix": ""
},
{
"first": "Juan-Manuel",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
}
],
"year": 2018,
"venue": "Data & Knowledge Engineering",
"volume": "113",
"issue": "",
"pages": "184--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Adri\u00e1n Cabrera-Diego and Juan-Manuel Torres- Moreno. 2018. Summtriver: A new trivergent model to evaluate summaries automatically without hu- man references. Data & Knowledge Engineering, 113:184-197.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Retrieve, rerank and rewrite: Soft template based neural summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "152--161",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 152-161, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 1-14. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural summarization by extracting sentences and words",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "484--494",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1046"
]
},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural sum- marization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 484-494, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mind the gap: Dangers of divorcing evaluations of summary content from linguistic quality",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008)",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Conroy and Hoa Trang Dang. 2008. Mind the gap: Dangers of divorcing evaluations of sum- mary content from linguistic quality. In Proceedings of the 22nd International Conference on Computa- tional Linguistics (COLING 2008), pages 145-152, Manchester, UK. COLING 2008 Organizing Com- mittee.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {
"DOI": [
"10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9"
]
},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American Society for Information Science, 41(6):391-407.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "350--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of the 20th International Conference on Computational Linguistics, pages 350-356. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A survey on evaluation of summarization methods. Information processing & management",
"authors": [
{
"first": "Liana",
"middle": [],
"last": "Ermakova",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Val\u00e8re Cossu",
"suffix": ""
},
{
"first": "Josiane",
"middle": [],
"last": "Mothe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "56",
"issue": "",
"pages": "1794--1814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liana Ermakova, Jean Val\u00e8re Cossu, and Josiane Mothe. 2019. A survey on evaluation of summa- rization methods. Information processing & man- agement, 56(5):1794-1814.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Structured neural summarization",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "Miltiadis",
"middle": [],
"last": "Allamanis",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations, ICLR 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summariza- tion. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recent automatic text summarization techniques: a survey",
"authors": [
{
"first": "Mahak",
"middle": [],
"last": "Gambhir",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2017,
"venue": "Artificial Intelligence Review",
"volume": "47",
"issue": "1",
"pages": "1--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahak Gambhir and Vishal Gupta. 2017. Recent auto- matic text summarization techniques: a survey. Arti- ficial Intelligence Review, 47(1):1-66.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-document multilingual summarization and evaluation tracks in ACL 2013 MultiLing workshop",
"authors": [
{
"first": "George",
"middle": [],
"last": "Giannakopoulos",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the MultiLing 2013 Workshop on Multilingual Multidocument Summarization",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Giannakopoulos. 2013. Multi-document multi- lingual summarization and evaluation tracks in ACL 2013 MultiLing workshop. In Proceedings of the MultiLing 2013 Workshop on Multilingual Multi- document Summarization, pages 20-28, Sofia, Bul- garia. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Document similarity for texts of varying lengths via hidden topics",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Tarek",
"middle": [],
"last": "Sakakini",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Jinjun",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10675"
]
},
"num": null,
"urls": [],
"raw_text": "Hongyu Gong, Tarek Sakakini, Suma Bhat, and Jin- jun Xiong. 2019. Document similarity for texts of varying lengths via hidden topics. arXiv preprint arXiv:1903.10675.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning from dialogue after deployment: Feed yourself, chatbot!",
"authors": [
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Pierre-Emmanuel",
"middle": [],
"last": "Mazare",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3667--3684",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1358"
]
},
"num": null,
"urls": [],
"raw_text": "Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667-3684, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Harnessing deep neural networks with logic rules",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhengzhong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2410--2420",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1228"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neu- ral networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2410-2420, Berlin, Germany. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Konfliktniv\u00e5et ved bolighandel m\u00e5 ned",
"authors": [
{
"first": "Huseiernes",
"middle": [],
"last": "Landsforbund",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huseiernes Landsforbund. 2017. Konfliktniv\u00e5et ved bolighandel m\u00e5 ned. [Online; accessed 3-February- 2021].",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An empirical evaluation of doc2vec with practical insights into document embedding generation",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.05368"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An empiri- cal evaluation of doc2vec with practical insights into document embedding generation. arXiv preprint arXiv:1607.05368.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Interna- tional conference on machine learning, pages 1188- 1196. Proceedings of Machine Learning Research.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On the sentence embeddings from pre-trained language models",
"authors": [
{
"first": "Bohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9119--9130",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.733"
]
},
"num": null,
"urls": [],
"raw_text": "Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Named entity recognition without labelled data: A weak supervision approach",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Aliaksandr",
"middle": [],
"last": "Hubin",
"suffix": ""
},
{
"first": "Samia",
"middle": [],
"last": "Touileb",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1518--1533",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.139"
]
},
"num": null,
"urls": [],
"raw_text": "Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named entity recognition without labelled data: A weak supervision approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1518-1533, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Measuring similarity of academic articles with semantic profile and joint word embedding",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Zepeng",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Zeeshan",
"suffix": ""
}
],
"year": 2017,
"venue": "Tsinghua Science and Technology",
"volume": "22",
"issue": "6",
"pages": "619--632",
"other_ids": {
"DOI": [
"10.23919/TST.2017.8195345"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Liu, Bo Lang, Zepeng Gu, and Ahmed Zeeshan. 2017. Measuring similarity of academic articles with semantic profile and joint word embedding. Ts- inghua Science and Technology, 22(6):619-632.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The challenging task of summary evaluation: An overview. Language Resources and Evaluation",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Plaza",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "52",
"issue": "",
"pages": "101--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Lloret, Laura Plaza, and Ahmet Aker. 2018. The challenging task of summary evaluation: An overview. Language Resources and Evaluation, 52(1):101-148.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatically assessing machine summary content without a gold standard",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "2",
"pages": "267--300",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00123"
]
},
"num": null,
"urls": [],
"raw_text": "Annie Louis and Ani Nenkova. 2013. Automatically assessing machine summary content without a gold standard. Computational Linguistics, 39(2):267- 300.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Evaluating content selection in summarization: The pyramid method",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova and Rebecca Passonneau. 2004. Evaluat- ing content selection in summarization: The pyra- mid method. In Proceedings of the Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145-152, Boston, Massachusetts, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The ACL Anthology network",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dragomir",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Vahed",
"middle": [],
"last": "Muthukrishnan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Qazvinian",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (NLPIR4DL)",
"volume": "",
"issue": "",
"pages": "54--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir R. Radev, Pradeep Muthukrishnan, and Va- hed Qazvinian. 2009. The ACL Anthology net- work. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Li- braries (NLPIR4DL), pages 54-61, Suntec City, Sin- gapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VLDB Endowment",
"volume": "11",
"issue": "",
"pages": "269--282",
"other_ids": {
"DOI": [
"10.14778/3157794.3157797"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid training data creation with weak su- pervision. Proceedings of the VLDB Endowment, 11(3):269-282.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Training complex models with multi-task weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Sala",
"suffix": ""
},
{
"first": "Shreyash",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "4763--4771",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher R\u00e9. 2019. Training complex models with multi-task weak supervision. Proceedings of the AAAI Confer- ence on Artificial Intelligence, 33:4763-4771.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Data programming: Creating large training sets, quickly",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3574--3582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data program- ming: Creating large training sets, quickly. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 3574-3582. Curran Associates Inc.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sentence-BERT: Sentence embeddings using siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using siamese BERT- networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Modeling missing data in distant supervision for information extraction",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "367--378",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00234"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Et- zioni. 2013. Modeling missing data in distant su- pervision for information extraction. Transactions of the Association for Computational Linguistics, 1:367-378.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Similarity measures based on latent dirichlet allocation",
"authors": [
{
"first": "Nobal",
"middle": [],
"last": "Vasile Rus",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Niraula",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Banjade",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "459--470",
"other_ids": {
"DOI": [
"10.1007/978-3-642-37247-6_37"
]
},
"num": null,
"urls": [],
"raw_text": "Vasile Rus, Nobal Niraula, and Rajendra Banjade. 2013. Similarity measures based on latent dirichlet allocation. In Computational Linguistics and Intelli- gent Text Processing, pages 459-470. Springer.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Weakly supervised sequence tagging from noisy rules",
"authors": [
{
"first": "Esteban",
"middle": [],
"last": "Safranchik",
"suffix": ""
},
{
"first": "Shiying",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Bach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "5570--5578",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i04.6009"
]
},
"num": null,
"urls": [],
"raw_text": "Esteban Safranchik, Shiying Luo, and Stephen Bach. 2020. Weakly supervised sequence tagging from noisy rules. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):5570-5578.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Multilingual summarization evaluation without human models",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Juan-Manuel",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "Iria",
"middle": [],
"last": "Da Cunha",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Sanjuan",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Vel\u00e1zquez-Morales",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING 2010: Posters",
"volume": "",
"issue": "",
"pages": "1059--1067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion, Juan-Manuel Torres-Moreno, Iria da Cunha, Eric SanJuan, and Patricia Vel\u00e1zquez- Morales. 2010. Multilingual summarization eval- uation without human models. In COLING 2010: Posters, pages 1059-1067, Beijing, China. COLING 2010 Organizing Committee.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Kj\u00f8per dyre boliger i blinde. Dagsavisen",
"authors": [
{
"first": "Tor",
"middle": [],
"last": "Sandberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Online; accessed 3",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tor Sandberg. 2017. Kj\u00f8per dyre boliger i blinde. Dagsavisen. [Online; accessed 3-February-2021].",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Summary evaluation with and without references",
"authors": [
{
"first": "Juan-Manuel",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Iria Da Cunha",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Sanjuan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vel\u00e1zquez-Morales",
"suffix": ""
}
],
"year": 2010,
"venue": "Polibits",
"volume": "",
"issue": "42",
"pages": "13--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan-Manuel Torres-Moreno, Horacio Saggion, Iria da Cunha, Eric SanJuan, and Patricia Vel\u00e1zquez- Morales. 2010. Summary evaluation with and with- out references. Polibits, (42):13-20.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Corpus-based paraphrase detection experiments and review. Information",
"authors": [
{
"first": "Tedo",
"middle": [],
"last": "Vrbanec",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Me\u0161trovi\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3390/info11050241"
]
},
"num": null,
"urls": [],
"raw_text": "Tedo Vrbanec and Ana Me\u0161trovi\u0107. 2020. Corpus-based paraphrase detection experiments and review. Infor- mation, 11(5):241.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Deep probabilistic logic: A unifying framework for indirect supervision",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1891--1902",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Hai Wang and Hoifung Poon. 2018. Deep probabilis- tic logic: A unifying framework for indirect super- vision. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1891-1902, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "BERTScore: Evaluating text generation with BERT",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Extractive summarization as text matching",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiran",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Danqing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6197--6208",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.552"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extrac- tive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197-6208, On- line. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Summary shorter than 50 words. (\u2212 \u2212 \u2212) 2. Summary longer than 400 words. (\u2212 \u2212 \u2212) 3. TG3 for the bathroom, but no mention of the bathroom in summary. (\u2212 \u2212 \u2212) 4. TG3 for the kitchen, but no mention of the kitchen in summary. (\u2212 \u2212 \u2212) 5. TG3 for the roof, but no mention of the roof in summary. (\u2212 \u2212 \u2212) 6. TG2 or TG3 for the bathroom, with mention of the bathroom in summary. (+ + +) 7. TG2 or TG3 for the kitchen, with mention of the kitchen in summary. (+ + +) 8. TG2 or TG3 for the roof, with mention of the roof in summary. (+ + +) 9. Correction of TG in the bathroom, but no mention of the bathroom in summary. (\u2212 \u2212 \u2212) 10. Correction of TG in the kitchen, but no mention of the kitchen in summary. (\u2212 \u2212 \u2212) 11. Correction of TG on the roof, but no mention of the roof in summary. (\u2212 \u2212 \u2212) 12. Summary with long words readability score (LIKS) above 55. (\u2212 \u2212 \u2212) 13. Summary with unique words readability score (OVR) above 96. (\u2212 \u2212 \u2212) 14. An insurance claim has been raised on the real estate after the transaction. (\u2212 \u2212 \u2212) 15. Written by an agent with insurance claims on more than 7.5% of her reports. (\u2212 \u2212 \u2212) 16.Written by an agent with LIKS-score higher than 55 on more than 40% of her reports. (\u2212 \u2212 \u2212) 17. Written by an agent with OVR-score higher than 96 on more than 40% of her reports. (\u2212 \u2212 \u2212) 18. Written by an agent with fewer than 10 reports that year. (\u2212 \u2212 \u2212) 19. Fewer than 20% of the words in the summary are found in the report. (\u2212 \u2212 \u2212) 20. Fewer than 3% of the words in the report are found in the summary. (\u2212 \u2212 \u2212) 21. More than 70% of the words in the summary are also found in the report. (+ + +) 22. More than 20% of the words in the report are also found in the summary. (+ + +) General model architecture q(r, s).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Histogram showing the distribution of labels from the weak supervision model.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Normalized histograms showing the distribution of quality measures q(r, s) for summaries from the test set that the label model considers as good (shown as green) and bad (shown as red).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Normalized histogram of q(r, s) for the entire corpus of summaries.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "The distributions for the model LSA+LSTM is unexpected,",
"content": "<table><tr><td>Model</td><td colspan=\"2\">Loss Acc.</td><td>F 1</td></tr><tr><td>LSA</td><td>-</td><td colspan=\"2\">0.726 0.755</td></tr><tr><td>Doc2vec</td><td>-</td><td colspan=\"2\">0.684 0.686</td></tr><tr><td>LSA+LinTrans</td><td colspan=\"3\">0.095 0.863 0.876</td></tr><tr><td colspan=\"4\">Doc2vec+LinTrans 0.101 0.850 0.863</td></tr><tr><td>LSA+FFN</td><td colspan=\"3\">0.080 0.882 0.893</td></tr><tr><td>Doc2vec+FFN</td><td colspan=\"3\">0.079 0.885 0.897</td></tr><tr><td>LSA+LSTM</td><td colspan=\"3\">0.079 0.882 0.895</td></tr><tr><td>Doc2vec+LSTM</td><td colspan=\"3\">0.080 0.880 0.891</td></tr><tr><td>EmbLayer+CNN</td><td colspan=\"3\">0.088 0.888 0.898</td></tr><tr><td>Word2vec+CNN</td><td colspan=\"3\">0.085 0.895 0.905</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"text": "Model performances on the test set.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}