{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:30:56.400804Z"
},
"title": "A global analysis of metrics used for measuring performance in natural language processing",
"authors": [
{
"first": "Kathrin",
"middle": [],
"last": "Blagec",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University of Vienna",
"location": {
"settlement": "Vienna",
"country": "Austria"
}
},
"email": ""
},
{
"first": "Georg",
"middle": [],
"last": "Dorffner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University of Vienna",
"location": {
"settlement": "Vienna",
"country": "Austria"
}
},
"email": ""
},
{
"first": "Milad",
"middle": [],
"last": "Moradi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University of Vienna",
"location": {
"settlement": "Vienna",
"country": "Austria"
}
},
"email": ""
},
{
"first": "Simon",
"middle": [],
"last": "Ott",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University of Vienna",
"location": {
"settlement": "Vienna",
"country": "Austria"
}
},
"email": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Samwald",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University of Vienna",
"location": {
"settlement": "Vienna",
"country": "Austria"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Measuring the performance of natural language processing models is challenging. Traditionally used metrics, such as BLEU and ROUGE, originally devised for machine translation and summarization, have been shown to suffer from low correlation with human judgment and a lack of transferability to other tasks and languages. In the past 15 years, a wide range of alternative metrics have been proposed. However, it is unclear to what extent this has had an impact on NLP benchmarking efforts. Here we provide the first large-scale cross-sectional analysis of metrics used for measuring performance in natural language processing. We curated, mapped and systematized more than 3500 machine learning model performance results from the open repository 'Papers with Code' to enable a global and comprehensive analysis. Our results suggest that the large majority of natural language processing metrics currently used have properties that may result in an inadequate reflection of a model's performance. Furthermore, we found that ambiguities and inconsistencies in the reporting of metrics may lead to difficulties in interpreting and comparing model performances, impairing transparency and reproducibility in NLP research.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Measuring the performance of natural language processing models is challenging. Traditionally used metrics, such as BLEU and ROUGE, originally devised for machine translation and summarization, have been shown to suffer from low correlation with human judgment and a lack of transferability to other tasks and languages. In the past 15 years, a wide range of alternative metrics have been proposed. However, it is unclear to what extent this has had an impact on NLP benchmarking efforts. Here we provide the first large-scale cross-sectional analysis of metrics used for measuring performance in natural language processing. We curated, mapped and systematized more than 3500 machine learning model performance results from the open repository 'Papers with Code' to enable a global and comprehensive analysis. Our results suggest that the large majority of natural language processing metrics currently used have properties that may result in an inadequate reflection of a model's performance. Furthermore, we found that ambiguities and inconsistencies in the reporting of metrics may lead to difficulties in interpreting and comparing model performances, impairing transparency and reproducibility in NLP research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Benchmarking, i.e., the process of measuring and comparing model performance on a specific task or set of tasks, is an important driver of progress in natural language processing (NLP). Benchmark datasets are conceptualized as fixed sets of data that are manually, semi-automatically or automatically generated to form a representative sample for these specific tasks to be solved by a model. A model's performance on such a benchmark is then assessed based on a single or a small set of performance metrics. While this enables quick comparisons, it may entail the risk of conveying an incomplete picture of model performance since metrics inherently condense performance to a single number, omitting certain performance aspects completely or balancing trade-offs between different aspects (e.g. accuracy vs. fluency). Additionally, the capacity of metrics to capture performance may differ strongly between tasks and languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Capturing model performance in a single metric is an inherently difficult task, and this is further aggravated in the NLP domain by the structural and semantic complexity of human language. Traditionally used NLP metrics such as BLEU or ROUGE, originally devised for machine translation and summarization, were shown to suffer from low correlation with human judgment and poor transferability to other tasks (Lin, 2004; Liu and Liu, 2008; Ng and Abrecht, 2015; Novikova et al., 2017; Chen et al., 2019) . These fundamental problems are increasingly recognized by the NLP community-e.g., metric evaluation was even introduced as an independent task at the annual Machine Translation conference (Ma et al., 2019) .",
"cite_spans": [
{
"start": 408,
"end": 419,
"text": "(Lin, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 420,
"end": 438,
"text": "Liu and Liu, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 439,
"end": 460,
"text": "Ng and Abrecht, 2015;",
"ref_id": "BIBREF21"
},
{
"start": 461,
"end": 483,
"text": "Novikova et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 484,
"end": 502,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 693,
"end": 710,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past 15 years, a wide variety of superior metrics for evaluating models on NLP tasks have been proposed, including task-agnostic, AI-based metrics such as BERTscore (Zhang et al., 2019; Peters et al., 2018; Clark et al., 2019) . However, it is unknown to what extent this had an impact on metrics used in NLP research.",
"cite_spans": [
{
"start": 172,
"end": 192,
"text": "(Zhang et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 193,
"end": 213,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 214,
"end": 233,
"text": "Clark et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We aim to address this question by providing a global analysis of performance measures used in NLP benchmarking. Our contributions are threefold: (1) We curated, mapped and systematized performance metrics covering more than 3500 performance results from the open repository 'Papers with Code' to enable a global and comprehensive analysis. (2) Based on this dataset, we provide a cross-sectional analysis of the prevalence of performance measures in the subset of natural language processing benchmarks. (3) We describe inconsistencies and ambiguities in the reporting and usage of metrics, which may lead to difficulties in interpreting and comparing model performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our analyses are based on data available from Papers with Code (PWC), a large, web-based open platform that collects machine learning papers and summarizes evaluation results on benchmark datasets. PWC is built on automatically extracted data from arXiv submissions and manual crowdsourced annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The Intelligence Task Ontology (ITO) aims to provide a comprehensive map of artificial intelligence tasks using a richly structured hierarchy of processes, algorithms, data and performance metrics. ITO is based on data from PWC and the EDAM ontology. The development process of ITO is detailed in (Blagec et al., 2021) . We built on ITO for further curation and the creation of a hierarchical mapping of the raw performance metric data from PWC.",
"cite_spans": [
{
"start": 302,
"end": 323,
"text": "(Blagec et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The raw dataset exported from PWC contained a total number of 812 different strings representing metric names that appeared as distinct data property instances in ITO. These metric names were used by human annotators on the PWC platform to add results for a given model to the evaluation table of the relevant benchmark dataset's leaderboard on PWC. This list of raw metrics in the PWC database was manually curated into a canonical hierarchy by our team. This entailed some complexities and required extensive manual curation, which was conducted based on the mapping procedure described below. In many cases, the same metric was reported under multiple different synonyms and abbreviations. Furthermore, many results were reported in specialized sub-variants of established metrics. For each metric, a canonical property denoting its general form (e.g., 'BLEU score') was created, and synonyms and sub-variants were mapped to it. For example, the reported performance metrics 'BLEU-1', 'BLEU-2' and 'B-3' were made sub-metrics of 'BLEU score'. Throughout the paper, we will refer to canonical properties and mapped metrics as 'top-level metrics' and 'sub-metrics', respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical mapping and further curation of metric names",
"sec_num": "2.2"
},
{
"text": "In case a library that implemented a metric was used as the metric name (e.g., SacreBLEU, which is a reference implementation of the BLEU score available as a Python package), this property was made a sub-metric of the more general metric name, in this case 'BLEU score'. 271 entries from the original list could not be assigned a metric and were subsumed under a separate category 'Undetermined'. After this extensive manual curation, the list of metrics covered by our dataset was reduced from 812 to 187 distinct performance metrics. Where possible, we used the respective preferred Wikipedia article titles as canonical names for the metrics. For an excerpt of the resulting property hierarchy, see Figure A .1 in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 705,
"end": 713,
"text": "Figure A",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hierarchical mapping and further curation of metric names",
"sec_num": "2.2"
},
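The synonym and sub-variant mapping described in this section can be sketched as follows. This is a minimal illustration only: the canonical names and raw metric strings below are hypothetical examples, not the actual curated ITO/PWC hierarchy.

```python
# Hypothetical sketch of the synonym/sub-variant mapping described above.
# The canonical names and raw strings are illustrative, not the real data.
CANONICAL = {
    "BLEU score": ["BLEU", "BLEU-1", "BLEU-2", "B-3", "SacreBLEU"],
    "ROUGE": ["ROUGE-1", "ROUGE-2", "ROUGE-L", "Rouge-L"],
}

# Invert to a raw-name -> top-level-metric lookup (case-insensitive).
RAW_TO_TOP = {
    raw.lower(): top for top, raws in CANONICAL.items() for raw in raws
}

def map_metric(raw_name: str) -> str:
    """Map a raw annotated metric string to its top-level metric,
    or 'Undetermined' if no mapping is known."""
    return RAW_TO_TOP.get(raw_name.strip().lower(), "Undetermined")
```

Unmappable strings fall into the 'Undetermined' bucket, mirroring the 271 entries described above.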
{
"text": "Top-level metrics were further grouped into categories based on the task type they are usually applied to: Classification, Computer vision, Natural language processing, Regression, Game playing, Ranking, Clustering and 'Other'. We limited our main analysis to the category 'Natural language processing', which only contains metrics that are specific to NLP, such as ROUGE, BLEU or METEOR. We provide additional statistics on general classification metrics, such as Accuracy or F1 score, that are also often used in NLP benchmarks but are not specific to NLP tasks in Table B .1 in Appendix B.",
"cite_spans": [],
"ref_spans": [
{
"start": 567,
"end": 574,
"text": "Table B",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grouping of top-level metrics",
"sec_num": "2.3"
},
{
"text": "Analyses were performed based on the ITO release of 13.7.2020. Raw statistics were generated based on the ITO ontology using SPARQL queries and further processed and analyzed using Jupyter Notebooks and the Python 'pandas' library. Data, code and notebooks to generate these statistics are available on Github (see section 'Data and code availability').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.4"
},
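The kind of frequency statistics described above can be sketched with pandas as follows. The column names and toy rows are hypothetical; in the actual pipeline the raw statistics come from SPARQL queries against ITO.

```python
# Minimal sketch of the frequency analysis described above, using pandas.
# Column names and rows are hypothetical stand-ins for the ITO/PWC export.
import pandas as pd

results = pd.DataFrame({
    "benchmark": ["b1", "b1", "b2", "b3", "b3", "b3"],
    "top_level_metric": ["BLEU score", "ROUGE", "BLEU score",
                         "METEOR", "BLEU score", "ROUGE"],
})

# How often each top-level metric is reported across benchmark results.
metric_counts = results["top_level_metric"].value_counts()

# Number of distinct metrics reported per benchmark.
metrics_per_benchmark = results.groupby("benchmark")["top_level_metric"].nunique()
```

`value_counts` yields the prevalence ranking (Section 3.2), and `nunique` per benchmark yields the metrics-per-benchmark distribution (Appendix B).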
{
"text": "32,209 benchmark results across 2,298 distinct benchmark datasets reported in a total number of 3,867 papers were included in this analysis. Included papers consist of papers in the PWC database that were annotated with at least one performance metric as of July 2020. A single paper can thus contribute results to more than one benchmark and to one or more performance metrics. The publication period of the analyzed papers covers twenty years, from 2000 until 2020, with the majority having been published in the past ten years (see Figure B .2 in Appendix B).",
"cite_spans": [],
"ref_spans": [
{
"start": 535,
"end": 543,
"text": "Figure B",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data basis",
"sec_num": "3.1"
},
{
"text": "The subset of NLP benchmark datasets considered in our analysis included 4,812 benchmark results across 491 benchmark datasets (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data basis",
"sec_num": "3.1"
},
{
"text": "Table 2 lists the top 10 most frequently reported performance metrics. Considering sub-metrics, ROUGE-1, ROUGE-2 and ROUGE-L were the most commonly annotated ROUGE variants, and BLEU-4 and BLEU-1 were the most frequently annotated BLEU variants. For a large fraction of BLEU and ROUGE annotations, the sub-variant was not specified in the annotation. The BLEU score was used across a wide range of NLP benchmark tasks, such as machine translation, question answering, summarization and text generation. ROUGE metrics were mostly used for text generation, video captioning and summarization tasks, while METEOR was mainly used for image and video captioning, text generation and question answering tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Which performance metrics are most frequently reported in NLP benchmarking?",
"sec_num": "3.2"
},
{
"text": "The BLEU score was reported without any other metrics in 80.2% of the cases, whereas the ROUGE metrics more often appeared together with other metrics and stood alone in only nine out of 24 occurrences. METEOR was, in all cases, reported together with at least one other metric. Figure B .1 in Appendix B shows the co-occurrence matrix for the top 10 most frequently used NLP-specific metrics. BLEU was most often reported together with the ROUGE metrics (n=12) and METEOR (n=12). ROUGE likewise frequently appeared together with METEOR (n=10). We additionally provide statistics on the number of distinct metrics per benchmark for the total dataset in Figure B .3 in Appendix B.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 287,
"text": "Figure B",
"ref_id": null
},
{
"start": 653,
"end": 661,
"text": "Figure B",
"ref_id": null
}
],
"eq_spans": [],
"section": "Are metrics reported together with other metrics or do they stand alone?",
"sec_num": "3.3"
},
{
"text": "During the mapping process it became evident that performance metrics are often reported in an inconsistent or ambiguous manner. One example is the family of ROUGE metrics, which were originally proposed in different variants (e.g., ROUGE-1, ROUGE-L) but are often simply referred to as 'ROUGE'. Furthermore, ROUGE metrics were originally proposed in 'recall' and 'precision' sub-variants, such as 'ROUGE-1 precision' and 'ROUGE-1 recall'. Further, the harmonic mean of these two scores (ROUGE-1 F1 score) can be calculated. However, results are often reported as, e.g., 'ROUGE-1' without specifying the variant, which may lead to ambiguities when comparing results between different publications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistencies and ambiguities in the reporting of performance metrics",
"sec_num": "3.4"
},
{
"text": "NLP covers a wide range of different tasks and thus shows a large diversity of utilized metrics. We limited our analysis to more complex NLP tasks beyond simple classification, such as machine translation, question answering, and summarization. Metrics designed for these tasks generally aim to assess the similarity between a machine-generated text and a reference text or set of reference texts that are human-generated. We found that, despite their known shortcomings, the BLEU score and ROUGE metrics continue to be the most frequently used metrics for such tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Several weaknesses of BLEU have been pointed out by the research community, such as its sole focus on n-gram precision without considering recall and its reliance on exact n-gram matches. Zhang et al. have discussed properties of the BLEU score and NIST, a variant of the BLEU score that gives more weight to rarer n-grams than to more frequent ones, and came to the conclusion that neither of the two metrics necessarily shows high correlation with human judgments of machine translation quality (Doddington, 2002; Zhang et al., 2004) . The ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics family was the second most used NLP-specific metric in our dataset after the BLEU score. While originally proposed for summarization tasks, a subset of the ROUGE metrics (i.e., ROUGE-L, ROUGE-W and ROUGE-S) has also been shown to perform well in machine translation evaluation tasks (Lin, 2004; Och, 2004) . However, the ROUGE metrics set has also been shown to not adequately cover multi-document summarization, tasks that rely on extensive paraphrasing, such as abstractive summarization, and extractive summarization of multi-logue text types (i.e., transcripts with many different speakers), such as meeting transcripts (Lin, 2004; Liu and Liu, 2008; Ng and Abrecht, 2015) . Several new variants have been proposed in recent years, which incorporate word embeddings (ROUGE-WE), graph-based approaches (ROUGE-G), or additional lexical features (ROUGE 2.0) (Ng and Abrecht, 2015; ShafieiBavani et al., 2018; Ganesan, 2018) . ROUGE-1, ROUGE-2 and ROUGE-L were the most common ROUGE metrics in our analyzed dataset, while newer proposed ROUGE variants were not represented.",
"cite_spans": [
{
"start": 498,
"end": 516,
"text": "(Doddington, 2002;",
"ref_id": "BIBREF6"
},
{
"start": 517,
"end": 536,
"text": "Zhang et al., 2004)",
"ref_id": "BIBREF30"
},
{
"start": 892,
"end": 903,
"text": "(Lin, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 904,
"end": 914,
"text": "Och, 2004)",
"ref_id": "BIBREF23"
},
{
"start": 1233,
"end": 1244,
"text": "(Lin, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 1245,
"end": 1263,
"text": "Liu and Liu, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 1264,
"end": 1285,
"text": "Ng and Abrecht, 2015)",
"ref_id": "BIBREF21"
},
{
"start": 1508,
"end": 1530,
"text": "(Ng and Abrecht, 2015;",
"ref_id": "BIBREF21"
},
{
"start": 1531,
"end": 1558,
"text": "ShafieiBavani et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 1559,
"end": 1573,
"text": "Ganesan, 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
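The n-gram precision that the BLEU critique above refers to can be illustrated with a toy sketch. Real BLEU additionally combines several n-gram orders geometrically and applies a brevity penalty; this sketch computes only the clipped ("modified") precision for a single n-gram order.

```python
# Toy illustration of BLEU's clipped (modified) n-gram precision, the
# quantity criticized above for ignoring recall. Not a full BLEU
# implementation: the geometric mean over n-gram orders and the brevity
# penalty are omitted.
from collections import Counter

def clipped_ngram_precision(candidate, reference, n=1):
    cand = Counter(tuple(candidate[i:i+n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i+n]) for i in range(len(reference) - n + 1))
    # Each candidate n-gram counts only up to its frequency in the reference.
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0
```

The clipping is what prevents a degenerate candidate like "the the the" from scoring a perfect unigram precision against a reference containing a single "the".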
{
"text": "METEOR (Metric for Evaluation of Translation with Explicit Ordering) was proposed in 2005 to address weaknesses of previous metrics (Banerjee and Lavie, 2005) . METEOR is an F-measure derived metric that has repeatedly been shown to yield higher correlation with human judgment across several tasks as compared to BLEU and NIST (Lavie et al., 2004; Graham et al., 2015; Chen et al., 2019) . Matchings are scored based on their unigram precision, unigram recall (given higher weight than precision), and a comparison of the word ordering of the translation compared to the reference text. This is in contrast to the BLEU score, which does not take into account n-gram recall. Furthermore, while BLEU only considers exact word matches in its scoring, METEOR also takes into account words that are morphologically related or synonymous to each other by using stemming, lexical resources and a paraphrase table. Additionally, METEOR was designed to provide informative scores at sentence-level and not only at corpus-level. An adapted version of METEOR, called METEOR++ 2.0, was proposed in 2019 (Guo and Hu, 2019) . This variant extends METEOR's paraphrasing table with a large external paraphrase database and has been shown to correlate better with human judgement across many machine translation tasks.",
"cite_spans": [
{
"start": 132,
"end": 158,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF0"
},
{
"start": 328,
"end": 348,
"text": "(Lavie et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 349,
"end": 369,
"text": "Graham et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 370,
"end": 388,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1093,
"end": 1111,
"text": "(Guo and Hu, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Compared to BLEU and ROUGE, METEOR was rarely used as a performance metric (8%) across the NLP benchmark datasets included in our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "The GLEU score was proposed as an evaluation metric for NLP applications, such as machine translation, summarization and natural language generation, in 2007 (Mutton et al., 2007) . It is a Support Vector Machine-based metric that uses a combination of individual parser-derived metrics as features. GLEU aims to assess how well the generated text conforms to 'normal' use of human language, i.e., its 'fluency'. This is in contrast to other commonly used metrics that focus on how well a generated text reflects a reference text or vice versa. GLEU was reported only in 1.8% of NLP benchmark datasets.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Mutton et al., 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Additional alternative metrics that have been proposed by the NLP research community but do not appear as performance metrics in the analyzed dataset include Translation error rate (TER), TER-Plus, \"Length Penalty, Precision, n-gram Position difference Penalty and Recall\" (LEPOR), Sentence Mover's Similarity, and BERTScore. Figure 1 depicts the timeline of introduction of NLP metrics and their original application.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "TER was proposed as a metric for evaluating machine translation quality. TER measures quality by the number of edits that are needed to change the machine-generated text into the reference text(s), with lower TER scores indicating higher translation quality (Snover et al., 2006) . TER considers five edit operations to change the output into the reference text: matches, insertions, deletions, substitutions and shifts. An adaptation of TER, TER-Plus, was proposed in 2009. TER-Plus extends TER with three additional edit operations, i.e., stem matches, synonym matches and phrase substitution (Snover et al., 2009) . TER-Plus was shown to have higher correlations with human judgements in machine translation tasks than BLEU, METEOR and TER (Snover et al., 2009) . LEPOR and its variants hLEPOR and nLEPOR were proposed as language-independent metrics that aim to address the issue that several previous metrics tend to perform worse on languages other than those they were originally designed for. LEPOR has been shown to yield higher correlations with human judgement than METEOR, BLEU, or TER (Han et al., 2012) .",
"cite_spans": [
{
"start": 258,
"end": 279,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 595,
"end": 616,
"text": "(Snover et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 744,
"end": 765,
"text": "(Snover et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 1081,
"end": 1112,
"text": "BLEU, or TER (Han et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
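TER's edit-counting idea can be sketched as a word-level edit distance normalized by reference length. This is a simplification for illustration only: real TER additionally searches for block shifts (and TER-Plus for stem/synonym/paraphrase matches), which this sketch omits.

```python
# Simplified sketch of TER's edit-counting idea: word-level edit distance
# (insertions, deletions, substitutions) divided by reference length.
# Real TER also allows block shifts, which are omitted here.
def simple_ter(hypothesis, reference):
    m, n = len(hypothesis), len(reference)
    # d[i][j] = minimum edits to turn hypothesis[:i] into reference[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hypothesis[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n] / n if n else 0.0
```

As in TER proper, a score of 0 means the hypothesis already matches the reference, and lower is better.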
{
"text": "Sentence Mover's Similarity (SMS) is a metric based on ELMo word embeddings and Earth mover's distance, which measures the minimum cost of turning a set of machine generated sentences into a reference text's sentences (Peters et al., 2018; Clark et al., 2019) . It was proposed in 2019 and was shown to yield better results as compared to ROUGE-L in terms of correlation with human judgment in summarization tasks.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 240,
"end": 259,
"text": "Clark et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "BERTScore was proposed as a task-agnostic performance metric in 2019 (Zhang et al., 2019) . It computes the similarity of two sentences based on the sum of cosine similarities between their tokens' contextual embeddings (BERT), and optionally weights them by inverse document frequency scores (Devlin et al., 2018) . BERTScore was shown to outperform established metrics, such as BLEU, METEOR and ROUGE-L in machine translation and image captioning tasks. It was also more robust than other metrics when applied to an adversarial paraphrase detection task. However, the authors also state that BERTScore's configuration should be adapted to task-specific needs since no single configuration consistently outperforms all others across tasks.",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 292,
"end": 313,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
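The greedy cosine matching underlying BERTScore can be illustrated with a toy sketch. The 2-d vectors stand in for real BERT contextual embeddings, and the optional idf weighting is omitted; this is an illustration of the matching scheme, not the reference implementation.

```python
# Toy sketch of BERTScore-style greedy cosine matching. Made-up 2-d
# vectors replace real BERT token embeddings; idf weighting is omitted.
import numpy as np

def bertscore_f1(cand_emb, ref_emb):
    # Normalize rows, then compute pairwise cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T  # sim[i, j] = cosine(candidate token i, reference token j)
    precision = sim.max(axis=1).mean()  # each candidate token -> best ref match
    recall = sim.max(axis=0).mean()     # each reference token -> best cand match
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Greedily matching each token to its most similar counterpart, rather than requiring exact surface matches, is what lets embedding-based metrics credit paraphrases that BLEU and ROUGE miss.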
{
"text": "Difficulties associated with automatic evaluation of machine generated texts include poor correlation with human judgement, language bias (i.e., the metric shows better correlation with human judgment for certain languages than others), and worse suitability for language generation tasks other than the one it was proposed for (Novikova et al., 2017) . In fact, most NLP metrics have originally been conceptualized for a very specific application, such as BLEU and METEOR for machine translation, or ROUGE for the evaluation of machine generated text summaries, but have since then been introduced as metrics for several other NLP tasks, such as question answering, where all three of the above-mentioned scores are regularly used. Non-transferability to other tasks has recently been shown by Chen et al., who compared several metrics (i.e., ROUGE-L, METEOR, BERTScore, BLEU-1, BLEU-4, Conditional BERTScore and Sentence Mover's Similarity) for evaluating generative Question-Answering (QA) tasks based on three QA datasets. They recommend that, of the evaluated metrics, METEOR should preferably be used, and point out that metrics originally introduced for evaluating machine translation and summarization do not necessarily perform well in the evaluation of question answering tasks (Chen et al., 2019) .",
"cite_spans": [
{
"start": 328,
"end": 351,
"text": "(Novikova et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1293,
"end": 1312,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Many NLP metrics use very specific sets of features, such as specific word embeddings or linguistic elements, which may complicate comparability and replicability. To address the issue of replicability, reference open source implementations have been published for some metrics, such as ROUGE, sentBleu-moses as part of the Moses toolkit, and sacreBLEU (Lin, 2004) .",
"cite_spans": [
{
"start": 353,
"end": 364,
"text": "(Lin, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In summary, we found that the large majority of metrics currently used to report NLP research results have properties that may result in an inadequate reflection of a model's performance. While several alternative metrics that address problematic properties have been proposed, they are currently rarely used in NLP benchmarking. Our findings are in line with a recent, focused meta-analysis on machine translation conducted by Marie et al., who found that 82.1% of papers report BLEU as the only performance metric despite its well-known shortcomings (Marie et al., 2021) . Our analysis extends these findings by providing a global overview of metrics used in the entire NLP domain.",
"cite_spans": [
{
"start": 551,
"end": 571,
"text": "(Marie et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In the following, we provide recommendations on the reporting of performance metrics and discuss potential future avenues for improving measuring performance using benchmarks in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations for reporting performance results and future considerations",
"sec_num": "4.1"
},
{
"text": "Performance metrics should be reported in a clear and unambiguous way to improve transparency, avoid misinterpretation and enable reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "\u2022 For performance metrics that have various sub-variants, it should be clearly stated which variant is reported (e.g., ROUGE-1 F1 score instead of ROUGE-1). If multiple metrics are averaged, it should be stated what kind of mean is used (e.g., arithmetic mean, geometric mean, harmonic mean) if this is not clear from the definition of the metric itself (e.g., F1 score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "\u2022 If a metric is used that allows for adaptations, such as weighting, these should be explicitly stated and be marked clearly in the result tables. Ideally, when using abbreviations, the variant should be included in the abbreviation or e.g., marked by a subscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "\u2022 To increase transparency and allow reproducibility, the formula for calculating the metric should be included in the manuscript or in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "\u2022 For more complex metrics, if available, a reference implementation should be used and cited. If such a reference implementation is not available, or a custom implementation or adaptation is used, the code should be made available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "In the future, a taxonomic hierarchy of performance metrics that captures definitions, systematizes metrics together with all existing variants, and lists recommended applications based on comparative evaluation studies could help address these issues. In this work, we have created a starting point for such a taxonomy using a bottom-up approach as part of ITO (Blagec et al., 2021) .",
"cite_spans": [
{
"start": 338,
"end": 359,
"text": "(Blagec et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing transparency and consistency in the reporting of performance metrics",
"sec_num": "4.1.1"
},
{
"text": "Developing metrics for NLP tasks is an ongoing research area, new metrics outperforming previous ones are proposed on a regular basis, and suitability is strongly task-and dataset-dependent, therefore general advice on which metric to use cannot be given. Instead, it should be critically evaluated whether a metric is suitable for a given dataset, task or language, especially if the metric was originally proposed for a different application. Comparative evaluation studies, such as in (Chen et al., 2019) can provide an indication for the suitability.",
"cite_spans": [
{
"start": 488,
"end": 507,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximizing the informative value in the reporting of performance results",
"sec_num": "4.1.2"
},
{
"text": "If a metric is used that has been shown to have limited informative value (in general, or in specific use cases) and no alternative is available, the limitations and their relevance for the task and/or dataset should be discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximizing the informative value in the reporting of performance results",
"sec_num": "4.1.2"
},
{
"text": "If more than one suitable metric is available, consider reporting all of them, especially if there is a discrepancy in performance results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximizing the informative value in the reporting of performance results",
"sec_num": "4.1.2"
},
{
"text": "Even if a benchmark is historically evaluated based on a certain metric, consider additionally reporting newer proposed metrics if they are suitable and have been evaluated to be useful for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximizing the informative value in the reporting of performance results",
"sec_num": "4.1.2"
},
{
"text": "Comparative evaluation studies investigating performance metrics, their properties and their correlation across multiple tasks, datasets and languages could help to better understand metrics and their suitability for different applications. While studies focusing on a small set of metrics exist, such as in (Chen et al., 2019) , larger studies are, to the best of our knowledge, yet to be undertaken. Recent work introduced the notions of dynamic benchmarks that allow users to weigh different performance metrics of interest. An example of this is 'Dynascore' which allows customizable aggregation of performance across different aspects including non-traditionally assessed performance dimensions, such as memory, robustness, and \"fairness (Ma et al., 2021) . Further, bidimensional leaderboards based on linear ensembles of metrics have been proposed (Gehrmann et al., 2021; Ruder, 2021; Kasai et al., 2021) . These approaches could further improve the practical utility of benchmark results.",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 743,
"end": 760,
"text": "(Ma et al., 2021)",
"ref_id": null
},
{
"start": 855,
"end": 878,
"text": "(Gehrmann et al., 2021;",
"ref_id": null
},
{
"start": 879,
"end": 891,
"text": "Ruder, 2021;",
"ref_id": null
},
{
"start": 892,
"end": 911,
"text": "Kasai et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future considerations on performance metrics in the context of benchmarking",
"sec_num": "4.2"
},
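The idea behind such dynamic, user-weighted leaderboards can be sketched as a weighted aggregation over normalized per-dimension scores. The toy example below is purely illustrative (hypothetical model names, scores and weights; Dynascore's actual aggregation is more involved), but it shows how the ranking of two models can flip depending on the user's weighting:

```python
def aggregate(scores, weights):
    """Weighted mean of per-dimension scores; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(scores[dim] * w / total for dim, w in weights.items())

# Hypothetical per-dimension scores for two models (all scaled to [0, 1])
model_a = {"accuracy": 0.90, "robustness": 0.50, "memory": 0.50}
model_b = {"accuracy": 0.80, "robustness": 0.90, "memory": 0.90}

# A user who cares mostly about accuracy...
acc_heavy = {"accuracy": 10, "robustness": 1, "memory": 1}
# ...versus one who weights all dimensions equally
uniform = {"accuracy": 1, "robustness": 1, "memory": 1}

print(aggregate(model_a, acc_heavy) > aggregate(model_b, acc_heavy))  # model A wins
print(aggregate(model_b, uniform) > aggregate(model_a, uniform))      # model B wins
```

A single leaderboard number thus hides a weighting decision that users of dynamic benchmarks can make explicit.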
{
"text": "Our analyses are based on ITO v0.21 which encompasses data until mid 2020. To ensure that our results are still relevant given the fast pace of research, we checked whether considering data from the recently released ITO v1.01 which includes data until mid 2021 leads to any significant timedependent changes of our results 3 . Including this more recent data did, however, not alter the described usage patterns of NLP metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "The results presented in this paper are based on a large set of machine learning papers available from the PWC database, which is the largest annotated dataset of benchmark results currently available. The database comprises both preprints of papers published on arXiv and papers published in peerreviewed journals. While it could be argued that arXiv preprints are not representative of scientific journal articles, it has recently been shown that a large fraction of arXiv preprints (77%) are subsequently published in peer-reviewed venues (Lin et al., 2020) .",
"cite_spans": [
{
"start": 542,
"end": 560,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "The reporting of metrics was partly inconsistent and partly unspecific, which may lead to ambiguities when comparing model performances, thus negatively impacting the transparency and reproducibility of NLP research. Large comparative evaluation studies of different NLP-specific metrics across multiple benchmarking tasks are needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The OWL (Web Ontology Language) file of the ITO model is made available on Github 4 and Bio-Portal 5 . The ontology file is distributed under a CC-BY-SA license. ITO includes data from the Papers With Code project 6 . Papers With Code is licensed under the CC-BY-SA license. Data from Papers With Code are partially altered (manual curation to improve ontological structure and data quality). ITO includes data from the EDAM ontology. The EDAM ontology is licensed under a CC-BY-SA license.",
"cite_spans": [
{
"start": 99,
"end": 100,
"text": "5",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and code availability",
"sec_num": null
},
{
"text": "Notebooks containing the queries and code for data analysis are also accessible via GitHub. Table B .1: Top 10 reported simple classification metrics and percent of benchmark datasets that use the respective metric. R@k: Recall at k, AUC: Area under the curve, IoU: Intersection over union, P@k: Precision at k. AUC contains both ROC-AUC and PR-AUC. Figure B .3: Count of distinct metrics per benchmark dataset when considering only top-level metrics as distinct metrics (blue bars), and when considering sub-metrics as distinct metrics (grey bars). Median number of distinct metrics per benchmark: 1. Data is shown for the complete dataset (n=2,298).",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table B",
"ref_id": null
},
{
"start": 350,
"end": 358,
"text": "Figure B",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data and code availability",
"sec_num": null
},
{
"text": "https://github.com/OpenBioLink/ITO 2 http://edamontology.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Data curation in ITO v1.01 is still incomplete. Therefore, results are based on the fully curated ITO v0.21.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the team from 'Papers With Code' for making their database available and all annotators who contributed to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "METEOR: An automatic metric for MT evaluation with",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- 4 https://github.com/OpenBioLink/ITO",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "58 proved correlation with human judgments. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "https://bioportal.bioontology.org/ontologies/ITO 6 https://paperswithcode.com/ 58 proved correlation with human judgments. Proceed- ings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks",
"authors": [
{
"first": "Kathrin",
"middle": [],
"last": "Blagec",
"suffix": ""
},
{
"first": "Adriano",
"middle": [],
"last": "Barbosa-Silva",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Samwald",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathrin Blagec, Adriano Barbosa-Silva, Simon Ott, and Matthias Samwald. 2021. A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating question answering evaluation",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
"volume": "",
"issue": "",
"pages": "119--124",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5817"
]
},
"num": null,
"urls": [],
"raw_text": "Anthony Chen, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Evaluating question answering evaluation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 119- 124, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentence mover's similarity: Automatic evaluation for multi-sentence texts",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2748--2760",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. Sentence mover's similarity: Automatic evalu- ation for multi-sentence texts. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 2748-2760, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics |proceedings of the second international conference on human language technology research",
"authors": [
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/1289189.1289273"
]
},
"num": null,
"urls": [],
"raw_text": "George Doddington. 2002. Automatic evaluation of ma- chine translation quality using n-gram co-occurrence statistics |proceedings of the second international con- ference on human language technology research.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ROUGE 2.0: Updated and improved measures for evaluation of summarization tasks",
"authors": [
{
"first": "Kavita",
"middle": [],
"last": "Ganesan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kavita Ganesan. 2018. ROUGE 2.0: Updated and im- proved measures for evaluation of summarization tasks. arXiv.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Accurate evaluation of segment-level machine translation metrics",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1183--1191",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1124"
]
},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1183-1191, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Meteor++ 2.0: Adopt syntactic level paraphrase knowledge into machine translation evaluation",
"authors": [
{
"first": "Yinuo",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "501--506",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5357"
]
},
"num": null,
"urls": [],
"raw_text": "Yinuo Guo and Junfeng Hu. 2019. Meteor++ 2.0: Adopt syntactic level paraphrase knowledge into ma- chine translation evaluation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 501-506, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "LEPOR: A robust evaluation metric for machine translation with augmented factors",
"authors": [
{
"first": "L",
"middle": [
"F"
],
"last": "Aaron",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Han",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Wong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chao",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron L. F. Han, Derek F. Wong, and Lidia S. Chao. 2012. LEPOR: A robust evaluation metric for ma- chine translation with augmented factors. Proceed- ings of COLING 2012.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bidimensional leaderboards: Generate and evaluate language hand in hand",
"authors": [
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Lavinia",
"middle": [],
"last": "Ronan Le Bras",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Dunagan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Morrison",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/arxiv.2112.04139"
]
},
"num": null,
"urls": [],
"raw_text": "Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fab- bri, Yejin Choi, and Noah A. Smith. 2021. Bidimen- sional leaderboards: Generate and evaluate language hand in hand. arXiv.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The significance of recall in automatic metrics for MT evaluation",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Shyamsundar",
"middle": [],
"last": "Jayaraman",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frederking",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [
"B"
],
"last": "Taylor",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hutchison",
"suffix": ""
},
{
"first": "Takeo",
"middle": [],
"last": "Kanade",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Kittler",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"M"
],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2004,
"venue": "Machine translation: from real users to research",
"volume": "3265",
"issue": "",
"pages": "134--143",
"other_ids": {
"DOI": [
"10.1007/978-3-540-30194-3_16"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Lavie, Kenji Sagae, and Shyamsundar Jayaraman. 2004. The significance of recall in automatic met- rics for MT evaluation. In Robert E. Frederking, Kathryn B. Taylor, David Hutchison, Takeo Kanade, Josef Kittler, Jon M. Kleinberg, Friedemann Mat- tern, John C. Mitchell, Moni Naor, Oscar Nierstrasz, C. Pandu Rangan, Bernhard Steffen, Madhu Sudan, Demetri Terzopoulos, Dough Tygar, Moshe Y. Vardi, and Gerhard Weikum, editors, Machine translation: from real users to research, volume 3265 of Lecture notes in computer science, pages 134-143. Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How many preprints have actually been printed and why: a case study of computer science preprints on arXiv",
"authors": [
{
"first": "Jialiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2020,
"venue": "Scientometrics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s11192-020-03430-8"
]
},
"num": null,
"urls": [],
"raw_text": "Jialiang Lin, Yao Yu, Yu Zhou, Zhiyang Zhou, and Xi- aodong Shi. 2020. How many preprints have actually been printed and why: a case study of computer sci- ence preprints on arXiv. Scientometrics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Correlation between ROUGE and human evaluation of extractive meeting summaries",
"authors": [
{
"first": "Feifan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies Short Papers -HLT '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1557690.1557747"
]
},
"num": null,
"urls": [],
"raw_text": "Feifan Liu and Yang Liu. 2008. Correlation between ROUGE and human evaluation of extractive meeting summaries. In Proceedings of the 46th Annual Meet- ing of the Association for Computational Linguistics on Human Language Technologies Short Papers - HLT '08, page 201, Morristown, NJ, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "62--90",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5302"
]
},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT sys- tems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adina Williams, and Douwe Kiela. 2021. Dynaboard: An evaluation-as-a-service platform for holistic nextgeneration benchmarking",
"authors": [
{
"first": "Zhiyi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Somya",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/arxiv.2106.06052"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Ad- ina Williams, and Douwe Kiela. 2021. Dynaboard: An evaluation-as-a-service platform for holistic next- generation benchmarking. arXiv.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scientific credibility of machine translation research: A meta-evaluation of 769 papers",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "7297--7306",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.566"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie, Atsushi Fujita, and Raphael Rubino. 2021. Scientific credibility of machine translation research: A meta-evaluation of 769 papers. In Pro- ceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7297- 7306, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GLEU: Automatic evaluation of sentence-level fluency |semantic scholar",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mutton",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Mutton, Mark Dras, Stephen Wan, and Robert Dale. 2007. [PDF] GLEU: Automatic evaluation of sentence-level fluency |semantic scholar. undefined.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Better summarization evaluation with word embeddings for ROUGE",
"authors": [
{
"first": "Jun-Ping",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Viktoria",
"middle": [],
"last": "Abrecht",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1925--1930",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1222"
]
},
"num": null,
"urls": [],
"raw_text": "Jun-Ping Ng and Viktoria Abrecht. 2015. Better sum- marization evaluation with word embeddings for ROUGE. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925-1930, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Why we need new evaluation metrics for NLG",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"Cercas"
],
"last": "Curry",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2241--2252",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1238"
]
},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cer- cas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2241-2252, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "606--613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2004. Automatic evaluation of ma- chine translation quality using longest common sub- sequence and skip-bigram statistics. Proceedings of the 42nd annual meeting of the association for computational linguistics, pages 606-613.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 2227-2237, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "2021. Challenges and Opportunities in NLP Benchmarking",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2021. Challenges and Opportuni- ties in NLP Benchmarking. http://ruder.io/ nlp-benchmarking.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A graph-theoretic summary evaluation for ROUGE",
"authors": [
{
"first": "Elaheh",
"middle": [],
"last": "Shafieibavani",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "762--767",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1085"
]
},
"num": null,
"urls": [],
"raw_text": "Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, and Fang Chen. 2018. A graph-theoretic sum- mary evaluation for ROUGE. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 762-767, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human anno- tation. In Proceedings of Association for Machine Translation in the Americas, pages 223-231.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "TER-plus: paraphrase, semantic, and alignment enhancements to translation edit rate",
"authors": [
{
"first": "G",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Translation",
"volume": "23",
"issue": "2-3",
"pages": "117--127",
"other_ids": {
"DOI": [
"10.1007/s10590-009-9062-9"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew G. Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009. TER-plus: paraphrase, se- mantic, and alignment enhancements to translation edit rate. Machine Translation, 23(2-3):117-127.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BERTScore: Evaluating text generation with BERT. arXiv",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "terpreting BLEU/NIST scores: How much improvement do we need to have a better system? Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. In- terpreting BLEU/NIST scores: How much improve- ment do we need to have a better system? Pro- ceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Timeline of the introduction of NLP metrics and their original application. SMS: Sentence Mover's Similarity.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Property hierarchy after manual curation of the raw list of metrics. The left side of the image shows an excerpt of the list of top-level performance metrics; the right side shows an excerpt of the list of submetrics for the top-level metric 'Accuracy'.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "B.1: Co-occurrence matrix for the top 10 most frequently used NLP metrics (y-axis). Only metrics that were reported at least one time together with either one of the selected metrics are shown (x-axis).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Number of publications covered by the total dataset per year. The y-axis is scaled logarithmically.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "General descriptives of the analyzed dataset (as of July 2020).",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "Top 10 reported NLP metrics and percent of NLP benchmark datasets (n=491) that use the respective metric. BLEU: Bilingual Evaluation Understudy, CIDEr: Consensus-based Image Description Evaluation, ROUGE: Recall-Oriented Understudy for Gisting Evaluation, METEOR: Metric for Evaluation of Translation with Explicit ORdering.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}