{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:39.917557Z"
},
"title": "The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": "gehrmann@google.com"
},
{
"first": "Tosin",
"middle": [],
"last": "Adewumi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Karmanya",
"middle": [],
"last": "Aggarwal",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pawan",
"middle": [
"Sasanka"
],
"last": "Ammanamanchi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Aremu",
"middle": [],
"last": "Anuoluwapo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lagos",
"location": {}
},
"email": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
},
{
"first": "Khyathi",
"middle": [
"Raghavi"
],
"last": "Chandu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Miruna",
"middle": [],
"last": "Clinciu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Kaustubh",
"middle": [
"D"
],
"last": "Dhole",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amelia R&D",
"location": {
"settlement": "New York"
}
},
"email": ""
},
{
"first": "Wanyu",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {}
},
"email": ""
},
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University",
"location": {}
},
"email": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {
"settlement": "Prague"
}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Emezue",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Garbacea",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tatsunori",
"middle": [],
"last": "Hashimoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Jhamtani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {}
},
"email": ""
},
{
"first": "Shailza",
"middle": [],
"last": "Jolly",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Ladhak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
},
{
"first": "Aman",
"middle": [],
"last": "Madaan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Mounica",
"middle": [],
"last": "Maddela",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Tech",
"location": {}
},
"email": ""
},
{
"first": "Khyati",
"middle": [],
"last": "Mahajan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina at Charlotte",
"location": {}
},
"email": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bodhisattwa",
"middle": [
"Prasad"
],
"last": "Majumder",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pedro",
"middle": [
"Henrique"
],
"last": "Martins",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Angelina",
"middle": [],
"last": "McMillan-Major",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Van Miltenburg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tilburg University",
"location": {}
},
"email": ""
},
{
"first": "Moin",
"middle": [],
"last": "Nadeem",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Nikolaev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Rubungo",
"middle": [
"Andre"
],
"last": "Niyongabo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Salomey",
"middle": [],
"last": "Osei",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
},
{
"first": "Niranjan",
"middle": [
"Ramesh"
],
"last": "Rao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Raunak",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Juan",
"middle": [
"Diego"
],
"last": "Rodriguez",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Santhanam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina at Charlotte",
"location": {}
},
"email": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {}
},
"email": ""
},
{
"first": "Samira",
"middle": [],
"last": "Shaikh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina at Charlotte",
"location": {}
},
"email": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marco",
"middle": [
"Antonio"
],
"last": "Sobrevilla Cabezudo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Subramani",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Tech",
"location": {}
},
"email": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Tech",
"location": {}
},
"email": ""
},
{
"first": "Akhila",
"middle": [],
"last": "Yerukola",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language generation is the task of automatically generating understandable texts, typically using a non-linguistic or textual representation of information as input (Reiter and Dale, 2000). These texts aim to fulfill an underlying communicative goal (e.g., to produce a summary of an article) while remaining faithful to the input information, fluent, grammatical, and natural-looking. An NLG system needs to be robust to shifts in the data distribution and be able to produce text in many different languages. Finally, it is often desired that repeated interactions with the model produce diverse outputs, for example, to explain concepts in multiple ways or to become a more interesting conversational agent. These optimization objectives can often be conflicting (Hashimoto et al., 2019) and, as a result, evaluations that focus only on a single aspect may fail to recognize the drawbacks of a particular method. To demonstrate this trade-off, consider an improvement on the CNN-DM summarization dataset (Hermann et al., 2015; Nallapati et al., 2016) measured by the ROUGE-L metric (Lin, 2004). Since ROUGE only tests the extent to which a generated summary has a lexical overlap with a reference summary, it can erroneously produce high scores for fluent, yet meaningless and unfaithful outputs as long as many of the same words are used (Maynez et al., 2020; Gabriel et al., 2020). Moreover, ROUGE tends to favor systems that produce longer summaries (Sun et al., 2019). It is thus crucial to carefully assess the progress of NLG toward all of its goals at the same time in ways that evolve alongside the models. This is currently not the case; new models are evaluated on different datasets, most of which focus only on the English language (Bender, 2019), using these flawed metrics.\nMoreover, while human evaluations of generated texts can provide complementary insights to automatic evaluation (Manning et al., 2020), they can also lead to contradictory results since studies often omit crucial replication details and assume different definitions of the measured quantities (Howcroft et al., 2020).",
"cite_spans": [
{
"start": 171,
"end": 194,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF101"
},
{
"start": 773,
"end": 797,
"text": "(Hashimoto et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 1014,
"end": 1036,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF40"
},
{
"start": 1037,
"end": 1060,
"text": "Nallapati et al., 2016)",
"ref_id": "BIBREF84"
},
{
"start": 1093,
"end": 1104,
"text": "(Lin, 2004)",
"ref_id": "BIBREF62"
},
{
"start": 1351,
"end": 1372,
"text": "(Maynez et al., 2020;",
"ref_id": "BIBREF75"
},
{
"start": 1373,
"end": 1394,
"text": "Gabriel et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 1466,
"end": 1484,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF113"
},
{
"start": 1758,
"end": 1772,
"text": "(Bender, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1919,
"end": 1941,
"text": "(Manning et al., 2020)",
"ref_id": "BIBREF72"
},
{
"start": 2099,
"end": 2122,
"text": "(Howcroft et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a living benchmark called GEM (Generation, Evaluation, and Metrics) that aims to enable research on a wide range of NLG challenges. To avoid the fallacy of encouraging hill climbing on a leaderboard (Linzen, 2020), GEM focuses on an in-depth evaluation of model outputs across human and automatic evaluation that aims to uncover shortcomings and opportunities for progress. As datasets, metrics, and models improve, the benchmark environment will improve as well, replacing \"solved\" tasks with more challenging ones, incorporating newly developed metrics, and addressing discovered flaws in the experimental setup, as demonstrated in Figure 1. Making all model outputs available under an open-source license will support evaluation research and integrating new metrics will, in turn, help their adoption and increase the robustness of model evaluations.",
"cite_spans": [],
"ref_spans": [
{
"start": 645,
"end": 653,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The initial set of eleven included datasets is presented in Table 1. They measure specific generation challenges, such as content selection and planning (What to say?) and surface realization (How to say it?) (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Models need to be capable of paraphrasing, simplification, and other skills. In addition to those challenges, GEM datasets also differ in their communicative goals, languages, the noisiness of data, and resource availability, to evaluate the consistency of evaluation schemes. About half of the datasets have multiple references and more than half were post-processed to improve data quality. The sizes range from 5k to 500k data points. GEM features 18 languages across all tasks and two of the datasets do not include English at all. To be able to properly assess the performance of models in a way robust to the shortcuts a model can take, we additionally introduce ten types of challenging test sets that probe for specific modeling aspects (Ribeiro et al., 2020). To ensure that research with GEM is conducted responsibly, all the datasets are documented in an NLG-specific version of data cards (Bender and Friedman, 2018; Gebru et al., 2018) that we developed and for which we release a template and guide. Moreover, all submitted models will have an associated data card (Mitchell et al., 2019).\nFigure 1: The opportunities of living benchmarks and pitfalls of evaluation. As models improve, we need consistent evaluations such that models can be compared to each other. This can only happen if we develop robust human evaluation standards and improve our automated metrics. Otherwise, results are challenging to interpret and compare to each other. Finally, as models improve and metrics saturate, we need to evaluate them on more challenging datasets instead of continuing to move sideways on old ones. GEM aims to provide this environment for natural language generation.",
"cite_spans": [
{
"start": 219,
"end": 242,
"text": "(Reiter and Dale, 2000;",
"ref_id": "BIBREF101"
},
{
"start": 243,
"end": 266,
"text": "Gatt and Krahmer, 2018)",
"ref_id": "BIBREF36"
},
{
"start": 601,
"end": 610,
"text": "Figure 1:",
"ref_id": null
},
{
"start": 1587,
"end": 1608,
"text": "Ribeiro et al., 2020)",
"ref_id": "BIBREF102"
},
{
"start": 1743,
"end": 1770,
"text": "(Bender and Friedman, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 1771,
"end": 1790,
"text": "Gebru et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 1916,
"end": 1939,
"text": "(Mitchell et al., 2019)",
"ref_id": "BIBREF83"
}
],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes the selection and construction of the GEM datasets in support of the announcement of the shared task at ACL 2021. More detailed information can be found on our website https://gem-benchmark.com/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we summarize common criticisms of benchmarks in NLP, discuss how they apply to NLG, and describe how we plan to address them. Then, we describe opportunities that GEM can provide. NLP benchmarks such as GLUE (Wang et al., 2019b) are common for natural language understanding",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF117"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "Dataset | Communicative Goal | Language(s) | Size | Input Type\nCommonGEN | Produce a likely sentence which mentions all of the source concepts. | en | 67k | Concept Set\nCzech Restaurant (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2019) | Produce a text expressing the given intent and covering the specified attributes. | cs | 5k | Meaning Representation\nDART (Radev et al., 2020) | Describe cells in a table, covering all information provided in triples. | en | 82k | Triple Set\nE2E clean | Describe a restaurant, given all and only the attributes specified on the input. | en | 42k | Meaning Representation\nMLSum | Summarize relevant points within a news article. | *de/es | *520k | Articles\nSchema-Guided Dialog | Provide the surface realization for a virtual assistant. | en | *165k | Dialog Act\nToTTo | Produce an English sentence that describes the highlighted cells in the context of the given table. | en | 136k | Highlighted Table\nXSum ( | Highlight relevant points in a news article. | en | *25k | Articles\nWebNLG | Produce a text that verbalises the input triples in a grammatical and natural way. | en/ru | 50k | RDF triple\nWikiAuto + Turk/ASSET (Xu et al., 2016; Alva-Manchego et al., 2020) | Communicate the same information as the source sentence using simpler words and grammar. | en | 594k | Sentence\nWikiLingua | Produce high quality summaries of an instructional article. | *ar/cs/de/en/es/fr/hi/id/it/ja/ko/nl/pt/ru/th/tr/vi/zh | *550k | Article\nTable 1: A description of all the datasets included in GEM. The tasks vary in communicative goal, data size, and input type. * indicates changes from the originally published dataset made for GEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "(NLU) tasks. They aggregate multiple tasks under a unified evaluation framework, which enables researchers to fairly compare their models to others. Due to the improved model comparability, benchmarks are critical in measuring modeling progress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "However, they also pose a risk that progress is reduced to the single number shown in a benchmark's leaderboard and thus may encourage blindly optimizing it without regard to other considerations like model size or fairness (Ethayarajh and Jurafsky, 2020). This is especially challenging for benchmarks in NLG since, as discussed above, the performance cannot be described through a single metric and it is often not clear which metric to optimize for. This shortfall can be seen in benchmarks like DecaNLP (McCann et al., 2018) and GLGE, which include NLG tasks but focus only on a single metric and, as a result, may mischaracterize a system's performance.",
"cite_spans": [
{
"start": 224,
"end": 255,
"text": "(Ethayarajh and Jurafsky, 2020)",
"ref_id": "BIBREF25"
},
{
"start": 499,
"end": 528,
"text": "DecaNLP (McCann et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "Moreover, an easy-to-use data infrastructure also disincentivizes researchers from interacting with and conducting in-depth analyses of the datasets that models are trained on. This limited analysis delegates to the creators of the benchmark the responsibility of ensuring that all included datasets have been collected fairly (Denton et al., 2020). The dataset and benchmark creators thus must provide in-depth statements that describe the data characteristics and surface potential issues and consider these issues when selecting datasets for a benchmark (Gebru et al., 2018; Bender and Friedman, 2018).",
"cite_spans": [
{
"start": 325,
"end": 346,
"text": "(Denton et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 556,
"end": 576,
"text": "(Gebru et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 577,
"end": 603,
"text": "Bender and Friedman, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "These dangers emphasize that selecting datasets for a benchmark needs to be done carefully, that the setup has to remain flexible to be able to address newly found limitations, and that the benchmark should not focus solely on climbing a leaderboard. Instead, a living benchmark that can adjust its datasets and specific evaluation metrics can be much more powerful and long-lived. This can, for example, be seen in Dynabench, 1 (Potts et al., 2020) which has a static evaluation, but interactively adds more test data through a human-in-the-loop approach.",
"cite_spans": [
{
"start": 411,
"end": 412,
"text": "1",
"ref_id": null
},
{
"start": 413,
"end": 433,
"text": "(Potts et al., 2020)",
"ref_id": "BIBREF93"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "Increasing multilingualism of NLG research. Another potentially harmful choice by benchmark creators is the choice of the languages of the included datasets. It is often assumed that work on English transfers to other languages (Bender, 2011). However, this assumption does not consider differences between the languages that lead to higher modeling complexity, for example, a richer morphology or flexible word order. Still, the majority of work in NLP and almost all benchmarks exclusively focus on English (e.g., Wang et al., 2019b; McCann et al., 2018). Even if multiple languages are considered, the availability of data in a language often does not represent the number of speakers of a language. This means that work on languages with little available data can potentially impact many more people than work on highly resourced languages (Joshi et al., 2020).",
"cite_spans": [
{
"start": 228,
"end": 242,
"text": "(Bender, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 519,
"end": 538,
"text": "Wang et al., 2019b;",
"ref_id": "BIBREF117"
},
{
"start": 539,
"end": 559,
"text": "McCann et al., 2018)",
"ref_id": "BIBREF76"
},
{
"start": 848,
"end": 868,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "As a result, many recent benchmarking and dataset creation efforts in NLU develop and focus on tasks that are inherently multilingual or which explore cross-lingual transfer. For example, XTREME (Hu et al., 2020) introduces a benchmark covering 40 languages across multiple NLU and retrieval tasks, XCOPA (Ponti et al., 2020) is a commonsense reasoning dataset for eleven languages, and MLQA ) is a dataset for extractive question answering across seven languages. We can observe a similar recent trend in natural language generation, where MLSum and WikiLingua (Ladhak et al., 2020) were created as multilingual summarization datasets. There have also been first steps toward including NLG tasks in multilingual NLU benchmarks. For example, XGLUE includes Question and News Title Generation (Liang et al., 2020). Unfortunately, XGLUE reduces the generation evaluation to BLEU-4, a metric that is inadequate for NLG (Reiter, 2018).",
"cite_spans": [
{
"start": 195,
"end": 212,
"text": "(Hu et al., 2020)",
"ref_id": "BIBREF43"
},
{
"start": 793,
"end": 813,
"text": "(Liang et al., 2020)",
"ref_id": null
},
{
"start": 918,
"end": 932,
"text": "(Reiter, 2018)",
"ref_id": "BIBREF100"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "There have also been multiple shared tasks in NLG that focus on multilingualism, for instance, the shared task on multilingual surface realization, which includes eleven languages . The shared task on document-level generation and translation featured German and English generation challenges (Heafield et al., 2020). The WebNLG+ shared task asked participants to contribute models that can realize text in Russian and English (Ferreira et al., 2020).",
"cite_spans": [
{
"start": 292,
"end": 314,
"text": "(Heafield et al., 2020",
"ref_id": "BIBREF39"
},
{
"start": 407,
"end": 450,
"text": "Russian and English (Ferreira et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "A benchmark that focuses only on NLG can enable much richer evaluation (as described in the next sections), and promote non-English datasets. In addition, it can ensure that the datasets created for those shared tasks continue being evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "Providing a testbed for automated evaluation. Most traditional automated metrics, such as ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002), measure the n-gram overlap between a reference and the generated text. However, in most cases, there is more than one correct way to generate a text, especially in tasks with a latent content planning or selection step (Reiter and Dale, 2000). That means that a correct solution may score low on a metric. While multiple references alleviate the issue somewhat, these metrics still have a low correlation with human judgments (Reiter, 2018; Fabbri et al., 2020). To address the issue, the machine translation community has been organizing yearly metrics shared tasks which produce metrics that achieve a high correlation (Stanojevi\u0107 et al., 2015; Bojar et al., 2016, 2017; Ma et al., 2018, 2019; Mathur et al., 2020b). The latest metrics focus on semantic equivalence instead of lexical similarity, which improves the correlations drastically. However, recent work by Fabbri et al. (2020) demonstrates that this may not hold in summarization, where the automated metric BERTScore (Zhang et al., 2020b) does not improve upon the correlation of ROUGE. Moreover, Mathur et al. (2020a) and Freitag et al. (2020) find that when comparing two high-quality systems, differences according to a metric may also stem from how references are written or flaws in the metric itself. 2 Given that automated metrics perform differently across tasks, setups, and languages, a multi-task NLG benchmark has the opportunity to act as a testbed to evaluate how the latest advances in automated metrics perform on these different tasks. The benchmark can facilitate this research through the release of system outputs and associated human annotations, which is what we are planning to do with GEM. Moreover, we allow the integration of additional metrics into our living benchmark system, which enables a much faster adoption.",
"cite_spans": [
{
"start": 96,
"end": 107,
"text": "(Lin, 2004)",
"ref_id": "BIBREF62"
},
{
"start": 117,
"end": 140,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF88"
},
{
"start": 362,
"end": 385,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF101"
},
{
"start": 570,
"end": 584,
"text": "(Reiter, 2018;",
"ref_id": "BIBREF100"
},
{
"start": 585,
"end": 605,
"text": "Fabbri et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 766,
"end": 791,
"text": "(Stanojevi\u0107 et al., 2015;",
"ref_id": "BIBREF112"
},
{
"start": 792,
"end": 810,
"text": "Bojar et al., 2016",
"ref_id": "BIBREF7"
},
{
"start": 811,
"end": 831,
"text": "Bojar et al., , 2017",
"ref_id": "BIBREF6"
},
{
"start": 832,
"end": 847,
"text": "Ma et al., 2018",
"ref_id": "BIBREF70"
},
{
"start": 848,
"end": 865,
"text": "Ma et al., , 2019",
"ref_id": "BIBREF71"
},
{
"start": 866,
"end": 887,
"text": "Mathur et al., 2020b)",
"ref_id": null
},
{
"start": 1039,
"end": 1059,
"text": "Fabbri et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 1151,
"end": 1172,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF125"
},
{
"start": 1231,
"end": 1252,
"text": "Mathur et al. (2020a)",
"ref_id": "BIBREF73"
},
{
"start": 1257,
"end": 1278,
"text": "Freitag et al. (2020)",
"ref_id": "BIBREF32"
},
{
"start": 1441,
"end": 1442,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks in NLG",
"sec_num": "2"
},
{
"text": "Developing reproducible human evaluation standards. In recent work, Howcroft et al. (2020) investigated NLG papers from the last twenty years and found that evaluation methodologies differ drastically across papers. Moreover, in most cases, papers do not even mention what the human evaluation aims to measure, and definitions of measures like \"accuracy\" or \"fluency\" are inconsistent. They thus suggest reporting standards for criteria and methods, following a classification system proposed by . In addition, regularly scheduled shared tasks like WMT have led to the standardization of human evaluation setups and enabled controlled experimentation with them. GEM has the opportunity to develop reproducible standards for how human evaluation for NLG tasks beyond translation should be conducted while at the same time incorporating lessons from related work. Acting on the same need, the recently proposed GENIE (Khashabi et al., 2021) system aims to automate and standardize the human evaluation of different NLG systems, however with the contrasting goal of reducing the evaluation to a leaderboard-like score. To avoid further fragmentation of the field, GEM is developing its own human evaluation approaches, but uses the infrastructure provided by GENIE to run its human evaluation.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "Howcroft et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 907,
"end": 929,
"text": "(Khashabi et al., 2021",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "en 594k Sentence",
"sec_num": null
},
{
"text": "In addition to GENIE, multiple other related efforts exist that work toward the goal of reproducible and robust in-depth human and automatic evaluation for NLG tasks, and which focus on specific modeling-or task-aspects that are different from those in GEM. Among those are KILT (Petroni et al., 2020) which focuses on knowledge-intensive tasks and retrieval-based models, Storium (Akoury et al., 2020) which focuses on open-ended story generation, and BIG bench 3 which focuses on measuring few-shot and zero-shot capabilities of language models.",
"cite_spans": [
{
"start": 279,
"end": 301,
"text": "(Petroni et al., 2020)",
"ref_id": null
},
{
"start": 381,
"end": 402,
"text": "(Akoury et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "en 594k Sentence",
"sec_num": null
},
{
"text": "As highlighted in Figure 1 , the selection of included datasets is an integral part of a benchmark. They should be challenging for models, but it should still be possible to evaluate models trained on them. Moreover, the datasets should cover a wide range of relevant generation challenges that allow for findings to be as general as possible. Finally, the datasets should cover tasks that are interesting for contributors to work on to facilitate the wide adoption of the benchmark.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "To collect datasets with those desired properties, the selection methodology for GEM is composed of three steps. First, we elicited a set of proposals from everyone involved in the effort. Second, we identified criteria for the selection. Third, all GEM members voted on individual dataset and criteria utilities. The final selection maximizes the utility under constrained resources, similar to a knapsack solver. 4 This can be seen as an extension of the selection process of SuperGLUE (Wang et al., 2019a) that had similar first and second steps but made the final decision based on which were harder for a baseline model to solve after identifying a final set of candidate datasets. Since we are going to introduce challenge sets, the baseline performance of models on a dataset matters less.",
"cite_spans": [
{
"start": 415,
"end": 416,
"text": "4",
"ref_id": null
},
{
"start": 488,
"end": 508,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF116"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
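The utility-maximization step above can be sketched as a 0/1 knapsack over dataset utilities and resource costs. The ratings for E2E, XSum, and ToTTo below come from the survey results reported later in the paper; the costs and budget are purely illustrative, as the paper does not publish its resource model.

```python
# Hypothetical sketch of GEM's dataset selection: maximize summed utility
# (mean survey rating) under a fixed resource budget, knapsack-style.
# Ratings for the first three datasets are from the paper; costs are invented.

def select_datasets(candidates, budget):
    """candidates: list of (name, utility, integer cost); returns chosen names."""
    n = len(candidates)
    # Classic 0/1 knapsack dynamic program over integer budgets.
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, utility, cost) in enumerate(candidates, start=1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if cost <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - cost] + utility)
    # Backtrack to recover the chosen set.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        name, _, cost = candidates[i - 1]
        if best[i][b] != best[i - 1][b]:
            chosen.append(name)
            b -= cost
    return chosen[::-1]

candidates = [("e2e_nlg", 0.577, 1), ("xsum", 0.538, 2),
              ("totto", 0.461, 2), ("mlsum", 0.30, 3)]
print(select_datasets(candidates, budget=5))
```

With the illustrative costs, the budget of 5 admits the three highest-rated datasets but not MLSum, mirroring how a solver trades off utility against resources.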
{
"text": "Dataset Elicitation. In the first step, all GEM participants were asked to suggest datasets following the schema provided in Appendix A. The categories included multiple brief categorizations, such as a description of the challenge that this dataset provides, its high-level task, and the communicative goal of an agent trained on the data. Following our goal to focus on non-English languages, we further asked for the languages included in the dataset, as well as the language locale. This step yielded 35 proposed datasets, listed in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "Estimating Task+Criterion Utility. The second step focused on the selection of criteria to inform the selection. The initial set of criteria was selected through open discussion involving all members. We split criteria into \"hard\" and \"soft\" ones -hard criteria would lead to the definite inclusion/exclusion of a task if (not) satisfied. Soft criteria inform the utility of the remaining tasks. All GEM members filled out a survey asking them to rate, on a 5-point Likert scale, how much they wanted to see a task included in GEM. Additionally, we posed yes/no questions for all considered hard criteria and various questions about the soft criteria (e.g., \"what percentage of the tasks should feature non-English language?\", or \"do we prefer noisy or clean datasets?\"). Finally, the survey included open text fields that asked for (1) comments on any of the tasks, (2) comments or suggestions on hard exclusion criteria, and (3) suggestions of additional criterion/criteria. The full list of questions is shown in Appendix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "The survey received 28 responses, revealing that the initial version of GEM should include a median of 10 tasks or an average of 12. Of those tasks, about a third should feature non-English language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "Selected Criteria. For the hard criteria, there was an agreement to focus only on open-access datasets and that concurrent or past shared tasks for the same datasets are not an issue. Overall, the sentiment determined the following selection principles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We focus on diverse high-level tasks over a single high-level task evaluated in-depth. However, each high-level task should include multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We focus on clean datasets to avoid conflating model mistakes and learned noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We include a mix of high-and low-resource datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We focus on data with interesting test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We should not focus on the quality of current evaluation strategies for a given dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "\u2022 We prefer multi-reference datasets since those have been shown to lead to more robust automatic evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "High-Level Tasks. Since these principles dictate that we should focus on a small set of high-level tasks, we used the free-text replies to evaluate the interest in different high-level tasks. Grouping the proposed tasks yielded the following candidates: Summarization, Dialog, Simplification/Compression, Question Answering, Creative Writing, Data-to-Text, and Question Generation. 5 There was a preference to exclude image inputs and question answering because those tasks add complexity to the evaluation beyond the generated text. Moreover, since creative generation tasks like story generation and poetry generation suffer even more from inadequate evaluation approaches, there was a consensus to not include them. There was, however, a strong preference for the high-level tasks Summarization, Data-to-text, and Dialog. 6 5 For a full overview of potential future expansions and challenges, we refer to the survey by Gatt and Krahmer (2018) . 6 One may question the absence of Translation from this list. While it is a generation task, we excluded it since Translation already has regular benchmarking efforts with WMT.",
"cite_spans": [
{
"start": 382,
"end": 383,
"text": "5",
"ref_id": null
},
{
"start": 825,
"end": 826,
"text": "6",
"ref_id": null
},
{
"start": 922,
"end": 945,
"text": "Gatt and Krahmer (2018)",
"ref_id": "BIBREF36"
},
{
"start": 948,
"end": 949,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "Specific Datasets. The final selection is shown in Table 1 . To arrive at the selection, we first ranked all datasets by their average rating. For this, we treated positive ratings as 1, negative ratings as -1, and neutral ratings as 0. The highestranked datasets were E2E with 0.577, XSum with 0.538, and ToTTo with 0.461. Unfortunately, non-English datasets were ranked lower, with only WebNLG and MLSum among the top 15 datasets. We grouped all datasets by their high-level tasks and selected a group that would not violate the selection principles (e.g., only high-resource tasks). If two datasets fit, we picked the one with a higher interest rating. Among the 11 datasets, we have 18different languages, and the dataset sizes range from 5,000 examples to 1.5M, with most datasets between 50-150k examples. Two of them do not include English at all, which we hope reduces the dependence of the modeling approaches on anglocentric pretraining (Anastasopoulos and Neubig, 2020). The high-level tasks include Dialog, Summarization, Data-to-Text, and Simplification. About half of the datasets have multiple references and more than half had post-processing steps applied to them to ensure high data quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3"
},
{
"text": "We produce data cards (Bender and Friedman, 2018; Gebru et al., 2018) for all data sets in GEM, for which we developed an NLG-specific template. 7 In addition to describing the data itself, the cards acknowledge potential limitations of a dataset regarding its creation process and describe its real-world use cases to ensure that the research is conducted responsibly.",
"cite_spans": [
{
"start": 22,
"end": 49,
"text": "(Bender and Friedman, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 50,
"end": 69,
"text": "Gebru et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 145,
"end": 146,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GEMifying the data",
"sec_num": "3.1"
},
{
"text": "These datasets are the base selection, and as part of GEM, we may change datasets and how they are used. For example, we may improve the training sets, make the test sets more challenging, or probe for specific skills a model must exhibit with testonly datasets Linzen, 2020; Ribeiro et al., 2020; Schlegel et al., 2020) . We may also ask to evaluate a single model on multiple test sets, following the design by Dua et al. (2019) .",
"cite_spans": [
{
"start": 262,
"end": 275,
"text": "Linzen, 2020;",
"ref_id": "BIBREF63"
},
{
"start": 276,
"end": 297,
"text": "Ribeiro et al., 2020;",
"ref_id": "BIBREF102"
},
{
"start": 298,
"end": 320,
"text": "Schlegel et al., 2020)",
"ref_id": "BIBREF103"
},
{
"start": 413,
"end": 430,
"text": "Dua et al. (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GEMifying the data",
"sec_num": "3.1"
},
{
"text": "We are including modifications to several of the datasets: (1) MLSum: We excluded all languages besides Spanish and German since the sources for other languages disallow scraping content. Addi- tionally, we removed all duplicate items (i.e., items with the same input text) and we used langdetect 8 to filter out examples that were in the wrong language. In total, 147 examples were removed from the German portion (0.06%) and 7417 examples were removed from the Spanish portion (2.5%). 2XSum: Summaries in this dataset often have divergence issues between the source and target texts since gold summaries are introductory sentences prefacing each article. Models agnostic to such noises are vulnerable to hallucinations Dhingra et al., 2019) . To combat this, we fine-tuned a BERT-based (Devlin et al., 2019) classifier on 500 document and gold summary pairs, manually annotated for faithfulness (Maynez et al., 2020) and excluded all document-summary pairs from the original XSum dataset where the classifier was not confident (p(faithful) > 0.8) whether the summary is faithful to the document or not. 3Schema-Guided Dialog: We are focusing on the response-generation part of the dataset and thus reformatted the dataset to treat the service agent utterances as the targets to be generated and the previous customer utterance and the agent's dialog act as the input. We additionally reformat the dialog acts to directly conform to the format described in the paper (Kale and Rastogi, 2020) . 4Wik-iLingua: We focus on the same five languages that were benchmarked in its original release (en, es, ru, tr, vi) in a cross-lingual setup in which the inputs are in the respective language and the outputs are in English. However, we re-split the original data to avoid train-test overlaps between languages and provide training data in 13 additional languages (as shown in Table 1 ). 
For GEM, we allow submis- 8 https://pypi.org/project/langdetect/ sions trained on any of the languages in isolation or as part of a multilingual model.",
"cite_spans": [
{
"start": 297,
"end": 298,
"text": "8",
"ref_id": null
},
{
"start": 721,
"end": 742,
"text": "Dhingra et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 788,
"end": 809,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 897,
"end": 918,
"text": "(Maynez et al., 2020)",
"ref_id": "BIBREF75"
},
{
"start": 1468,
"end": 1492,
"text": "(Kale and Rastogi, 2020)",
"ref_id": "BIBREF49"
},
{
"start": 1909,
"end": 1910,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1872,
"end": 1879,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "GEMifying the data",
"sec_num": "3.1"
},
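The MLSum cleaning described above (removing duplicate inputs and wrong-language examples) can be sketched as a filter over the raw split. The paper uses the langdetect library for language identification; the stopword-overlap heuristic below is only a toy stand-in for it, and the example texts are invented.

```python
# Sketch of the MLSum cleaning step: drop duplicate inputs and examples whose
# text appears to be in the wrong language. A stopword-overlap heuristic stands
# in for langdetect here so the sketch stays self-contained.

STOPWORDS = {
    "de": {"der", "die", "das", "und", "ist", "nicht", "ein"},
    "es": {"el", "la", "los", "las", "y", "es", "un", "una"},
}

def guess_language(text):
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

def clean_split(examples, expected_lang):
    """examples: list of dicts with 'text' (input article) and 'summary'."""
    seen, kept = set(), []
    for ex in examples:
        if ex["text"] in seen:          # remove duplicate items (same input text)
            continue
        seen.add(ex["text"])
        if guess_language(ex["text"]) != expected_lang:  # wrong-language filter
            continue
        kept.append(ex)
    return kept

examples = [
    {"text": "das ist ein Test und nicht mehr", "summary": "Test"},
    {"text": "das ist ein Test und nicht mehr", "summary": "Test"},  # duplicate
    {"text": "la casa es un lugar y los gatos", "summary": "casa"},  # Spanish
]
print(len(clean_split(examples, "de")))  # → 1
```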
{
"text": "In addition to applying consistent metrics to existing test sets, understanding specific model behavior, such as model generalization capabilities or performance under targeted cases, is also key for improvement. This is difficult to assess through evaluations on i.i.d. test splits. We thus release challenge sets to evaluate data-to-text and text-to-text models (overview in Table 2 ). In addition to enabling a more specific breakdown of how a model performs in the presence of challenging inputs, the set of system outputs on these test sets also constitutes a rich corpus that enables further error analysis and research. We apply multiple strategies to create the special test sets, in particular (I) alteration of the existing test sets (e.g., the introduction of distractors), (II) breaking down of the existing sets into subsets with certain properties (e.g., subsets with different complexity), and (III) the compilation of new test sets (e.g., out-of-vocabulary inputs). We restrict the size of each challenge set to about 500 examples to minimize computational overhead. On the WebNLG challenge sets, all subset items are selected proportionally from each category to ensure a similar distribution to the original set; on all other datasets the subset items are selected from the whole set. The results of the different systems on these subsets will be compared to the results obtained by the same systems on the same subsets of the original test data. For case (I), altering existing test sets, the first challenge set adds numerical variation in WebNLG. This variation attempts to respect the format of the current cardinal value (e.g. alpha, integer, or floating-point) and replaces the existing value with a new random value as a means to challenge existing trained models. The generated number is lower-bounded between zero and upper bounded to be within to the highest power of 10 unit for the given value (e.g. 
replacing a value of 54 would result in a random value between 0-100). Floating values are also bounded to have the same degree of precision as the input value. For structureto-text and dialog datasets, we produce a version of the test sets in which the order of the components of the input structures (triples, concepts, dialog acts, table rows, etc.) is randomly changed. For text-to-text datasets and Schema-guided Dialog, we introduce several types of perturbations: (a) typographical errors, using butter-fingers 9 with two thresholds 0.02 and 0.05, which respectively correspond to lower and higher error frequencies; (b) removal of the final punctuation sign (if any); (c) substitution of the input text by a backtranslated version, using the backtranslation implementation by Xie et al. (2020) . We rejected backtranslation outputs based on a character length to ensure that the difference in character length between original and backtranslation does not exceed 35% of the original source character length. For XSum 99.8% of the backtranslations were accepted, for Wiki-Auto 94.42% (ASSET) and 87.18% (TURK), and for Schema-Guided Dialog 78%.",
"cite_spans": [
{
"start": 2714,
"end": 2731,
"text": "Xie et al. (2020)",
"ref_id": "BIBREF119"
}
],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Challenge Sets",
"sec_num": "3.2"
},
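The numerical-variation perturbation described above (a random replacement bounded at zero and by the next power of ten, preserving the decimal precision of floats) can be sketched as follows. The exact sampling code used for GEM may differ; this only follows the description in the text, and edge cases such as scientific notation are ignored.

```python
import math
import random

# Sketch of the "numerical variation" challenge-set perturbation: replace a
# numeric value with a random one in [0, next power of ten), keeping the
# decimal precision of floating-point inputs.

def perturb_number(value, rng):
    upper = 10 ** math.ceil(math.log10(abs(value))) if value != 0 else 10
    if isinstance(value, int):
        return rng.randrange(upper)          # e.g. 54 -> random int in [0, 100)
    # Keep the same number of decimal places as the original float
    # (assumes plain decimal notation, not scientific notation).
    decimals = len(str(value).split(".")[1])
    return round(rng.uniform(0, upper), decimals)

rng = random.Random(0)
print(perturb_number(54, rng))    # integer in [0, 100)
print(perturb_number(3.25, rng))  # float in [0, 10) with two decimal places
```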
{
"text": "In case (II), the breaking down existing sets, we first provide for each dataset random samples of training and validation data, in order to assess to what extent the scores of the different systems drop when run on the test data. Then, specific splits are created for particular datasets, in order to assess possible biases of the models, and their robustness across inputs with different specifications. For ToTTo, test set splits are built according to several aspects that can be identified using Wiki-Data: gender, ethnicity and nationality grouped by continent. For gender, we compare the performance between male and female people, but cannot compare other genders due to a lack of original data -only seven people in the original test set are marked as having a different gender. We compare across the continent of the underlying nationality to address the issue that data for each country can be very sparse -i.e., only 19 coun- 9 https://github.com/alexyorke/ butter-fingers tries are represented by more than ten people and only one of these is located in Africa (Kenya). In case a person has citizenships across multiple continents, we may include the person in any of the included continents. Finally, we compare African Americans vs. all Americans. Ethnicity is very sparsely annotated in WikiData with fewer than 150 annotated test examples in total and 128 of these are African Americans. We thus are unable to compare the performance on, e.g., Yoruba or Punjabi people, both of which have fewer than five instances. Another caveat here is that only 21 of the 128 people are female. Our contrast subset that can include any US citizens matches these counts. Across all three challenge subsets, we additionally match the fraction of the existing non-overlap and overlap properties. For WebNLG, we propose subsets based on the shape of the inputs (number of triples, number of common subjects and/or objects, depth, etc.) 
For Turk/ASSET, splits are created in terms of the syntactic complexity of the sentences to be simplified. To characterise sentence complexity we use the developmental level scale proposed by Covington et al. (2006). 10 Although Turk and ASSET contain similar input sentences, the human references in Turk were created without allowing sentence splits and ASSET was created by encouraging annotators to split long sentences. For all datasets, we propose splits based on the frequency of the parts that compose the input in the training data; the resulting test sets range from being made of very common components to being made only from components unseen in the training data. For case (III), we collect time-shifted test data for news summarization in the form of articles with Covid19-related keywords. Since MLSum and XSum were collected before the pandemic, we can measure how a model responds to context not seen in the training data (outside of potential pretraining). The new set of articles covers existing article topics (economy, sports, etc.) but all in relation to the Covid19 pandemic. In addition, some new topics appear in the collected data derived from outlet sections that were not part of the original data collection. 11 ",
"cite_spans": [
{
"start": 938,
"end": 939,
"text": "9",
"ref_id": null
},
{
"start": 2153,
"end": 2155,
"text": "10",
"ref_id": null
},
{
"start": 3175,
"end": 3177,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Sets",
"sec_num": "3.2"
},
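The frequency-based splits described above (bucketing test inputs by whether their components occur in the training data) can be sketched as below. What counts as a "component" is task-specific (triples, concepts, table cells); whitespace tokens stand in here, and all data is invented for illustration.

```python
from collections import Counter

# Sketch of frequency-based challenge splits: a test input goes to "unseen"
# if any of its components never occurs in the training data, else to "seen".
# Whitespace tokens stand in for task-specific components.

def frequency_splits(train_inputs, test_inputs):
    train_counts = Counter(tok for inp in train_inputs for tok in inp.split())
    seen, unseen = [], []
    for inp in test_inputs:
        if all(train_counts[tok] > 0 for tok in inp.split()):
            seen.append(inp)
        else:
            unseen.append(inp)
    return seen, unseen

train = ["Alice born Paris", "Bob born London"]
test = ["Alice born London", "Carol born Berlin"]
print(frequency_splits(train, test))
# → (['Alice born London'], ['Carol born Berlin'])
```

A graded version would sort the "seen" bucket by mean training frequency to produce the very-common-to-unseen range the text describes.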
{
"text": "Since the GEM test sets and final metrics selection have not been released yet, we describe an experimental setup that will ensure that participating models are trained correctly and evaluated on publicly available data with available metrics that will give a sufficient indication of a model's performance. To do this, we are reporting the results of the baseline models on the validation sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Much of the recent modeling progress in NLP can be attributed to the rise of the pretrain-then-finetune paradigm which has led to consistently better results. This finding is consistent with human judgments for summarization, as shown by Fabbri et al. (2020) , among others. However, many of the tasks included in GEM may not benefit from a language model encoder since their input is not natural language. We thus apply a variety of different architectures that vary in size, complexity, and training schema. Our main baselines are T5 with 60M parameters (Raffel et al., 2020) and BART with 139M parameters (Lewis et al., 2020a) . For non-English datasets, we use their multilingual counterparts mT5 in various sizes (Xue et al., 2020) and mBART (Liu et al., 2020b) . We additionally train the following baselines on a subset of tasks: TGen (with added language model and lemma tags denoted as TGen+/++) (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016b) , an architecture for generation from dialog acts, an LSTM-based Sequence-to-sequence model with attention (Bahdanau et al., 2015), DialoGPT (Zhang et al., 2020c ), a pretraining approach for conversational models, and PEGASUS , which uses a summarization-specific pretraining schema that masks and predicts entire sentences.For WikiLingua, we additionally report results on a setup proposed by which includes first training a monolingual model followed by finetuning with the correct source language, coupled with synthetic data generated through translation (mBART+). Almost all baselines can be reproduced on a GPUbased colaboratory notebook within 2-3 hours.",
"cite_spans": [
{
"start": 238,
"end": 258,
"text": "Fabbri et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 556,
"end": 577,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF96"
},
{
"start": 608,
"end": 629,
"text": "(Lewis et al., 2020a)",
"ref_id": "BIBREF56"
},
{
"start": 718,
"end": 736,
"text": "(Xue et al., 2020)",
"ref_id": null
},
{
"start": 747,
"end": 766,
"text": "(Liu et al., 2020b)",
"ref_id": "BIBREF66"
},
{
"start": 905,
"end": 932,
"text": "(Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016b)",
"ref_id": "BIBREF22"
},
{
"start": 1074,
"end": 1094,
"text": "(Zhang et al., 2020c",
"ref_id": "BIBREF127"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Baselines",
"sec_num": "4.1"
},
{
"text": "As mentioned above, GEM provides a testbed for automated metrics and can be used to popularize newly developed ones. Thus, models are evaluated via a constantly expanding list of metrics and, to avoid overfitting to known metrics, we will use metrics on the test submissions that are not included in this initial writeup. Consequentially, the baseline results are an incomplete list which will be expanded upon the announcement of the test metrics. The set of metrics can be computed via the framework described at https://gem-benchmark. com/shared_task which comprises metrics in the following categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
{
"text": "Lexical Similarity. We include multiple \"traditional\" metrics as baseline metrics, notably BLEU (Papineni et al., 2002) , ROUGE-1/2/L (Lin, 2004) , and METEOR (Banerjee and Lavie, 2005). These metrics can often be gamed, for example, ROUGE can be improved by increased the output length of the model (Sun et al., 2019) . Moreover, the reliability of these metrics depends on the quality and number of the references (Mathur et al., 2020a; Freitag et al., 2020) . However, on a system-level, they still correlate well with human judgments for some tasks (Reiter, 2018) .",
"cite_spans": [
{
"start": 96,
"end": 119,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF88"
},
{
"start": 134,
"end": 145,
"text": "(Lin, 2004)",
"ref_id": "BIBREF62"
},
{
"start": 300,
"end": 318,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF113"
},
{
"start": 416,
"end": 438,
"text": "(Mathur et al., 2020a;",
"ref_id": "BIBREF73"
},
{
"start": 439,
"end": 460,
"text": "Freitag et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 553,
"end": 567,
"text": "(Reiter, 2018)",
"ref_id": "BIBREF100"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
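The length-gaming effect noted above can be demonstrated with a minimal unigram ROUGE-1 recall (a deliberate simplification; the official ROUGE toolkit also computes precision, F-scores, and higher-order variants). Because recall only rewards covering reference words, padding the output with extra words is never penalized.

```python
from collections import Counter

# Simplified ROUGE-1 recall: fraction of reference unigrams (with counts)
# covered by the candidate. Illustrates why longer outputs can only help.

def rouge1_recall(candidate, reference):
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

reference = "the cat sat on the mat"
short = "the cat sat"
padded = "the cat sat on the mat and also many other things happened today"
print(rouge1_recall(short, reference))   # → 0.5
print(rouge1_recall(padded, reference))  # → 1.0, despite the irrelevant padding
```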
{
"text": "Semantic Equivalence. More recently, metrics that rely on pretrained language models have shown improved correlations with human judgments on the segment-level. We thus include BERTScore (Zhang et al., 2020b) , a metric based on the similarity of sentence embeddings, and BLEURT (Sellam et al., 2020) , a metric that is fine-tuned on human ratings. The reported baseline results use RoBERTa-large and mBERT (Devlin et al., 2019) for BERTScore and the English-only BLEURT-base-128 for BLEURT.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF125"
},
{
"start": 279,
"end": 300,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF107"
},
{
"start": 407,
"end": 428,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
{
"text": "Probing for Faithfulness. Another approach that has shown promise in summarization. The approach relies on the insight that a reader of a reference and generated summary should be able to answer the same question, regardless of how the summary is phrased. There has been much development toward these QA-based approaches (Eyal et al., 2019; Scialom et al., 2019; Wang et al., 2020, among others) and they can provide an alternative angle to model evaluation that does not highly correlate with other evaluation approaches (Fabbri et al., 2020) . While most related work on these metrics is limited to summarization, we are evaluating systems using a QA-based method called QuestEval (Scialom et al., 2021) that supports all of our tasks.",
"cite_spans": [
{
"start": 321,
"end": 340,
"text": "(Eyal et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 341,
"end": 362,
"text": "Scialom et al., 2019;",
"ref_id": "BIBREF106"
},
{
"start": 363,
"end": 395,
"text": "Wang et al., 2020, among others)",
"ref_id": null
},
{
"start": 522,
"end": 543,
"text": "(Fabbri et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 683,
"end": 705,
"text": "(Scialom et al., 2021)",
"ref_id": "BIBREF105"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
{
"text": "In addition to QA-based evaluation, there have also been related efforts to develop more fine- Table 4 : Results of the baseline results we release with GEM, focusing on diversity of the outputs and neutral system characterizations. grained and interpretable evaluation metrics, for example to measure consistency in data-to-text problems (Opitz and Frank, 2020; Dhingra et al., 2019) . We are using one such metric called NUBIA (Kane et al., 2020) , the NeUral Based Interchangeability Assessor, which combines multiple measures such as entailment and similarity into a decomposable and interpretable score.",
"cite_spans": [
{
"start": 339,
"end": 362,
"text": "(Opitz and Frank, 2020;",
"ref_id": "BIBREF87"
},
{
"start": 363,
"end": 384,
"text": "Dhingra et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 429,
"end": 448,
"text": "(Kane et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
{
"text": "Diversity. As argued by Hashimoto et al. (2019) among many others, NLG models intrinsically trade off diversity and quality. A model can produce more diverse outputs through sampling but at the cost of output quality. To account for this aspect, we compute multiple diversity metrics, starting with those proposed for the analysis of the results of the E2E NLG challenge (Dusek et al., 2020) and by van Miltenburg et al. (2018). These include the Shannon Entropy (Shannon and Weaver, 1963) over unigrams and bigrams (H 1 , H 2 ), the mean segmented type token ratio over segment lengths of 100 (MSTTR, Johnson, 1944) , the ratio of distinct n-grams over the total number of n-grams (Distinct 1,2 ), and the count of n-grams that only appear once across the entire test output (Unique 1,2 , Li et al., 2016) .",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "Hashimoto et al. (2019)",
"ref_id": "BIBREF38"
},
{
"start": 371,
"end": 391,
"text": "(Dusek et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 594,
"end": 616,
"text": "(MSTTR, Johnson, 1944)",
"ref_id": null
},
{
"start": 790,
"end": 806,
"text": "Li et al., 2016)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
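Two of the diversity metrics listed above have compact definitions and can be sketched directly: Distinct-n is the ratio of distinct n-grams to total n-grams, and MSTTR is the mean type-token ratio over fixed-length segments (GEM uses segments of 100 tokens; the demo below shortens this only to keep the example small).

```python
# Sketch of two diversity metrics: Distinct-n (Li et al., 2016) and MSTTR
# (Johnson, 1944), computed over a whitespace-tokenized system output.

def distinct_n(tokens, n):
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

def msttr(tokens, segment_length=100):
    # Mean type-token ratio over non-overlapping full segments.
    segments = [tokens[i:i + segment_length]
                for i in range(0, len(tokens) - segment_length + 1, segment_length)]
    if not segments:  # shorter than one segment: fall back to plain TTR
        return len(set(tokens)) / len(tokens) if tokens else 0.0
    return sum(len(set(seg)) / len(seg) for seg in segments) / len(segments)

tokens = "the cat sat on the mat the cat".split()
print(distinct_n(tokens, 1))              # → 0.625 (5 distinct of 8 unigrams)
print(distinct_n(tokens, 2))              # 6 distinct of 7 bigrams
print(msttr(tokens, segment_length=4))    # → 0.875
```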
{
"text": "System Characterization. The final section of metrics will characterize the systems. While the focus of this section will be on qualitative descriptions through model cards, we also gather quantitative information that is not necessarily associated with a judgment. As part of this, we collect the number of parameters of a system, as suggested by Ethayarajh and Jurafsky (2020) . For each task, we additionally report the vocabulary size over the output (|V|) and the mean output length of a system (Sun et al., 2019) .",
"cite_spans": [
{
"start": 348,
"end": 378,
"text": "Ethayarajh and Jurafsky (2020)",
"ref_id": "BIBREF25"
},
{
"start": 500,
"end": 518,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF113"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
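The quantitative characterization above (output vocabulary size |V| and mean output length) reduces to a small aggregation over a system's test outputs; a minimal sketch, assuming whitespace tokenization:

```python
# Sketch of the neutral system characterization: output vocabulary size |V|
# and mean output length, computed over a system's outputs on a test set.

def characterize(outputs):
    tokenized = [out.split() for out in outputs]
    vocab = set(tok for toks in tokenized for tok in toks)
    mean_len = sum(len(toks) for toks in tokenized) / len(tokenized)
    return {"vocab_size": len(vocab), "mean_output_length": mean_len}

outputs = ["the cat sat", "a cat ran home"]
print(characterize(outputs))  # → {'vocab_size': 6, 'mean_output_length': 3.5}
```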
{
"text": "One of the central aims of GEM is to measure the progress in NLG without misrepresenting the complex interactions between the sometimes contradicting measures. We thus will not distill the complex interplay of the data, metrics, and model outputs into a single number or statement, and we do not present results in a traditional leaderboard. Instead, we developed an interactive result exploration system that allows analyses of model results, and which we describe in this section. To further motivate this change, consider the following conclusion someone may draw from looking at a leaderboard:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "System Foo performs the best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our interactive system aims to enable more nuanced statements such as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "System Foo leads to consistent performance increases in Bar-type metrics on challenges that measure Baz while maintaining equal performance on most metrics of type Qux.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "A screenshot of our system is presented in Figure 2. 12 In addition, our baseline results are presented in a tabular view in Tables 3 and 4 . Our interactive system is centered around a parallel coordinates plot (Inselberg, 1985) which shows all results as lines through parallel axes. Every line intersects the axes at the corresponding mapped value. For instance, see the red line representing the results for task \"ToTTo\" of baseline \"t5-small\". Filters can be applied along axes (see BLEURT axis in Figure 2 ) and the filtered selection is highlighted through bold lines. A selection can be a set of metrics, systems, or tasks. This style of presentation has not been used before for a benchmark. The closest prior work is by Fu et al. (2020) for namedentity recognition which allows similar filtering and sorting, but presents the results in a table. However, the parallel coordinates approach can scale to a much greater number of metrics than a table. Moreover, by using a parallel coordinates plot instead of a table, it is easy to spot patterns that span multiple metrics, systems, or tasks. For example, the highlighted line in Figure 2 uncovers that, for the T5 baseline on ToTTo, the diversity metrics score higher than other systems while scoring lower on reference-based metrics. Since we only have a single baseline for ToTTo, it is unclear whether this difference can be attributed to the dataset or the system but this relationship will be uncovered once we receive submissions.",
"cite_spans": [
{
"start": 212,
"end": 229,
"text": "(Inselberg, 1985)",
"ref_id": "BIBREF44"
},
{
"start": 730,
"end": 746,
"text": "Fu et al. (2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 43,
"end": 52,
"text": "Figure 2.",
"ref_id": "FIGREF0"
},
{
"start": 125,
"end": 139,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
},
{
"start": 503,
"end": 511,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1138,
"end": 1146,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The final system will additionally be able to display the model cards and other related metainformation associated with submissions. It will also be able to show (and compare) exemplary outputs for each test set. Those two features will improve the transparency of the results and systems to those who are not familiar with a task and provide necessary information to those who consider using a particular system. The combination of all components will enable analysis on quantitative, individual, and qualitative level which can support formulating new research hypotheses and gather in-depth insights about system performance. For example, the functionality to compare human anno-tation and automatic measures could lead to a better understanding how fluency affect BERTScore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In addition to the interactive self-directed result exploration, our shared task features an evaluation and analysis part. Instead of dictating the interpretation of the modeling shared task results, we will release all system outputs and metrics in this second part and participants of this part may run their own evaluation and conduct interesting analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "While we ask submitters to try to cover as many tasks as possible, we acknowledge potential restrictions on computation resources. We thus do not require that a submissions to GEM has to include predictions on every included test and challenge sets. All predictions from a model should be formatted and added into a single file as outlined on our website.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "In addition, we require every submitter to answer a series of questions that we will use to construct a model card (Mitchell et al., 2019) and externalize potential concerns regarding the social impact of a model and its use, or its training data. The card will additionally display information to replicate the experiments. While we require responses to these questions at submission time, we allow the information about a model to remain anonymous during required anonymization periods should a paper describing the model be under submission elsewhere. All submitted model outputs will be made publicly available for download.",
"cite_spans": [
{
"start": 115,
"end": 138,
"text": "(Mitchell et al., 2019)",
"ref_id": "BIBREF83"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "After a submission, we will run the evaluation suite on the submitted outputs and additionally collect human annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "Human Evaluation GEM will be used to develop reproducible and consistent human evaluation strategies for generated text. This task involves selecting and defining which quantities of the generated text should be measured, developing annotation schemes and rater guidelines to capture these quantities accurately, and infrastructure to annotate system outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "We aim to develop these setups for all task setups such as summarization, dialogue, simplification, and data-to-text. To approach this task, we will follow the recently proposed taxonomy of human evaluation measures by and follow the reporting strategies proposed by Howcroft et al. (2020) . The detailed setups will be described in a evaluation datasheet (Shimorina and Belz, 2021) .",
"cite_spans": [
{
"start": 267,
"end": 289,
"text": "Howcroft et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 356,
"end": 382,
"text": "(Shimorina and Belz, 2021)",
"ref_id": "BIBREF110"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "All shared task participants will be asked to provide gold annotations on system outputs, which we will then use to evaluate the consistency of crowdsourced annotations. 13 ",
"cite_spans": [
{
"start": 170,
"end": 172,
"text": "13",
"ref_id": "BIBREF148"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submitting to the benchmark",
"sec_num": "6"
},
{
"text": "This section lists the currently active developments and the long-term steps we will take to ensure that GEM will continue to evolve and improve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Steps",
"sec_num": "7"
},
{
"text": "Many of the initial datasets in GEM are focused on (American or British) English; we see this release as a starting point for the collection of new datasets to improve the inclusiveness of other languages and cultures. From the task point of view, to ensure the longevity of the dataset, we want it to be practical and socially beneficial. Through GEM, we have developed a set of desired criteria for NLG datasets and we aim to apply this knowledge to data collection and actively work toward reducing the disparity in data availability between languages (Joshi et al., 2020) . To this end, we are focusing on a task that requires content selection, planning, and surface realization along in a grounded scenario. The idea is in the prototyping stage with prospects broadly towards dialog response generation and topic summarization in multiple languages. We plan to do so by collaborating with speakers of low-resourced languages through a participatory research approach, as suggested by (\u2200 et al., 2020) . Toward this goal, GEM welcomes anyone interested in collaborating on this effort.",
"cite_spans": [
{
"start": 555,
"end": 575,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 990,
"end": 1006,
"text": "(\u2200 et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting more multilingual data",
"sec_num": "7.1"
},
{
"text": "GEM currently focuses on tasks that deterministically transform an input into an output. With the increasing use of NLG models in real-world applications, how to enable and evaluate personalized NLG systems (e.g., in dialect or formality) remains challenging. Several related tasks have been proposed, for example, the transfer of writing style from informal to formal (Rao and Tetreault, 2018) , personalization of machine translation systems to align with particular personal traits (Mirkin and Meunier, 2015) , or persona-guided response generation of dialogue systems . We envision our framework to be extended (e.g., dataset, evaluation) to incorporate this line of userfocused NLG.",
"cite_spans": [
{
"start": 369,
"end": 394,
"text": "(Rao and Tetreault, 2018)",
"ref_id": "BIBREF97"
},
{
"start": 485,
"end": 511,
"text": "(Mirkin and Meunier, 2015)",
"ref_id": "BIBREF82"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personalizing and Controlling NLG",
"sec_num": "7.2"
},
{
"text": "To activate the benefits of a living benchmark that is focused on evaluation, we commit to regular updates for GEM. We invite contributions in the form of model outputs, analyses, and metrics at any time and will automatically update the results presented on our website to incorporate them. For the updates to the dataset selection, we want to consider the input of the wider NLG research community. To do so, we will set up a yearly selection process similar to the one described in Section 3. The first update process will be run after the GEM workshop at ACL 2021. To be able to have a robust comparison between different versions of GEM, we will only replace a small subset of datasets at a time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular updates to the living benchmark",
"sec_num": "7.3"
},
{
"text": "In this paper, we have introduced GEM, a living natural language generation benchmark with a focus on evaluation. While GEM does not claim to instantly solve all issues of benchmarks in NLG, we aim to provide an environment in which systems can be tested in a principled manner and which can elevate the prominence of interesting evaluation approaches. By providing a testbed to easily conduct experiments across many datasets and evaluate in a repeatable, consistent, and more interpretable way, we will be able to track progress toward the goals in NLG research much more clearly. Moreover, we will be able to extend and shape GEM in the future to include more multilingual datasets, which will assist in their adoption across the wider research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "GEM is a large effort with a decentralized organization that is split into different task-specific subgroups. To acknowledge everyone's contribution, we list the contribution statements below for all groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution Statements",
"sec_num": "9"
},
{
"text": "Steering Committee. Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Laura Perez-Beltrachini, Samira Shaikh, and Wei Xu make up the steering committee. Sebastian Gehrmann coordinates and leads the GEM effort. All others provide feedback and discuss larger decisions regarding the direction of GEM and act as conference organizers for the ACL 2021 workshop. Simplification. Dhruv Kumar, Mounica Maddela, and Wei Xu contributed to the GEM Simpli-fication task. Dhruv Kumar created the data cards for the datasets, added Wiki-Auto and Turk/ASSET datasets to TFDS, and integrated the SARI metric (Xu et al., 2016) into the GEM evaluation framework. Mounica Maddela created baselines for the task and added the Turk benchmark corpus to Hugging Face and TFDS. Wei Xu helped in the organization and planning of the task setup. Human Evaluation. Samira Shaikh was the point of contact for this working group. She led the discussions to make progress on the group goals. She also worked with the group to select the general evaluation criteria as well as the criteria for dialogue and simplification tasks. Khyathi Chandu and Miruna Clinciu worked on selecting evaluation criteria for the summarization task and participated in the group discussions. Simon Mille provided support on using the criteria taxonomy and the annotated evaluation sheets for selecting and defining the criteria to use; worked on selecting the D2T criteria. Vitaly Nikolaev and Sashank Santhanam worked on selecting evaluation criteria for dialog and simplification tasks. Jo\u00e3o Sedoc worked with the group to select the evaluation criteria in general as well as the specific ones for dialog and simplification. He also helped to select among annotation interfaces. Anastasia Shimorina worked with the group to select the evaluation criteria and participated in the discussions. Chris Emezue, Sebastian Gehrmann, Khyati Mahajan, and Yufang Hou participated in discussions.",
"cite_spans": [
{
"start": 617,
"end": 634,
"text": "(Xu et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution Statements",
"sec_num": "9"
},
{
"text": "Website and Submission System. Aman Madaan, Moin Nadeem, Hendrik Strobelt, and Sebastian Gehrmann are part of this group. Sebastian Gehrmann developed the website. Aman Madaan wrote the initial version of the result presentation. Hendrik Strobelt leads the visualization effort for interactive exploration of results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "Model Infrastructure. Yacine Jernite wrote the initial script template for evaluating and fine-tuning Hugging Face models with the CommonGen example. Sebastian Gehrmann generalized the script to work with other datasets. Tosin Adewumi wrote a script for fine-tuning the DialoGPT model for dialogue datasets. Juan Diego Rodriguez worked on the infrastructure to fine-tune mBART on MLSum. Mihir Kale trained all mT5 baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "Data and Model Sheets and Statements. Salomey Osei, Pawan Sasanka Ammanamanchi, Juan Diego Rodriguez, Sebastian Gehrmann, Yacine Jernite, and Angelina McMillan-Major are part of this group. The Data Sheet structure was adapted from a combination of designs created for the Hugging Face Datasets library by Angelina McMillan-Major and Yacine Jernite and one written by Sebastian Gehrmann. Juan Diego Rodriguez and Yacine Jernite wrote initial statements for ASSET and Com-monGen respectively. The feedback on those was used to improve the structure of the final template. Everyone contributed to the model card template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "Challenge Sets. Simon Mille, Emiel van Miltenburg, Kaustubh Dhole, Varun Prashant Gangal, Saad Mahamood, and Laura Perez-Beltrachini proposed and discussed ideas of interest for the data-to-text and the text-to-text tasks. Simon Mille coordinated the group. Emiel van Miltenburg, Saad Mahamood, and Simon Mille worked on the creation of the data-to-text datasets, while Varun Prashant Gangal, Kaustubh Dhole and Laura Perez-Beltrachini worked on the text-to-text datasets. Sebastian Gehrmann contributed the ToTTo challenge set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "Crowdsourcing New Data. Chris Emezue, Rubungo Andre Niyongabo, Aremu Anuoluwapo, Khyathi Chandu, Yufang Hou, Samira Shaikh, Varun Prashant Gangal, and Dimitra Gkatzia are members of this group. Khyathi Chandu worked on identifying where the current datasets fall short to motivate the crowdsourcing of data for a new task. Based on the suggestions from collaborators, she wrote two task proposals in the domains of longform text, conversations, and data-to-text that address an array of challenges in generation and easily scale to multiple languages. Samira Shaikh participated in the discussions and gave feedback on the task proposals in the pilot study phase. Dimitra Gkatzia looked into potential resources for crowdsourcing. Chris Emezue and Rubungo Andre Niyongabo explored potential low-resource African languages for crowdsourcing. We are in the process of piloting the tasks internally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Beno\u00eet Sagot, and Lucia Specia. 2020. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4668-4679, Online. Association for Computational Linguistics. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": null
},
{
"text": "https://dynabench.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For a more complete description of recent developments in NLG evaluation, we refer to the survey by.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google/BIG-bench",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Consider the criterion \"We need equal representation of large and small datasets\" under the constraint that only two datasets can be selected. If we have two large datasets with utility 10, and one small one with utility 5, we may want to include the smaller dataset over the second large dataset to satisfy the criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our template extends and restructures that from Hugging Face Datasets and along with a guide can be found at https: //gem-benchmark.com/data_cards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the implementation provided by Lu (2010).11 To collect this data we use the scripts provided for the re-creation of MLSum and XSum datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An initial version showcasing our baseline results is deployed on our website.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This approach has been successfully used by WMT for many years. See, e.g., http://www.statmt.org/ wmt20/translation-task.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.kaggle.com/shishu1421/hindi-poetrydataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors of this paper not named in the groups participated in initial discussions, participated in the surveys, and provided regular feedback and guidance. Many participants commented on and helped write this paper. We additionally thank all participants of INLG 2019, the Generation Birdsof-a-Feather meeting at ACL 2020, the EvalNL-GEval Workshop at INLG 2020, and members of the generation challenge mailing list of SIGGEN for their participation in the discussions that inspired and influenced the creation of GEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "Participants were required to provide information for the following categories when suggesting a dataset for GEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Task Suggestion Categories",
"sec_num": null
},
{
"text": "As part of our selection process, we queried all GEM members about the utility of tasks and selection criteria. The questions below were included in the survey.\u2022 For each suggested task, \"Should this task be included in GEM?\" on a 5-point Likert scale (1 being strongly against, and 5 strongly in favor).\u2022 We should exclude tasks that are the focus of a shared task in 2021. [yes/no] \u2022 We should exclude tasks that were the focus of a shared task since 2020. [yes/no] \u2022 We should exclude tasks that were ever part of a shared task. ",
"cite_spans": [
{
"start": 375,
"end": 383,
"text": "[yes/no]",
"ref_id": null
},
{
"start": 459,
"end": 467,
"text": "[yes/no]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Task and Criteria Selection Survey",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "STO-RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation",
"authors": [
{
"first": "Nader",
"middle": [],
"last": "Akoury",
"suffix": ""
},
{
"first": "Shufan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Whiting",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Hood",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6470--6484",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.525"
]
},
"num": null,
"urls": [],
"raw_text": "Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STO- RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470-6484, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The first surface realisation shared task: Overview and evaluation results",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evalu- ation results. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 217-226, Nancy, France. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Disentangling the properties of human evaluation methods: A classification system to support comparability, meta-evaluation and reproducibility testing",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 13th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "183--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz, Simon Mille, and David M. Howcroft. 2020. Disentangling the properties of human eval- uation methods: A classification system to support comparability, meta-evaluation and reproducibility testing. In Proceedings of the 13th International Conference on Natural Language Generation, pages 183-194, Dublin, Ireland. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The #benderrule: On naming the languages we study and why it matters. The Gradient",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Bender. 2019. The #benderrule: On naming the languages we study and why it matters. The Gradi- ent.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Results of the WMT17 metrics shared task",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "489--513",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4755"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "199--231",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2302"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Yvette Graham, Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016. Results of the WMT16 met- rics shared task. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 199-231, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluation of text generation: A survey. CoRR, abs",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. CoRR, abs/2006.14799.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A discourse-aware attention model for abstractive summarization of long documents",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Soon",
"middle": [],
"last": "Doo",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Seokhwan",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "615--621",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2097"
]
},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Na- zli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long docu- ments. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "How complex is that sentence? a proposed revision of the rosenberg and abbeduto d-level scale",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
},
{
"first": "Congzhou",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Cati",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Lorina",
"middle": [],
"last": "Naci",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A Covington, Congzhou He, Cati Brown, Lo- rina Naci, and John Brown. 2006. How complex is that sentence? a proposed revision of the rosenberg and abbeduto d-level scale.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bringing the people back in: Contesting benchmark machine learning datasets",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Amironesei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Smart",
"suffix": ""
},
{
"first": "Hilary",
"middle": [],
"last": "Nicole",
"suffix": ""
},
{
"first": "Morgan",
"middle": [
"Klaus"
],
"last": "Scheuerman",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Denton, Alex Hanna, Razvan Amironesei, An- drew Smart, Hilary Nicole, and Morgan Klaus Scheuerman. 2020. Bringing the people back in: Contesting benchmark machine learning datasets. CoRR, abs/2007.07399.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Handling divergent reference texts when evaluating table-to-text generation",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4884--4895",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1483"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Co- hen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mimic and rephrase: Reflective listening in open-ended dialogue",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Dieter",
"suffix": ""
},
{
"first": "Tian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Tejasvi Chaganty",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "393--403",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1037"
]
},
"num": null,
"urls": [],
"raw_text": "Justin Dieter, Tian Wang, Arun Tejasvi Chaganty, Ga- bor Angeli, and Angel X. Chang. 2019. Mimic and rephrase: Reflective listening in open-ended dialogue. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 393-403, Hong Kong, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations, ICLR 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In 7th International Conference on Learn- ing Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning to ask: Neural question generation for reading comprehension",
"authors": [
{
"first": "Xinya",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Junru",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1342--1352",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1123"
]
},
"num": null,
"urls": [],
"raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1342-1352, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ORB: An open reading benchmark for comprehensive evaluation of machine reading comprehension",
"authors": [
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Ananth",
"middle": [],
"last": "Gottumukkala",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP 2019 MRQA Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, and Matt Gardner. 2019. ORB: An open reading benchmark for comprehensive evalua- tion of machine reading comprehension. In EMNLP 2019 MRQA Workshop, page 147.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization",
"authors": [
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5055--5070",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.454"
]
},
"num": null,
"urls": [],
"raw_text": "Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faith- fulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5055- 5070, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic noise matters for neural natural language generation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "421--426",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8652"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural lan- guage generation. In Proceedings of the 12th Inter- national Conference on Natural Language Genera- tion, pages 421-426, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A contextaware natural language generation dataset for dialogue systems",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jurc\u0131cek",
"suffix": ""
}
],
"year": 2016,
"venue": "RE-WOCHAT: Workshop on Collecting and Generating Resources for Chatbots and Conversational Agents-Development and Evaluation Workshop Programme",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Du\u0161ek and Filip Jurc\u0131cek. 2016. A context- aware natural language generation dataset for di- alogue systems. In RE-WOCHAT: Workshop on Collecting and Generating Resources for Chatbots and Conversational Agents-Development and Eval- uation Workshop Programme (May 28 th, 2016), page 6.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A contextaware natural language generator for dialogue systems",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "185--190",
"other_ids": {
"DOI": [
"10.18653/v1/W16-3622"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016a. A context- aware natural language generator for dialogue sys- tems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 185-190, Los Angeles. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "45--51",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2008"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016b. Sequence-to- sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 45-51, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural generation for Czech: Data and baselines",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "563--574",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8670"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2019. Neural gener- ation for Czech: Data and baselines. In Proceed- ings of the 12th International Conference on Nat- ural Language Generation, pages 563-574, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Dusek",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2020,
"venue": "Comput. Speech Lang",
"volume": "59",
"issue": "",
"pages": "123--156",
"other_ids": {
"DOI": [
"10.1016/j.csl.2019.06.009"
]
},
"num": null,
"urls": [],
"raw_text": "Ondrej Dusek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG chal- lenge. Comput. Speech Lang., 59:123-156.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Utility is in the eye of the user: A critique of NLP leaderboards",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4846--4853",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.393"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4846-4853, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Question answering as an automatic evaluation metric for news article summarization",
"authors": [
{
"first": "Matan",
"middle": [],
"last": "Eyal",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Baumel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3938--3948",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1395"
]
},
"num": null,
"urls": [],
"raw_text": "Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation met- ric for news article summarization. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938-3948, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SummEval: Reevaluating summarization evaluation. CoRR, abs",
"authors": [
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Kryscinski",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2020. SummEval: Re- evaluating summarization evaluation. CoRR, abs/2007.12626.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "ELI5: Long form question answering",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3558--3567",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1346"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3558-3567, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Diego Moussalem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)",
"authors": [
{
"first": "Thiago",
"middle": [
"Castro"
],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Ilinykh",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Moussalem",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Castro Ferreira, Claire Gardent, Chris van der Lee, Nikolai Ilinykh, Simon Mille, Diego Mous- salem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020). In Proceedings of the 3rd WebNLG Workshop on Nat- ural Language Generation from the Semantic Web (WebNLG+ 2020), Dublin, Ireland (Virtual). Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Alp \u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of the Association for Computational Linguistics: EMNLP 2020",
"authors": [
{
"first": "\u2200",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Wilhelmina",
"middle": [],
"last": "Nekoto",
"suffix": ""
},
{
"first": "Vukosi",
"middle": [],
"last": "Marivate",
"suffix": ""
},
{
"first": "Tshinondiwa",
"middle": [],
"last": "Matsila",
"suffix": ""
},
{
"first": "Timi",
"middle": [],
"last": "Fasubaa",
"suffix": ""
},
{
"first": "Taiwo",
"middle": [],
"last": "Fagbohungbe",
"suffix": ""
},
{
"first": "Solomon",
"middle": [
"Oluwole"
],
"last": "Akinola",
"suffix": ""
},
{
"first": "Shamsuddeen",
"middle": [],
"last": "Muhammad",
"suffix": ""
},
{
"first": "Salomon",
"middle": [
"Kabongo"
],
"last": "Kabenamualu",
"suffix": ""
},
{
"first": "Salomey",
"middle": [],
"last": "Osei",
"suffix": ""
},
{
"first": "Freshia",
"middle": [],
"last": "Sackey",
"suffix": ""
},
{
"first": "Rubungo",
"middle": [
"Andre"
],
"last": "Niyongabo",
"suffix": ""
},
{
"first": "Ricky",
"middle": [],
"last": "Macharm",
"suffix": ""
},
{
"first": "Perez",
"middle": [],
"last": "Ogayo",
"suffix": ""
},
{
"first": "Orevaoghene",
"middle": [],
"last": "Ahia",
"suffix": ""
},
{
"first": "Musie",
"middle": [
"Meressa"
],
"last": "Berhe",
"suffix": ""
},
{
"first": "Mofetoluwa",
"middle": [],
"last": "Adeyemi",
"suffix": ""
},
{
"first": "Masabata",
"middle": [],
"last": "Mokgesi-Selinga",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Okegbemi",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Martinus",
"suffix": ""
},
{
"first": "Kolawole",
"middle": [],
"last": "Tajudeen",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Degila",
"suffix": ""
},
{
"first": "Kelechi",
"middle": [],
"last": "Ogueji",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Siminyu",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Jamiil",
"middle": [
"Toure"
],
"last": "Ali",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Iroro",
"middle": [],
"last": "Orife",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2144--2160",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.195"
]
},
"num": null,
"urls": [],
"raw_text": "\u2200, Wilhelmina Nekoto, Vukosi Marivate, Tshi- nondiwa Matsila, Timi Fasubaa, Taiwo Fagbo- hungbe, Solomon Oluwole Akinola, Shamsud- deen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Ore- vaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulka- dir Dangana, Herman Kamper, Hady Elsahar, Good- ness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefu- luchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ay- odele Olabiyi, Arshath Ramkilowan, Alp \u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 2144-2160, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "BLEU might be guilty but references are not innocent",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "61--71",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.5"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 61-71, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Interpretable multi-dataset evaluation for named entity recognition",
"authors": [
{
"first": "Jinlan",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6058--6069",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.489"
]
},
"num": null,
"urls": [],
"raw_text": "Jinlan Fu, Pengfei Liu, and Graham Neubig. 2020. In- terpretable multi-dataset evaluation for named entity recognition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 6058-6069, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Go figure! A meta evaluation of factuality in summarization",
"authors": [
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao. 2020. Go figure! A meta evaluation of factuality in summarization. CoRR, abs/2010.12834.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The WebNLG challenge: Generating text from RDF data",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "124--133",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3518"
]
},
"num": null,
"urls": [],
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, pages 124-133, San- tiago de Compostela, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2018,
"venue": "J. Artif. Intell. Res",
"volume": "61",
"issue": "",
"pages": "65--170",
"other_ids": {
"DOI": [
"10.1613/jair.5477"
]
},
"num": null,
"urls": [],
"raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. J. Artif. Intell. Res., 61:65-170.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Datasheets for datasets",
"authors": [
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Morgenstern",
"suffix": ""
},
{
"first": "Briana",
"middle": [],
"last": "Vecchione",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Wortman"
],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on Fairness, Accountability, and Transparency in Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for datasets. In Proceedings of the Fifth Workshop on Fairness, Accountability, and Transparency in Ma- chine Learning, Stockholm, Sweden.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unifying human and statistical evaluation for natural language generation",
"authors": [
{
"first": "Tatsunori",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Hugh",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1689--1701",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1169"
]
},
"num": null,
"urls": [],
"raw_text": "Tatsunori Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Findings of the fourth workshop on neural generation and translation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/2020.ngt-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioan- nis Konstas, Andrew Finch, Graham Neubig, Xian Li, and Alexandra Birch. 2020. Findings of the fourth workshop on neural generation and transla- tion. In Proceedings of the Fourth Workshop on Neu- ral Generation and Translation, pages 1-9, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Infor- mation Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Miruna-Adriana",
"middle": [],
"last": "Clinciu",
"suffix": ""
},
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Van Miltenburg",
"suffix": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Santhanam",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 13th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "169--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised def- initions. In Proceedings of the 13th International Conference on Natural Language Generation, pages 169-182, Dublin, Ireland. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "LC-STS: A large scale Chinese short text summarization dataset",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fangze",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1967--1972",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1229"
]
},
"num": null,
"urls": [],
"raw_text": "Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. LC- STS: A large scale Chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967-1972, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "2020",
"issue": "",
"pages": "4411--4421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi- task benchmark for evaluating cross-lingual gener- alisation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13- 18 July 2020, Virtual Event, volume 119 of Proceed- ings of Machine Learning Research, pages 4411- 4421. PMLR.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The plane with parallel coordinates",
"authors": [
{
"first": "Alfred",
"middle": [],
"last": "Inselberg",
"suffix": ""
}
],
"year": 1985,
"venue": "Vis. Comput",
"volume": "1",
"issue": "2",
"pages": "69--91",
"other_ids": {
"DOI": [
"10.1007/BF01898350"
]
},
"num": null,
"urls": [],
"raw_text": "Alfred Inselberg. 1985. The plane with parallel coordi- nates. Vis. Comput., 1(2):69-91.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Neural CRF model for sentence alignment in text simplification",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mounica",
"middle": [],
"last": "Maddela",
"suffix": ""
},
{
"first": "Wuwei",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7943--7960",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.709"
]
},
"num": null,
"urls": [],
"raw_text": "Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7943- 7960, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Bangla Natural Language Image to Text (BNLIT)",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.7910/DVN/DZZ1ZB"
]
},
"num": null,
"urls": [],
"raw_text": "Md. Asifuzzaman Jishan, Khan Raqib Mahmud, and Abul Kalam Al Azad. 2019. Bangla Natural Lan- guage Image to Text (BNLIT).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Studies in language behavior: A program of research",
"authors": [
{
"first": "Wendell",
"middle": [
"Johnson"
],
"last": "",
"suffix": ""
}
],
"year": 1944,
"venue": "Psychological Monographs",
"volume": "56",
"issue": "2",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wendell Johnson. 1944. Studies in language behavior: A program of research. Psychological Monographs, 56(2):1-15.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "The state and fate of linguistic diversity and inclusion in the NLP world",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sebastin",
"middle": [],
"last": "Santy",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Budhiraja",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6282--6293",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.560"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Few-shot natural language generation by rewriting templates",
"authors": [
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.15006"
]
},
"num": null,
"urls": [],
"raw_text": "Mihir Kale and Abhinav Rastogi. 2020. Few-shot natural language generation by rewriting templates. arXiv preprint arXiv:2004.15006.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Pelkins Ajanoh, and Mohamed Coulibali. 2020. NU-BIA: NeUral based interchangeability assessor for text generation",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Kane",
"suffix": ""
},
{
"first": "Yusuf",
"middle": [],
"last": "Muhammed",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Kocyigit",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Abdalla",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 1st Workshop on Evaluating NLG Evaluation",
"volume": "",
"issue": "",
"pages": "28--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Kane, Muhammed Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh, and Mohamed Coulibali. 2020. NU- BIA: NeUral based interchangeability assessor for text generation. In Proceedings of the 1st Work- shop on Evaluating NLG Evaluation, pages 28-37, Online (Dublin, Ireland). Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Content selection in deep learning models of summarization",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Kedzie",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1818--1828",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1208"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Kedzie, Kathleen McKeown, and Hal Daum\u00e9 III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 1818-1828, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "GENIE: A leaderboard for human-in-the-loop evaluation of text generation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Bragg",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel S. Weld. 2021. GENIE: A leader- board for human-in-the-loop evaluation of text gen- eration. CoRR, abs/2101.06561.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "The narrativeQA reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The narrativeQA read- ing comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317- 328.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization",
"authors": [
{
"first": "Faisal",
"middle": [],
"last": "Ladhak",
"suffix": ""
},
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4034--4048",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.360"
]
},
"num": null,
"urls": [],
"raw_text": "Faisal Ladhak, Esin Durmus, Claire Cardie, and Kath- leen McKeown. 2020. WikiLingua: A new bench- mark dataset for cross-lingual abstractive summa- rization. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 4034- 4048, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Neural text generation from structured data with application to the biography domain",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Lebret",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1203--1213",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1128"
]
},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "MLQA: Evaluating cross-lingual extractive question answering",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7315--7330",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.653"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020b. MLQA: Evalu- ating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7315- 7330, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Visual question generation as dual task of visual question answering",
"authors": [
{
"first": "Yikang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Bolei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Wanli",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Xiaogang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6116--6124",
"other_ids": {
"DOI": [
"10.1109/CVPR.2018.00640"
]
},
"num": null,
"urls": [],
"raw_text": "Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, and Ming Zhou. 2018. Vi- sual question generation as dual task of visual ques- tion answering. In 2018 IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 6116-6124. IEEE Computer Society.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Shuguang Liu, Fan Yang, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training",
"authors": [
{
"first": "Yaobo",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fenfei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Sining",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Taroon",
"middle": [],
"last": "Bharti",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Jiun-Hung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Winnie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shuguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fen- fei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. CoRR, abs/2004.01401.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "CommonGen: A constrained text generation challenge for generative commonsense reasoning",
"authors": [
{
"first": "Wangchunshu",
"middle": [],
"last": "Bill Yuchen Lin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1823--1840",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.165"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text gen- eration challenge for generative commonsense rea- soning. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 1823-1840, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "How can we accelerate progress towards human-like linguistic generalization?",
"authors": [
{
"first": "",
"middle": [],
"last": "Tal Linzen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5210--5217",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.465"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Linzen. 2020. How can we accelerate progress to- wards human-like linguistic generalization? In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5210- 5217, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "GLGE: A new general language generation evaluation benchmark",
"authors": [
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Jiao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiusheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jiancheng",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Winnie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, and Nan Duan. 2020a. GLGE: A new general language generation evaluation bench- mark. CoRR, abs/2011.11928.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Generating wikipedia by summarizing long sequences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Pot",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Goodrich",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations, ICLR 2018, Vancouver",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In 6th International Conference on Learning Representations, ICLR 2018, Vancou- ver, BC, Canada, April 30 -May 3, 2018, Confer- ence Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020b. Multilingual denoising pre-training for neural machine translation. Trans. Assoc. Comput. Linguistics, 8:726-742.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "285--294",
"other_ids": {
"DOI": [
"10.18653/v1/W15-4640"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294, Prague, Czech Re- public. Association for Computational Linguistics.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Automatic analysis of syntactic complexity in second language writing",
"authors": [
{
"first": "Xiaofei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "International Journal of Corpus Linguistics",
"volume": "15",
"issue": "4",
"pages": "474--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofei Lu. 2010. Automatic analysis of syntactic com- plexity in second language writing. International Journal of Corpus Linguistics, 15(4):474-496.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "671--688",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6450"
]
},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Ond\u0159ej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good perfor- mance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671-688, Belgium, Brussels. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "62--90",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5302"
]
},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT sys- tems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "A human evaluation of amr-to-english generation systems",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Shira",
"middle": [],
"last": "Wein",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "4773--4786",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.420"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Manning, Shira Wein, and Nathan Schneider. 2020. A human evaluation of amr-to-english gen- eration systems. In Proceedings of the 28th In- ternational Conference on Computational Linguis- tics, COLING 2020, Barcelona, Spain (Online), De- cember 8-13, 2020, pages 4773-4786. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4984--4997",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.448"
]
},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020a. Tangled up in BLEU: Reevaluating the eval- uation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Qingsong Ma, and Ond\u0159ej Bojar. 2020b. Results of the WMT20 metrics shared task",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "688--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Johnny Wei, Markus Freitag, Qing- song Ma, and Ond\u0159ej Bojar. 2020b. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "On faithfulness and factuality in abstractive summarization",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Maynez",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1906--1919",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.173"
]
},
"num": null,
"urls": [],
"raw_text": "Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "The natural language decathlon: Multitask learning as question answering",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. CoRR, abs/1806.08730.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "The first multilingual surface realisation shared task (SR'18): Overview and evaluation results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Multilingual Surface Realisation",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3601"
]
},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Gra- ham, Emily Pitler, and Leo Wanner. 2018. The first multilingual surface realisation shared task (SR'18): Overview and evaluation results. In Proceedings of the First Workshop on Multilingual Surface Realisa- tion, pages 1-12, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Gra- ham, and Leo Wanner, editors. 2019. Proceedings of the 2nd Workshop on Multilingual Surface Real- isation (MSR 2019). Association for Computational Linguistics, Hong Kong, China.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "The third multilingual surface realisation shared task (SR'20): Overview and evaluation results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Thiago",
"middle": [],
"last": "Castro Ferreira",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Multilingual Surface Realisation",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Thiago Cas- tro Ferreira, Yvette Graham, and Leo Wanner. 2020. The third multilingual surface realisation shared task (SR'20): Overview and evaluation results. In Pro- ceedings of the Third Workshop on Multilingual Sur- face Realisation, pages 1-20, Barcelona, Spain (On- line). Association for Computational Linguistics.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Measuring the diversity of automatic image descriptions",
"authors": [
{
"first": "Emiel",
"middle": [],
"last": "van Miltenburg",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1730--1741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emiel van Miltenburg, Desmond Elliott, and Piek Vossen. 2018. Measuring the diversity of automatic image descriptions. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 1730-1741, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "AmbigQA: Answering ambiguous open-domain questions",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5783--5797",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.466"
]
},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering am- biguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 5783- 5797, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Personalized machine translation: Predicting translational preferences",
"authors": [
{
"first": "Shachar",
"middle": [],
"last": "Mirkin",
"suffix": ""
},
{
"first": "Jean-Luc",
"middle": [],
"last": "Meunier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2019--2025",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1238"
]
},
"num": null,
"urls": [],
"raw_text": "Shachar Mirkin and Jean-Luc Meunier. 2015. Person- alized machine translation: Predicting translational preferences. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 2019-2025, Lisbon, Portugal. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Model cards for model reporting",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zaldivar",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Spitzer",
"suffix": ""
},
{
"first": "Inioluwa",
"middle": [
"Deborah"
],
"last": "Raji",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the conference on fairness, accountability, and transparency",
"volume": "",
"issue": "",
"pages": "220--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, account- ability, and transparency, pages 220-229.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7aglar Gu\u00cc \u2021l\u00e7ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1797--1807",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1206"
]
},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "The E2E dataset: New challenges for endto-end generation",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "201--206",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5525"
]
},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2017. The E2E dataset: New challenges for end- to-end generation. In Proceedings of the 18th An- nual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbr\u00fccken, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Towards a decomposable metric for explainable evaluation of text generation from amr",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.08896"
]
},
"num": null,
"urls": [],
"raw_text": "Juri Opitz and Anette Frank. 2020. Towards a decom- posable metric for explainable evaluation of text gen- eration from amr. arXiv preprint arXiv:2008.08896.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "ToTTo: A controlled table-totext generation dataset",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Xuezhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1173--1186",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.89"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to- text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1173-1186, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Analysing data-to-text generation benchmarks",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "238--242",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3537"
]
},
"num": null,
"urls": [],
"raw_text": "Laura Perez-Beltrachini and Claire Gardent. 2017. Analysing data-to-text generation benchmarks. In Proceedings of the 10th International Conference on Natural Language Generation, pages 238-242, San- tiago de Compostela, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "Vassilis Plachouras, Tim Rockt\u00e4schel, and Sebastian Riedel. 2020. KILT: a benchmark for knowledge intensive language tasks",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"S",
"H"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Majid",
"middle": [],
"last": "Yazdani",
"suffix": ""
},
{
"first": "Nicola",
"middle": [
"De"
],
"last": "Cao",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Vassilis",
"middle": [],
"last": "Plachouras",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick S. H. Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vassilis Plachouras, Tim Rockt\u00e4schel, and Sebastian Riedel. 2020. KILT: a benchmark for knowledge intensive language tasks. CoRR, abs/2009.02252.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "XCOPA: A multilingual dataset for causal commonsense reasoning",
"authors": [
{
"first": "Edoardo",
"middle": [
"Maria"
],
"last": "Ponti",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Majewska",
"suffix": ""
},
{
"first": "Qianchu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2362--2376",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.185"
]
},
"num": null,
"urls": [],
"raw_text": "Edoardo Maria Ponti, Goran Glava\u0161, Olga Majewska, Qianchu Liu, Ivan Vuli\u0107, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal common- sense reasoning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "Dynasent: A dynamic benchmark for sentiment analysis",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Atticus",
"middle": [],
"last": "Geiger",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2020. Dynasent: A dy- namic benchmark for sentiment analysis. CoRR, abs/2012.15349.",
"links": null
},
"BIBREF94": {
"ref_id": "b94",
"title": "Data-to-text generation with entity modeling",
"authors": [
{
"first": "Ratish",
"middle": [],
"last": "Puduppully",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2023--2035",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1195"
]
},
"num": null,
"urls": [],
"raw_text": "Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with entity modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2023-2035, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF95": {
"ref_id": "b95",
"title": "DART: open-domain structured data record to text generation",
"authors": [
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Amrit",
"middle": [],
"last": "Rau",
"suffix": ""
},
{
"first": "Abhinand",
"middle": [],
"last": "Sivaprasad",
"suffix": ""
},
{
"first": "Chiachun",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [
"Fatema"
],
"last": "Rajani",
"suffix": ""
},
{
"first": "Xiangru",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Aadit",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Neha",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Yangxiaokang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nadia",
"middle": [],
"last": "Irwanto",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Faiaz",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [],
"last": "Zaidi",
"suffix": ""
},
{
"first": "Murori",
"middle": [],
"last": "Mutuma",
"suffix": ""
},
{
"first": "Yasin",
"middle": [],
"last": "Tarabar",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yi",
"middle": [
"Chern"
],
"last": "Tan",
"suffix": ""
},
{
"first": "Xi",
"middle": [
"Victoria"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir R. Radev, Rui Zhang, Amrit Rau, Abhi- nand Sivaprasad, Chiachun Hsieh, Nazneen Fatema Rajani, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Murori Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, and Richard Socher. 2020. DART: open-domain structured data record to text generation. CoRR, abs/2007.02871.",
"links": null
},
"BIBREF96": {
"ref_id": "b96",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "J. Mach. Learn. Res",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. J. Mach. Learn. Res., 21:140:1-140:67.",
"links": null
},
"BIBREF97": {
"ref_id": "b97",
"title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "129--140",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Cor- pus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 129-140, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF98": {
"ref_id": "b98",
"title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Xiaoxue",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sunkara",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Khaitan",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "8689--8696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In The Thirty- Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Appli- cations of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8689- 8696. AAAI Press.",
"links": null
},
"BIBREF99": {
"ref_id": "b99",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00266"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.",
"links": null
},
"BIBREF100": {
"ref_id": "b100",
"title": "A structured review of the validity of BLEU",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Comput. Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/coli_a_00322"
]
},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Comput. Linguistics, 44(3).",
"links": null
},
"BIBREF101": {
"ref_id": "b101",
"title": "Building natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press.",
"links": null
},
"BIBREF102": {
"ref_id": "b102",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF103": {
"ref_id": "b103",
"title": "Beyond leaderboards: A survey of methods for revealing weaknesses in natural language inference data and models. CoRR, abs",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Schlegel",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Nenadic",
"suffix": ""
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Schlegel, Goran Nenadic, and Riza Batista- Navarro. 2020. Beyond leaderboards: A sur- vey of methods for revealing weaknesses in natu- ral language inference data and models. CoRR, abs/2005.14709.",
"links": null
},
"BIBREF104": {
"ref_id": "b104",
"title": "MLSUM: The multilingual summarization corpus",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Scialom",
"suffix": ""
},
{
"first": "Paul-Alexis",
"middle": [],
"last": "Dray",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Piwowarski",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Staiano",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8051--8067",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.647"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The multilingual summarization corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8051-8067, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF105": {
"ref_id": "b105",
"title": "Safeval: Summarization asks for fact-based evaluation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Scialom",
"suffix": ""
},
{
"first": "Paul-Alexis",
"middle": [],
"last": "Dray",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Gallinari",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Piwowarski",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Staiano",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.12693"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Scialom, Paul-Alexis Dray, Gallinari Patrick, Lamprier Sylvain, Piwowarski Benjamin, Staiano Ja- copo, and Wang Alex. 2021. Safeval: Summariza- tion asks for fact-based evaluation. arXiv preprint arXiv:2103.12693.",
"links": null
},
"BIBREF106": {
"ref_id": "b106",
"title": "Answers unite! unsupervised metrics for reinforced summarization models",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Scialom",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Piwowarski",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Staiano",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3246--3256",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1320"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Scialom, Sylvain Lamprier, Benjamin Pi- wowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summa- rization models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3246-3256, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF107": {
"ref_id": "b107",
"title": "BLEURT: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7881--7892",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.704"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF108": {
"ref_id": "b108",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "Claude",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
},
{
"first": "Warren",
"middle": [],
"last": "Weaver",
"suffix": ""
}
],
"year": 1963,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude E Shannon and Warren Weaver. 1963. A math- ematical theory of communication.",
"links": null
},
"BIBREF109": {
"ref_id": "b109",
"title": "BIG-PATENT: A large-scale dataset for abstractive and coherent summarization",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2204--2213",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1212"
]
},
"num": null,
"urls": [],
"raw_text": "Eva Sharma, Chen Li, and Lu Wang. 2019. BIG- PATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2204-2213, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF110": {
"ref_id": "b110",
"title": "The human evaluation datasheet 1.0: A template for recording details of human evaluation experiments in NLP",
"authors": [
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.09710"
]
},
"num": null,
"urls": [],
"raw_text": "Anastasia Shimorina and Anya Belz. 2021. The hu- man evaluation datasheet 1.0: A template for record- ing details of human evaluation experiments in nlp. arXiv preprint arXiv:2103.09710.",
"links": null
},
"BIBREF111": {
"ref_id": "b111",
"title": "What should I ask? using conversationally informative rewards for goal-oriented visual dialog",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Elmadjian",
"suffix": ""
},
{
"first": "Richika",
"middle": [],
"last": "Sharan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Turk",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6442--6451",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1646"
]
},
"num": null,
"urls": [],
"raw_text": "Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, and William Yang Wang. 2019. What should I ask? using conversa- tionally informative rewards for goal-oriented visual dialog. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 6442-6451, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF112": {
"ref_id": "b112",
"title": "Results of the WMT15 metrics shared task",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "256--273",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3031"
]
},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107, Amir Kamran, Philipp Koehn, and Ond\u0159ej Bojar. 2015. Results of the WMT15 met- rics shared task. In Proceedings of the Tenth Work- shop on Statistical Machine Translation, pages 256- 273, Lisbon, Portugal. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF113": {
"ref_id": "b113",
"title": "How to compare summarizers without target length? pitfalls, solutions and re-examination of the neural summarization literature",
"authors": [
{
"first": "Simeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Shapira",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2303"
]
},
"num": null,
"urls": [],
"raw_text": "Simeng Sun, Ori Shapira, Ido Dagan, and Ani Nenkova. 2019. How to compare summarizers without target length? pitfalls, solutions and re-examination of the neural summarization literature. In Proceedings of the Workshop on Methods for Optimizing and Eval- uating Neural Language Generation, pages 21-29, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF114": {
"ref_id": "b114",
"title": "A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Ke",
"middle": [
"M"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Saleema",
"middle": [],
"last": "Amershi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "340--350",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1033"
]
},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Chris Brockett, Ke M. Tran, and Saleema Amershi. 2016. A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 340-350, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF115": {
"ref_id": "b115",
"title": "Asking and answering questions to evaluate the factual consistency of summaries",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5008--5020",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.450"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF116": {
"ref_id": "b116",
"title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3261--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Infor- mation Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3261-3275.",
"links": null
},
"BIBREF117": {
"ref_id": "b117",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF118": {
"ref_id": "b118",
"title": "Challenges in data-to-document generation",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2253--2263",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1239"
]
},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF119": {
"ref_id": "b119",
"title": "Unsupervised data augmentation for consistency training",
"authors": [
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmenta- tion for consistency training. Advances in Neural Information Processing Systems, 33.",
"links": null
},
"BIBREF120": {
"ref_id": "b120",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00107"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF121": {
"ref_id": "b121",
"title": "mT5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11934"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mi- hir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A mas- sively multilingual pre-trained text-to-text trans- former. CoRR, abs/2010.11934.",
"links": null
},
"BIBREF122": {
"ref_id": "b122",
"title": "MultiWOZ 2.2 : A dialogue dataset with additional annotation corrections and state tracking baselines",
"authors": [
{
"first": "Xiaoxue",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sunkara",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jindong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlp4convai-1.13"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020. MultiWOZ 2.2 : A dialogue dataset with additional annotation corrections and state tracking baselines. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 109-117, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF123": {
"ref_id": "b123",
"title": "PEGASUS: pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "2020",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. PEGASUS: pre-training with extracted gap-sentences for abstractive summariza- tion. In Proceedings of the 37th International Con- ference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.",
"links": null
},
"BIBREF124": {
"ref_id": "b124",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1205"
]
},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204- 2213, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF125": {
"ref_id": "b125",
"title": "BERTScore: Evaluating text generation with BERT",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF126": {
"ref_id": "b126",
"title": "Chinese poetry generation with recurrent neural networks",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1074"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang and Mirella Lapata. 2014. Chinese po- etry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670-680, Doha, Qatar. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF127": {
"ref_id": "b127",
"title": "DIALOGPT: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "270--278",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.30"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020c. DIALOGPT : Large- scale generative pre-training for conversational re- sponse generation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270- 278, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF128": {
"ref_id": "b128",
"title": "High-level Task, e.g., data-to-text, or summarization",
"authors": [],
"year": null,
"venue": "Dataset Name 2. Reference",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dataset Name 2. Reference 3. High-level Task, e.g., data-to-text, or summa- rization",
"links": null
},
"BIBREF129": {
"ref_id": "b129",
"title": "entity tracking/generation, referring expression generation, surface realization, content selection",
"authors": [
{
"first": "",
"middle": [],
"last": "Challenges",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Challenges, e.g., entity tracking/generation, referring expression generation, surface real- ization, content selection",
"links": null
},
"BIBREF130": {
"ref_id": "b130",
"title": "Communicative goal, e.g., provide specific information, or entertainment, or accomplish a task",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Communicative goal, e.g., provide specific information, or entertainment, or accomplish a task",
"links": null
},
"BIBREF131": {
"ref_id": "b131",
"title": "Wikipedia, or news articles",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dataset Domain, e.g., Wikipedia, or news arti- cles, Reddit chat, etc)",
"links": null
},
"BIBREF132": {
"ref_id": "b132",
"title": "Language(s)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language(s)",
"links": null
},
"BIBREF133": {
"ref_id": "b133",
"title": "en-US, es-MX 10. Input modality",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language locale (if known), e.g., en-US, es- MX 10. Input modality, e.g., text, graph, table, images 11. Input length 12. Output length 13. Output form, e.g., monologue, dialog",
"links": null
},
"BIBREF134": {
"ref_id": "b134",
"title": "# Examples in dataset Test split",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "# Examples in dataset Test split, e.g., i.i.d., or non-overlap dimension",
"links": null
},
"BIBREF135": {
"ref_id": "b135",
"title": "# References per example 16. Data Quality / potential Issues, e.g., noisy, clean, biased, code-mixing (different languages/writing systems)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "# References per example 16. Data Quality / potential Issues, e.g., noisy, clean, biased, code-mixing (differ- ent languages/writing systems), (over)-",
"links": null
},
"BIBREF136": {
"ref_id": "b136",
"title": "Evaluation strategies (in original paper / papers that use dataset)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaluation strategies (in original paper / pa- pers that use dataset)",
"links": null
},
"BIBREF137": {
"ref_id": "b137",
"title": "Alex Context NLG (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016)",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Context NLG (Du\u0161ek and Jurc\u0131cek, 2016; Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016a)",
"links": null
},
"BIBREF139": {
"ref_id": "b139",
"title": "Bangla Natural Language Image to Text",
"authors": [
{
"first": "",
"middle": [],
"last": "Jishan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bangla Natural Language Image to Text (Jis- han et al., 2019)",
"links": null
},
"BIBREF148": {
"ref_id": "b148",
"title": "2015) 14. Mimic and Rephrase",
"authors": [
{
"first": "",
"middle": [],
"last": "Lcsts (hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LCSTS (Hu et al., 2015) 14. Mimic and Rephrase (Dieter et al., 2019)",
"links": null
},
"BIBREF157": {
"ref_id": "b157",
"title": "SQUAD Question Generation",
"authors": [
{
"first": "",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SQUAD Question Generation (Du et al., 2017)",
"links": null
},
"BIBREF158": {
"ref_id": "b158",
"title": "SR'11, SR'18, SR'19",
"authors": [
{
"first": "",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SR'11, SR'18, SR'19 (Belz et al., 2011; Mille et al., 2018, 2019)",
"links": null
},
"BIBREF160": {
"ref_id": "b160",
"title": "Ubuntu Dialogue Generation",
"authors": [
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ubuntu Dialogue Generation (Lowe et al., 2015)",
"links": null
},
"BIBREF161": {
"ref_id": "b161",
"title": "Visual Question Generation",
"authors": [
{
"first": "",
"middle": [],
"last": "Shukla",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Visual Question Generation (Shukla et al., 2019; Li et al., 2018)",
"links": null
},
"BIBREF166": {
"ref_id": "b166",
"title": "Wizard of Wikipedia",
"authors": [
{
"first": "(",
"middle": [],
"last": "Wikisum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WikiSum (Liu et al., 2018) 32. Wizard of Wikipedia (Dinan et al., 2019)",
"links": null
},
"BIBREF167": {
"ref_id": "b167",
"title": "Writing Prompts",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Writing Prompts (Fan et al., 2018)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "A screenshot of the interactive result exploration tool. [Top Left] The selection of tasks, task-groups, or individual submissions. [Top Right] The selection of metric-groups or metrics. [Bottom] The parallel coordinates visualization of the selection, which can be filtered by brushing over a section of an individual metric, as shown here for BLEURT. Hovering over a line presents detailed information about the particular submission.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Varun Gangal and Miruna Clinciu are part of this group. Miruna Clinciu was primarily responsible for DART and Varun Gangal for ToTTo, while they maintained a close correspondence to ensure that all steps, such as code structure, preprocessing primitives, and baselines, were as uniform as possible.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Automated Evaluation. Ondrej Dusek wrote the base code and included BLEU, Meteor, ROUGE, and referenceless metrics (the latter based on code supplied by Emiel van Miltenburg). He also prepared reference sets for E2E, Czech Restaurants, and WebNLG. Sebastian Gehrmann included BLEURT and BERTScore and prepared their reference sets. Dhruv Kumar included SARI and adapted the code for source-based metrics. Nishant Subramani helped with code refactoring. Miruna Clinciu, Emiel van Miltenburg, and Thibault Sellam provided feedback and participated in discussions.",
"uris": null,
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"text": "An overview of the types of challenge sets for GEM. The first category are modifications to inputs of a model, the second category identifies contrast sets which are subsets of the original test set, and the third describes newly collected data.",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"text": "The set of baseline results we release alongside GEM with a focus on reference-based evaluation.",
"content": "<table><tr><td>Dataset</td><td>Model</td><td colspan=\"7\">Metrics (Diversity and System Characterization) MSTTR Distinct1 Distinct2 H1 H2 Unique1 Unique2 |V| Output Len.</td></tr><tr><td>CommonGen</td><td>BART T5</td><td>0.57 0.51</td><td>0.12 0.11</td><td>0.41 7.1 10.7 0.36 6.5 10.1</td><td>583 465</td><td colspan=\"2\">2.7k 1.2k 2.0k 1.0k</td><td>10.5 9.6</td></tr><tr><td/><td>mT5-small</td><td>0.51</td><td>0.04</td><td>0.1 6.2 7.8</td><td>86</td><td>278</td><td>287</td><td>10.2</td></tr><tr><td>Czech Restaurant</td><td>mT5-base</td><td>0.49</td><td>0.03</td><td>0.09 6.1 7.6</td><td>80</td><td>249</td><td>273</td><td>10.5</td></tr><tr><td/><td>mT5-large</td><td>0.57</td><td>0.05</td><td>0.13 6.6 8.4</td><td>103</td><td>387</td><td>361</td><td>10.1</td></tr><tr><td/><td>mT5-XL</td><td>0.6</td><td>0.06</td><td>0.19 6.8 9.0</td><td>146</td><td>614</td><td>438</td><td>9.5</td></tr><tr><td/><td>TGen</td><td>0.57</td><td>0.03</td><td>0.11 6.4 8.0</td><td>58</td><td>239</td><td>245</td><td>9.1</td></tr><tr><td/><td>TGen+</td><td>0.61</td><td>0.04</td><td>0.12 6.5 8.1</td><td>84</td><td>290</td><td>305</td><td>9.2</td></tr><tr><td/><td>TGen++</td><td>0.56</td><td>0.04</td><td>0.11 6.5 8.1</td><td>85</td><td>280</td><td>297</td><td>9.5</td></tr><tr><td>DART</td><td>BART T5</td><td>0.55 0.51</td><td>0.19 0.19</td><td>0.45 8.4 11.3 0.42 8.0 10.7</td><td>1.3k 1.2k</td><td colspan=\"2\">3.6k 2.4k 3.1k 2.1k</td><td>12.0 10.8</td></tr><tr><td/><td>BART</td><td>0.32</td><td>0.005</td><td>0.02 5.7 7.2</td><td>16</td><td>104</td><td>149</td><td>22.0</td></tr><tr><td>E2E clean</td><td>LSTM T5</td><td>0.31 0.30</td><td>0.004 0.004</td><td>0.02 5.6 7.1 0.01 5.6 6.9</td><td>19 7</td><td>106 60</td><td>139 125</td><td>23.1 23.0</td></tr><tr><td/><td>TGen</td><td>0.31</td><td>0.004</td><td>0.02 5.6 7.2</td><td>19</td><td>116</td><td>140</td><td>23.2</td></tr><tr><td>MLSum (de)</td><td>mBART</td><td>0.78</td><td>0.11</td><td>0.52 10.6 16.3</td><td>27k</td><td>166k</td><td>46k</td><td>35.7</td></tr><tr><td/><td>mT5-small</td><td>0.75</td><td>0.12</td><td>0.52 10.4 15.8</td><td colspan=\"3\">20.1k 113.8k 33.6k</td><td>24.7</td></tr><tr><td/><td>mT5-base</td><td>0.76</td><td>0.12</td><td>0.53 10.4 15.8</td><td colspan=\"3\">20.2k 113.0k 33.3k</td><td>24.2</td></tr><tr><td/><td>mT5-large</td><td>0.76</td><td>0.12</td><td>0.53 10.4 15.8</td><td colspan=\"3\">20.0k 114.0k 33.3k</td><td>24.4</td></tr><tr><td/><td>mT5-XL</td><td>0.77</td><td>0.12</td><td>0.53 10.4 15.8</td><td colspan=\"3\">20.0k 114.6k 33.3k</td><td>24.5</td></tr><tr><td>MLSum (es)</td><td>mBART</td><td>0.71</td><td>0.10</td><td>0.47 10.1 15.7</td><td>19k</td><td>120k</td><td>35k</td><td>32.3</td></tr><tr><td/><td>mT5-small</td><td>0.69</td><td>0.12</td><td>0.48 10.0 15.1</td><td>14.0k</td><td colspan=\"2\">77.6k 25.5k</td><td>21.7</td></tr><tr><td/><td>mT5-base</td><td>0.71</td><td>0.12</td><td>0.5 10.1 15.3</td><td>15.1k</td><td colspan=\"2\">85.2k 27.2k</td><td>23.0</td></tr><tr><td/><td>mT5-large</td><td>0.71</td><td>0.12</td><td>0.5 10.1 15.3</td><td>14.9k</td><td colspan=\"2\">82.0k 26.6k</td><td>22.1</td></tr><tr><td/><td>mT5-XL</td><td>0.72</td><td>0.12</td><td>0.5 10.1 15.3</td><td>14.8k</td><td colspan=\"2\">80.5k 26.1k</td><td>21.4</td></tr><tr><td>Schema-Guided</td><td>BART T5</td><td>0.56 0.67</td><td>0.02 0.03</td><td>0.06 7.0 9.2 0.10 7.9 10.6</td><td>1.8k 1.6k</td><td colspan=\"2\">6.2k 3.9k 5.8k 3.8k</td><td>22.0 11.8</td></tr><tr><td>ToTTo</td><td>T5</td><td>0.73</td><td>0.18</td><td>0.54 10.1 14.4</td><td>15k</td><td>60k</td><td>21k</td><td>15.3</td></tr><tr><td>XSum</td><td>PEGASUS</td><td>0.73</td><td>0.20</td><td>0.64 9.3 13.1</td><td>3.0k</td><td>13k</td><td>5k</td><td>22.9</td></tr><tr><td>WebNLG (en)</td><td>mBART</td><td>0.53</td><td>0.09</td><td>0.27 8.6 11.8</td><td>969</td><td colspan=\"2\">4.0k 3.2k</td><td>20.7</td></tr><tr><td/><td>mT5-small</td><td>0.5</td><td>0.09</td><td>0.25 8.6 11.8</td><td>864</td><td colspan=\"2\">3.9k 3.2k</td><td>22.7</td></tr><tr><td/><td>mT5-base</td><td>0.53</td><td>0.09</td><td>0.27 8.7 11.9</td><td>983</td><td colspan=\"2\">4.4k 3.3k</td><td>21.7</td></tr><tr><td/><td>mT5-large</td><td>0.54</td><td>0.09</td><td>0.29 8.7 12.0</td><td>1.1k</td><td colspan=\"2\">4.8k 3.4k</td><td>21.7</td></tr><tr><td/><td>mT5-XL</td><td>0.54</td><td>0.09</td><td>0.29 8.7 12.0</td><td>1.1k</td><td colspan=\"2\">4.8k 3.4k</td><td>21.6</td></tr><tr><td>WebNLG (ru)</td><td>mBART</td><td>0.46</td><td>0.08</td><td>0.20 8.1 10.3</td><td>334</td><td colspan=\"2\">1.1k 1.2k</td><td>18.9</td></tr><tr><td/><td>mT5-small</td><td>0.43</td><td>0.08</td><td>0.20 7.9 10.2</td><td>349</td><td colspan=\"2\">1.2k 1.2k</td><td>19.2</td></tr><tr><td/><td>mT5-base</td><td>0.47</td><td>0.09</td><td>0.23 8.2 10.7</td><td>482</td><td colspan=\"2\">1.6k 1.4k</td><td>19.9</td></tr><tr><td/><td>mT5-large</td><td>0.48</td><td>0.09</td><td>0.24 8.2 10.7</td><td>474</td><td colspan=\"2\">1.6k 1.4k</td><td>19.4</td></tr><tr><td/><td>mT5-XL</td><td>0.46</td><td>0.09</td><td>0.22 8.2 10.5</td><td>418</td><td colspan=\"2\">1.4k 1.3k</td><td>19.5</td></tr><tr><td>Turk</td><td>BART T5</td><td>0.73 0.73</td><td>0.23 0.22</td><td>0.74 9.8 14.1 0.72 9.9 14.2</td><td>5.5k 5.9k</td><td colspan=\"2\">23k 8.6k 25k 9.3k</td><td>18.4 20.1</td></tr><tr><td>ASSET</td><td>BART T5</td><td>0.73 0.73</td><td>0.23 0.22</td><td>0.73 9.8 14.1 0.72 9.9 14.2</td><td>5.9k 5.9k</td><td colspan=\"2\">24k 9.1k 26k 9.4k</td><td>20.1 21.3</td></tr><tr><td colspan=\"2\">WikiLingua (es\u2192en) mBART mBART+</td><td>0.55 0.58</td><td>0.03 0.03</td><td>0.19 8.8 14.0 0.21 9.1 14.5</td><td>4.7k 5.9k</td><td>63k 83k</td><td>15k 18k</td><td>29.4 32.5</td></tr><tr><td/><td>mT5-small</td><td>0.39</td><td>0.03</td><td>0.15 8.3 12.8</td><td>2.3k</td><td colspan=\"2\">20.9k 8.2k</td><td>31.8</td></tr><tr><td/><td>mT5-base</td><td>0.52</td><td>0.04</td><td>0.23 8.7 13.7</td><td>3.5k</td><td colspan=\"2\">34.4k 10.3k</td><td>28.7</td></tr><tr><td/><td>mT5-large</td><td>0.57</td><td>0.04</td><td>0.26 8.9 14.0</td><td>4.2k</td><td colspan=\"2\">44.4k 11.7k</td><td>30.8</td></tr><tr><td/><td>mT5-XL</td><td>0.6</td><td>0.04</td><td>0.29 9.1 14.4</td><td>5.0k</td><td colspan=\"2\">57.7k 13.5k</td><td>34.7</td></tr><tr><td colspan=\"2\">WikiLingua (ru\u2192en) mBART mBART+</td><td>0.54 0.55</td><td>0.04 0.04</td><td>0.20 8.5 13.3 0.23 8.8 13.7</td><td>2.8k 3.5k</td><td colspan=\"2\">28k 8.7k 35k 10k</td><td>27.3 28.4</td></tr><tr><td/><td>mT5-small</td><td>0.4</td><td>0.04</td><td>0.19 8.2 12.6</td><td>1.5k</td><td colspan=\"2\">11.6k 5.5k</td><td>31.8</td></tr><tr><td/><td>mT5-base</td><td>0.55</td><td>0.06</td><td>0.3 8.6 13.4</td><td>2.5k</td><td colspan=\"2\">21.0k 7.1k</td><td>28.7</td></tr><tr><td/><td>mT5-large</td><td>0.59</td><td>0.06</td><td>0.32 8.7 13.6</td><td>3.0k</td><td colspan=\"2\">26.1k 7.9k</td><td>31.1</td></tr><tr><td/><td>mT5-XL</td><td>0.6</td><td>0.07</td><td>0.35 8.8 13.8</td><td>3.4k</td><td colspan=\"2\">29.0k 8.5k</td><td>31.4</td></tr><tr><td colspan=\"2\">WikiLingua (tr\u2192en) mBART mBART+</td><td>0.45 0.52</td><td>0.08 0.12</td><td>0.28 7.7 11.2 0.38 8.0 11.9</td><td>743 1.2k</td><td colspan=\"2\">4.1k 2.1k 6.1k 2.8k</td><td>34.2 30.7</td></tr><tr><td/><td>mT5-small</td><td>0.55</td><td>0.14</td><td>0.46 8.1 11.6</td><td>935</td><td colspan=\"2\">4.4k 2.1k</td><td>40.2</td></tr><tr><td/><td>mT5-base</td><td>0.59</td><td>0.16</td><td>0.51 8.2 11.9</td><td>1.0k</td><td colspan=\"2\">4.8k 2.2k</td><td>38.7</td></tr><tr><td/><td>mT5-large</td><td>0.58</td><td>0.16</td><td>0.5 8.1 11.8</td><td>1.0k</td><td colspan=\"2\">4.7k 2.2k</td><td>38.0</td></tr><tr><td/><td>mT5-XL</td><td>0.58</td><td>0.16</td><td>0.51 8.2 11.8</td><td>1.0k</td><td colspan=\"2\">4.7k 2.1k</td><td>36.8</td></tr><tr><td colspan=\"2\">WikiLingua (vi\u2192en) mBART mBART+</td><td>0.54 0.54</td><td>0.07 0.08</td><td>0.28 8.2 12.3 0.33 8.6 12.9</td><td>1.5k 2.1k</td><td colspan=\"2\">9.3k 4.0k 13k 5.3k</td><td>26.9 29.8</td></tr><tr><td/><td>mT5-small</td><td>0.5</td><td>0.09</td><td>0.33 8.2 12.1</td><td>1.2k</td><td colspan=\"2\">6.4k 3.1k</td><td>32.9</td></tr><tr><td/><td>mT5-base</td><td>0.58</td><td>0.12</td><td>0.43 8.4 12.6</td><td>1.6k</td><td colspan=\"2\">8.9k 3.7k</td><td>31.1</td></tr><tr><td/><td>mT5-large</td><td>0.6</td><td>0.12</td><td>0.45 8.5 12.7</td><td>1.7k</td><td colspan=\"2\">9.3k 3.8k</td><td>30.7</td></tr><tr><td/><td>mT5-XL</td><td>0.61</td><td>0.12</td><td>0.47 8.6 12.9</td><td>1.8k</td><td colspan=\"2\">10.2k 4.0k</td><td>31.5</td></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"text": "The summarization group members are Chris Emezue, Esin Durmus, Faisal Ladhak, Jiawei Zhou, Juan Diego Rodriguez, Kaustubh Dhole, Khyathi Chandu, Laura Perez, Pawan Sasanka Ammanamanchi, Pedro Henrique Martins, Rubungo Andre Niyongabo, Shashi Narayan, Vikas Raunak, and Yufang Hou. Pedro Henrique Martins organized the group and wrote the data statement for the MLSum dataset. Pawan Sasanka Ammanamanchi was responsible for the XSum data statement, while Vikas Raunak worked on the Wikilingua statement. Shashi Narayan prepared the GEM version of the XSum dataset and trained its baseline models. Juan Diego Rodriguez was responsible for cleaning the MLSum dataset and trained its baseline models. Faisal Ladhak was responsible for the Wikilingua baseline models. Rubungo Andre Niyongabo participated in the discussions and added related papers to the planning document. Tosin Adewumi and Wanyu Du are part of this group. Tosin Adewumi contributed code for DialoGPT, and Wanyu Du trained baselines for Schema-Guided Dialog. Harsh Jhamtani wrote the data card for Wizard of Wikipedia. Data2Text. Ondrej Dusek wrote the data cards for E2E NLG and Czech Restaurants data and a TF loader for Czech Restaurants. He also supplied baseline outputs for E2E, Czech Restaurants, and WebNLG. Sebastian Gehrmann supplied baseline outputs for E2E, WebNLG, and CommonGen. Yacine Jernite wrote the data card for CommonGen and the Hugging Face loaders for Czech Restaurants and WebNLG. Teven Le Scao wrote the Hugging Face loader for E2E. Simon Mille and Anastasia Shimorina wrote the data card for WebNLG.",
"content": "<table><tr><td>Dialog. Sashank Santhanam, Samira Shaikh,</td></tr><tr><td>Bodhisattwa Prasad Majumder, Harsh Jhamtani,</td></tr><tr><td>Yangfeng Ji,</td></tr></table>"
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"text": "Antonios Anastasopoulos and Graham Neubig. 2020. Should all cross-lingual embeddings speak English? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8658-8679, Online. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005, pages 65-72. Association for Computational Linguistics.",
"content": "<table/>"
}
}
}
}