{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:30:47.035896Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The first workshop on Evaluating NLG Evaluation (EvalNLGEval) is taking place virtually as part of the 13th International Conference on Natural Language Generation (INLG 2020) .",
"cite_spans": [
{
"start": 164,
"end": 175,
"text": "(INLG 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "The aim of the workshop is to offer a platform for discussions on the status and the future of the evaluation of Natural Language Generation (NLG) systems. This is a special time for our field: NLG research has become one of the most popular areas of computational linguistics, the community has expanded and many new tasks and approaches have recently been introduced. However, evaluation of NLG systems remains a bottleneck, as there is no standard methodology for human evaluation nor acceptable automatic metrics, which can hinder reproducibility and comparability of results. The workshop aims to break ground by initiating discussions around these issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "The workshop invited archival papers and abstracts on NLG evaluation including best practices of human evaluation, qualitative studies, cognitive bias in human evaluations etc. The workshop received twelve submissions. Archival papers were reviewed by three members of the programme committee. Abstracts were accepted by a unanimous decision of the organization committee based on relevance; in case of conflict of interest, abstracts received two reviews. Ten papers and abstracts were accepted and were presented as posters at the workshop. This proceedings volume contains the five archival papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "The workshop features a keynote speech by Marina Fomicheva and a panel discussion with Yvette Graham, Jo\u00e3o Sedoc and Marina Fomicheva on the current limits, as well as the future of NLG evaluation. The posters were presented in four poster sessions and the workshop closes with a general discussion on NLG evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "We would like to thank the authors, the program committee members, and the workshop attendees. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>Workshop Programme 16:50-17:20 Poster session 4</td></tr><tr><td>11:00-11:15 Opening</td></tr><tr><td>11:15-12:15 Plenary Keynote by Marina Fomicheva Evaluating AMR-to-English NLG Evaluation (abstract)</td></tr><tr><td>Think Inside the Box: Glass-box Evaluation Methods for Neural MT Emma Manning, Shira Wein and Nathan Schneider</td></tr><tr><td>12:15-12:50 Break 17:20-18:20 General discussion, closing</td></tr><tr><td>12:50-13:20 Elevator pitches for all papers</td></tr><tr><td>13:20-13:50 Poster session 1</td></tr><tr><td>Automatic Machine Translation Evaluation in Many Languages via Zero-Shot</td></tr><tr><td>Paraphrasing (abstract)</td></tr><tr><td>Brian Thompson and Matt Post</td></tr><tr><td>Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents (abstract)</td></tr><tr><td>Sashank Santhanam and Samira Shaikh</td></tr><tr><td>13:50-14:20 Poster session 2</td></tr><tr><td>On the interaction of automatic evaluation and task framing in headline style transfer</td></tr><tr><td>Lorenzo De Mattei, Michele Cafagna, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim</td></tr><tr><td>and Albert Gatt</td></tr><tr><td>Evaluating Semantic Accuracy of Data-to-Text Generation with Natural Language</td></tr><tr><td>Inference (abstract)</td></tr><tr><td>Ond\u0159ej Du\u0161ek and Zden\u011bk Kasner</td></tr><tr><td>14:20-15:00 Break</td></tr><tr><td>15:00-16:00 Panel discussion with Q&amp;A</td></tr><tr><td>Panelists: Marina Fomicheva, Yvette Graham, Jo\u00e3o Sedoc</td></tr><tr><td>16:00-16:30 Poster session 3</td></tr><tr><td>Informative Manual Evaluation of Machine Translation Output (abstract)</td></tr><tr><td>Maja Popovi\u0107</td></tr><tr><td>NUBIA: NeUral Based Interchangeability Assessor for Text Generation</td></tr><tr><td>Hassan Kane, Muhammed Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh and Mohamed</td></tr><tr><td>Coulibali</td></tr><tr><td>\"This is a Problem, Don't You Agree?\" Framing and Bias in Human Evaluation for</td></tr><tr><td>Natural Language Generation</td></tr><tr><td>Stephanie Schoch, Diyi Yang and Yangfeng Ji</td></tr><tr><td>16:30-16:50 Break</td></tr><tr><td>v vi vii</td></tr></table>",
"text": "A proof of concept on triangular test evaluation for Natural Language Generation . . . . . . . . . 1 Javier Gonz\u00e1lez Corbelle, Jos\u00e9 Mar\u00eda Alonso Moral and Alberto Bugar\u00edn Diz \"This is a Problem, Don't You Agree?\" Framing and Bias in Human Evaluation for Natural Language Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Stephanie Schoch, Diyi Yang and Yangfeng Ji Evaluation rules! On the use of grammars and rule-based systems for NLG evaluation . . . . . . . 17 Emiel van Miltenburg, Chris van der Lee, Thiago Castro-Ferreira and Emiel Krahmer NUBIA: NeUral Based Interchangeability Assessor for Text Generation . . . . . . . . . . . . . . 28 Hassan Kane, Muhammed Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh and Mohamed Coulibali On the interaction of automatic evaluation and task framing in headline style transfer . . . . . . . 38 Lorenzo De Mattei, Michele Cafagna, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim and Albert Gatt A proof of concept on triangular test evaluation for Natural Language Generation Javier Gonz\u00e1lez Corbelle, Jos\u00e9 Mar\u00eda Alonso Moral and Alberto Bugar\u00edn DizEvaluation rules! On the use of grammars and rule-based systems for NLG evaluation Emiel van Miltenburg, Chris van der Lee, Thiago Castro-Ferreira and Emiel Krahmer",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}