{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:11.761418Z"
},
"title": "Organising Committee",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": "",
"affiliation": {
"institution": "Trivago",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"institution": "TU Darmstadt",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Welcome to HumEval 2021!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We are pleased to present the first workshop on Human Evaluation of NLP Systems (HumEval) that is taking place virtually as part of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Human evaluation plays an important role in NLP, from the large-scale crowd-sourced evaluations to the much smaller experiments routinely encountered in conference papers. With this workshop we wish to create a forum for current human evaluation research, a space for researchers working with human evaluations to exchange ideas and begin to address the issues that human evaluation in NLP currently faces, including aspects of experimental design, reporting standards, meta-evaluation and reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The HumEval workshop accepted 9 submissions as long papers, and 6 as short papers. The accepted papers cover a broad range of NLP areas where human evaluation is used: natural language generation, machine translation, summarisation, dialogue, and word embeddings. There are also papers dealing with evaluation practices and methodology in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This workshop would not have been possible without the hard work of the program committee. We would like to express our gratitude to them for writing detailed and thoughtful reviews in a very constrained span of time. We also thank our invited speakers, Lucia Specia, and Margaret Mitchell, for their contribution to our program. As the workshop is part of EACL, we appreciated help from the EACL Workshop Chairs, Jonathan Berant, and Angeliki Lazaridou, from the EACL Publication Chairs, Valerio Basile, and Tommaso Caselli, and we are grateful to all the people involved in setting up the virtual infrastructure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "You can find more details about the worskhop on its website: https://humeval.github.io/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Anya, Shubham, Yvette, Ehud, Anastasia",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Invited Speaker: Lucia Specia, Imperial College London Disagreement in Human Evaluation: Blame the Task not the Annotators Abstract: It is well known that human evaluators are prone to disagreement and that this is a problem for reliability and reproducibility of evaluation experiments. The reasons for disagreement can fall into two broad categories: (1) human evaluator, including under-trained, under-incentivised, lacking expertise, or ill-intended individuals, e.g., cheaters; and (2) task, including ill-definition, poor guidelines, suboptimal setup, or inherent subjectivity. While in an ideal evaluation experiment many of these elements will be controlled for, I argue that task subjectivity is a much harder issue. In this talk I will cover a number of evaluation experiments on tasks with variable degrees of subjectivity, discuss their levels of disagreement along with other issues, and cover a few practical approaches do address them. I hope this will lead to an open discussion on possible strategies and directions to alleviate this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anya, Shubham, Yvette, Ehud, Anastasia",
"sec_num": null
},
{
"text": "The Ins and Outs of Ethics-Informed Evaluation Abstract: The modern train/test paradigm in Artificial Intelligence (AI) and Machine Learning (ML) narrows what we can understand about AI models, and skews our understanding of models' robustness in different environments. In this talk, I will work through the different factors involved in ethics-informed AI evaluation, including connections to ML training and ML fairness, and present an overarching evaluation protocol that addresses a multitude of considerations in developing ethical AI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Invited Speaker: Margaret Mitchell",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "It's Commonsense, isn't it? Demystifying Human Evaluations in Commonsense-Enhanced NLG Systems Miruna-Adriana Clinciu",
"authors": [
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Saad Mahamood",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "It's Commonsense, isn't it? Demystifying Human Evaluations in Commonsense-Enhanced NLG Systems Miruna-Adriana Clinciu, Dimitra Gkatzia and Saad Mahamood. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Estimating Subjective Crowd-Evaluations as an Additional Objective to Improve Natural Language Generation Jakob Nyberg, Maike Paetzel and Ramesh Manuvinakurike",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Estimating Subjective Crowd-Evaluations as an Additional Objective to Improve Natural Language Gen- eration Jakob Nyberg, Maike Paetzel and Ramesh Manuvinakurike . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Trading Off Diversity and Quality in Natural Language Generation Hugh Zhang",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Duckworth",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Arvind",
"middle": [
". . . . . . . . . . . . . . ."
],
"last": "Neelakantan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trading Off Diversity and Quality in Natural Language Generation Hugh Zhang, Daniel Duckworth, Daphne Ippolito and Arvind Neelakantan . . . . . . . . . . . . . . . . . . . 25",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and Misevaluation Sheila Castilho . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Is This Translation Error Critical?: Classification-Based Human and Automatic Machine Translation Evaluation Focusing on Critical Errors Katsuhito Sudoh",
"authors": [
{
"first": "Kosuke",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Is This Translation Error Critical?: Classification-Based Human and Automatic Machine Translation Evaluation Focusing on Critical Errors Katsuhito Sudoh, Kosuke Takahashi and Satoshi Nakamura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards Objectively Evaluating the Quality of Generated Medical Summaries Francesco Moramarco",
"authors": [
{
"first": "Damir",
"middle": [],
"last": "Juric",
"suffix": ""
},
{
"first": "Aleksandar",
"middle": [],
"last": "Savkov",
"suffix": ""
},
{
"first": "Ehud",
"middle": [
". . . . . . . . . . . . . . . . ."
],
"last": "Reiter",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Towards Objectively Evaluating the Quality of Generated Medical Summaries Francesco Moramarco, Damir Juric, Aleksandar Savkov and Ehud Reiter . . . . . . . . . . . . . . . . . . . . . 56",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Preliminary Study on Evaluating Consultation Notes With Post-Editing Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov and Ehud Reiter",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Preliminary Study on Evaluating Consultation Notes With Post-Editing Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov and Ehud Reiter . . . . . 62",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Great Misalignment Problem in Human Evaluation of NLP Methods Mika H\u00e4m\u00e4l\u00e4inen and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Great Misalignment Problem in Human Evaluation of NLP Methods Mika H\u00e4m\u00e4l\u00e4inen and Khalid Alnajjar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A View From the Crowd: Evaluation Challenges for Time-Offset Interaction Applications Alberto Chierici and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A View From the Crowd: Evaluation Challenges for Time-Offset Interaction Applications Alberto Chierici and Nizar Habash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead Neslihan Iskender, Tim Polzehl and",
"authors": [
{
"first": ".",
"middle": [
". ."
],
"last": "Sebastian M\u00f6ller",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead Neslihan Iskender, Tim Polzehl and Sebastian M\u00f6ller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs Roman Grundkiewicz, Marcin Junczys-Dowmunt, Christian Federmann and",
"authors": [
{
"first": "Tom",
"middle": [
". ."
],
"last": "Kocmi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs Roman Grundkiewicz, Marcin Junczys-Dowmunt, Christian Federmann and Tom Kocmi . . . . . . 97",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Eliciting Explicit Knowledge From Domain Experts in Direct Intrinsic Evaluation of Word Embeddings for Specialized Domains Goya van Boven and Jelke Bloem",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliciting Explicit Knowledge From Domain Experts in Direct Intrinsic Evaluation of Word Embeddings for Specialized Domains Goya van Boven and Jelke Bloem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "References and Their Effect on Human Evaluation V\u011bra Kloudov\u00e1, Ond\u0159ej Bojar and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detecting Post-Edited References and Their Effect on Human Evaluation V\u011bra Kloudov\u00e1, Ond\u0159ej Bojar and Martin Popel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Case Study of Efficacy and Challenges in Practical Human-in-Loop Evaluation of NLP Systems Using Checklist Shaily Bhatt, Rahul Jain, Sandipan Dandapat and Sunayana Sitaram",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Case Study of Efficacy and Challenges in Practical Human-in-Loop Evaluation of NLP Systems Using Checklist Shaily Bhatt, Rahul Jain, Sandipan Dandapat and Sunayana Sitaram . . . . . . . . . . . . . . . . . . . . . . . . 120",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Interrater Disagreement Resolution: A Systematic Procedure to Reach Consensus in Annotation Tasks Yvette Oortwijn, Thijs Ossenkoppele and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Interrater Disagreement Resolution: A Systematic Procedure to Reach Consensus in Annotation Tasks Yvette Oortwijn, Thijs Ossenkoppele and Arianna Betti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Oral Session 2: MT 11:30-11:50 Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and Misevaluation Sheila Castilho 11:50-12:10 Is This Translation Error Critical?: Classification-Based Human and Automatic Machine Translation Evaluation Focusing on Critical Errors Katsuhito Sudoh",
"authors": [],
"year": 2021,
"venue": "40 Estimating Subjective Crowd-Evaluations as an Additional Objective to Improve Natural Language Generation Jakob Nyberg",
"volume": "9",
"issue": "",
"pages": "50--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Workshop Program Monday, April 19, 2021 9:00-9:10 Opening Anya Belz 9:10-10:00 Invited Talk: Lucia Specia 10:00-11:00 Oral Session 1: NLG 10:00-10:20 It's Commonsense, isn't it? Demystifying Human Evaluations in Commonsense- Enhanced NLG Systems Miruna-Adriana Clinciu, Dimitra Gkatzia and Saad Mahamood 10:20-10:40 Estimating Subjective Crowd-Evaluations as an Additional Objective to Improve Natural Language Generation Jakob Nyberg, Maike Paetzel and Ramesh Manuvinakurike 10:40-11:00 Trading Off Diversity and Quality in Natural Language Generation Hugh Zhang, Daniel Duckworth, Daphne Ippolito and Arvind Neelakantan 11:00-11:30 Break 11:30-12:10 Oral Session 2: MT 11:30-11:50 Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and Misevaluation Sheila Castilho 11:50-12:10 Is This Translation Error Critical?: Classification-Based Human and Automatic Machine Translation Evaluation Focusing on Critical Errors Katsuhito Sudoh, Kosuke Takahashi and Satoshi Nakamura Monday, April 19, 2021 (continued) 12:10-13:30 Poster Session 12:10-13:30 Towards Objectively Evaluating the Quality of Generated Medical Summaries Francesco Moramarco, Damir Juric, Aleksandar Savkov and Ehud Reiter 12:10-13:30 A Preliminary Study on Evaluating Consultation Notes With Post-Editing Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov and Ehud Reiter 12:10-13:30 The Great Misalignment Problem in Human Evaluation of NLP Methods Mika H\u00e4m\u00e4l\u00e4inen and Khalid Alnajjar 12:10-13:30 A View From the Crowd: Evaluation Challenges for Time-Offset Interaction Appli- cations Alberto Chierici and Nizar Habash 12:10-13:30 Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead Neslihan Iskender, Tim Polzehl and Sebastian M\u00f6ller 12:10-13:30 On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs Roman Grundkiewicz, Marcin Junczys-Dowmunt, Christian Federmann and Tom Kocmi 12:10-13:30 Eliciting Explicit Knowledge From Domain Experts in Direct Intrinsic Evaluation of Word Embeddings for Specialized Domains Goya van Boven and Jelke Bloem 12:10-13:30 Detecting Post-Edited References and Their Effect on Human Evaluation V\u011bra Kloudov\u00e1, Ond\u0159ej Bojar and Martin Popel 13:30-15:00 Lunch Monday, April 19, 2021 (continued) 15:00-15:40 Oral Session 3 15:00-15:20 A Case Study of Efficacy and Challenges in Practical Human-in-Loop Evaluation of NLP Systems Using Checklist Shaily Bhatt, Rahul Jain, Sandipan Dandapat and Sunayana Sitaram 15:20-15:40 Interrater Disagreement Resolution: A Systematic Procedure to Reach Consensus in Annotation Tasks Yvette Oortwijn, Thijs Ossenkoppele and Arianna Betti 15:40-16:40 Discussion Panel Ehud Reiter 16:40-17:00 Break 17:00-17:50 Invited Talk: Margaret Mitchell 17:50-18:00 Closing Yvette Graham xiii",
"links": null
}
},
"ref_entries": {}
}
}