ACL-OCL / Base_JSON /prefixH /json /humeval /2021.humeval-1.15.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:08.621479Z"
},
"title": "Interrater Disagreement Resolution: A Systematic Procedure to Reach Consensus in Annotation Tasks",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Oortwijn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {
"settlement": "",
"region": ""
}
},
"email": "y.oortwijn@uva.nl"
},
{
"first": "Thijs",
"middle": [],
"last": "Ossenkoppele",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {
"settlement": "",
"region": ""
}
},
"email": "t.ossenkoppele@uva.nl"
},
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {
"settlement": "",
"region": ""
}
},
"email": "a.betti@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a systematic procedure for interrater disagreement resolution. The procedure is general, but of particular use in multiple-annotator tasks geared towards ground truth construction. We motivate our proposal by arguing that, barring cases in which the researchers' goal is to elicit different viewpoints, interrater disagreement is a sign of poor quality in the design or the description of a task. Consensus among annotators, we maintain, should be striven for, through a systematic procedure for disagreement resolution such as the one we describe.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a systematic procedure for interrater disagreement resolution. The procedure is general, but of particular use in multiple-annotator tasks geared towards ground truth construction. We motivate our proposal by arguing that, barring cases in which the researchers' goal is to elicit different viewpoints, interrater disagreement is a sign of poor quality in the design or the description of a task. Consensus among annotators, we maintain, should be striven for, through a systematic procedure for disagreement resolution such as the one we describe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A growing body of literature signals a thorny issue with assessing general progress in the field of natural language processing (NLP) as part of artificial intelligence. Benchmarks that are considered 'general', and are widely used as standards to assess NLP systems' performance, turn out to be rather specific, and hence of more limited significance than commonly acknowledged (Raji et al. 2020; Schlangen 2020) . Good performance on specific benchmarks does not guarantee good performance across the board (Faruqui et al. 2016; Bakarov 2018 ; Ethayarajh and Jurafsky 2020): it only helps with gaining understanding of how certain systems work for those specific benchmarks. In order to claim progress across the board, one would need to evaluate system performance on a certain reasoned series of such specific benchmarks, that is, results on a host of \"more focused and explicitly defined problems\" (Raji et al., 2020, 1) . To enact this, one would need a ground truth for the evaluation of each specific task-cum-dataset, including ground truths in expert domains.",
"cite_spans": [
{
"start": 379,
"end": 397,
"text": "(Raji et al. 2020;",
"ref_id": "BIBREF28"
},
{
"start": 398,
"end": 413,
"text": "Schlangen 2020)",
"ref_id": "BIBREF32"
},
{
"start": 509,
"end": 530,
"text": "(Faruqui et al. 2016;",
"ref_id": "BIBREF13"
},
{
"start": 531,
"end": 543,
"text": "Bakarov 2018",
"ref_id": "BIBREF3"
},
{
"start": 903,
"end": 925,
"text": "(Raji et al., 2020, 1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ground truth construction is challenging. In this paper we focus on the process of constructing ground truths via semantic annotation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies stress the intrinsic difficulty of semantic annotation due to vagueness and ambiguity (Aroyo and Welty 2015; Kairam and Heer 2016; Pavlick and Kwiatkowski 2019) . Importantly, some argue that interpretative disagreements due to different conceptualizations or perspectives cannot be seen as just 'mistakes' (Sommerauer et al. 2020; Herbelot and Vecchi 2016) . It is our tenet that in ground truth construction differences in conceptualizations or perspectives can and must be explicitly specified as an integral part of annotation tasks; moreover, interrater disagreement is not necessarily due to inherent ambiguities in the data, but is at least in part due to the annotation task being underspecified, in particular as to the right context to consider.",
"cite_spans": [
{
"start": 101,
"end": 123,
"text": "(Aroyo and Welty 2015;",
"ref_id": "BIBREF1"
},
{
"start": 124,
"end": 145,
"text": "Kairam and Heer 2016;",
"ref_id": "BIBREF21"
},
{
"start": 146,
"end": 175,
"text": "Pavlick and Kwiatkowski 2019)",
"ref_id": "BIBREF29"
},
{
"start": 322,
"end": 346,
"text": "(Sommerauer et al. 2020;",
"ref_id": "BIBREF33"
},
{
"start": 347,
"end": 372,
"text": "Herbelot and Vecchi 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Take annotation tasks involving relatedness or similarity judgments, which are key types of judgment for NLP evaluation. Similarity is not a property of two things by themselves in isolation: it is always judged by a specific standard, and by weighing properties of the things compared in different ways, according to a context (Goodman 1972; Batchkarov et al. 2016) . When people judge by different standards 1 , disagreement arises as a matter of course -and is especially likely when annotating texts of high conceptual density, as this requires a lot of prior knowledge and interpretation. In order to get comparable and meaningful annotations, judgment standards need to be aligned and made extremely transparent.",
"cite_spans": [
{
"start": 328,
"end": 342,
"text": "(Goodman 1972;",
"ref_id": "BIBREF17"
},
{
"start": 343,
"end": 366,
"text": "Batchkarov et al. 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose a six-step systematic procedure for interrater disagreement resolution in which conceptual alignment figures as one of the steps. The procedure is designed to facilitate the resolution of interrater disagreement that frequently arises in annotation tasks in which multiple annotators participate. The emergence of disagreement in annotation tasks is valuable information, albeit of a negative type: barring cases in which the researchers' goal is none other than eliciting disagreement, interrater disagreement, we maintain, is a sign of poor quality in the design or the description of a task. In ground truth construction, consensus among annotators should be striven for. The procedure applies to a wide range of annotation tasks, namely every task involving the application of one or more concepts to a unit of annotation (a fragment of text, such as a paragraph or a sentence, or a more artificial unit, such as a string with a length of n characters). We hold that the benefit of a systematic procedure of resolving interrater disagreement is twofold: first, such a procedure leads to the construction of reliable and well-grounded datasets, and second, it ensures that the resolution proceeds in a non-arbitrary fashion allowing for proper documentation and replicability of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computational research: interrater agreement, dataset creation and ground truths Standard methods for measuring interrater agreement and reliability (Artstein and Poesio, 2008 ) such as (Cohen's) kappa (Cohen 1960; Landis and Koch 1977) and Krippendorff's alpha (Krippendorff, 2013) output a single score to represent the agreement between different raters. Methods such as the CrowdTruth framework (Aroyo and Welty 2014; Aroyo and Welty 2015) give a more detailed disagreement analysis, though only in the post-annotation phase. Similarly, Kairam and Heer (2016) mention that disagreement cannot simply be treated as noise and propose a post-annotation method for identifying different valid interpretations annotators may use to come to different conclusions. By contrast, we take disagreement analysis and resolution as internal to the annotation procedure. Sommerauer et al. (2020) stress difficulties with annotation due to ambiguity or vagueness in language while studying cases in which disagreement between different annotators is expected and multiple answers are legitimate. Our focus is datasets that are meant to be used as ground truths. In ground truth construction, we argue, it is necessary to resolve cases of disagreement (disagreement resolution phase, see step 5 below), and, more importantly, dispel the ambiguities that cause disagreement (if ambiguity is the cause of the disagreement) by task specification, either by redesigning the task or by making the annotation guidelines more precise (conceptual alignment phase, see step 2 below). We do recognize that genuine disagreement might exist due to e.g. ambiguity in language in existing datasets (see also Palomaki et al. (2018) ), but we see legitimate disagreement as having a specific meaning: it is either a signal that further resolution is needed (through annotation task redesign or guideline redefinition), or it is the possible result of a task specifically designed to chart or elicit instances of disagreement, as in Sommerauer et al. (2020) or Herbelot and Vecchi (2016) .",
"cite_spans": [
{
"start": 149,
"end": 175,
"text": "(Artstein and Poesio, 2008",
"ref_id": "BIBREF2"
},
{
"start": 202,
"end": 214,
"text": "(Cohen 1960;",
"ref_id": "BIBREF10"
},
{
"start": 215,
"end": 236,
"text": "Landis and Koch 1977)",
"ref_id": "BIBREF25"
},
{
"start": 241,
"end": 282,
"text": "Krippendorff's alpha (Krippendorff, 2013)",
"ref_id": null
},
{
"start": 537,
"end": 559,
"text": "Kairam and Heer (2016)",
"ref_id": "BIBREF21"
},
{
"start": 857,
"end": 881,
"text": "Sommerauer et al. (2020)",
"ref_id": "BIBREF33"
},
{
"start": 1680,
"end": 1702,
"text": "Palomaki et al. (2018)",
"ref_id": "BIBREF27"
},
{
"start": 2002,
"end": 2026,
"text": "Sommerauer et al. (2020)",
"ref_id": "BIBREF33"
},
{
"start": 2030,
"end": 2056,
"text": "Herbelot and Vecchi (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
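As an aside for readers unfamiliar with the statistics named above: Cohen's kappa corrects raw observed agreement for the agreement expected by chance, given each rater's label frequencies. The following sketch is illustrative only and is not part of the paper; the annotators and labels are made up:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators who labelled the same units."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of units with identical labels.
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement, from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    if expected == 1.0:  # degenerate case: both raters used one identical label
        return 1.0
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators, eight units, binary labels.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "yes", "no", "no",  "no", "no", "yes", "yes"]
print(cohens_kappa(a, b))  # 0.5: observed agreement 0.75, chance agreement 0.5
```

Note how a raw agreement of 0.75 shrinks to a kappa of 0.5 once chance agreement is discounted; this is why a single kappa score, as the paper observes, summarizes but does not explain disagreement.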
{
"text": "We offer a procedure by which annotators can avoid disagreement due to unclarity of the task, accurately discern the reason for disagreement whenever it arises, and make a deliberate decision on how these cases should be annotated. Any differences between 'people's beliefs about the world' (or the data), we say, should be explicitly integrated in task design such that annotators are required to judge according to a certain perspective or set of beliefs, and not from an absolute point of view. We agree with Pavlick and Kwiatkowski (2019) that disagreement between annotators cannot simply be seen as noise in the data supposedly due to low-quality annotations. However, while they divide the annotations into consistent units to get sets of consistent gold labels, we argue that in ground truth construction the variety of human judgments can and should be narrowed down to exactly one type by specification of the task. In our case, the process of identifying reasons for disagreement is part of the annotation process, which allows for resolution of disagreement and thereby a dataset suitable for use as a ground truth for the task at hand.",
"cite_spans": [
{
"start": 512,
"end": 542,
"text": "Pavlick and Kwiatkowski (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In Betti et al. (2020) , a general method for constructing expert-controlled ground truths for concept-focused domains is proposed, and the construction of an actual ground truth for a philosophical corpus is described. Disagreement resolution is mentioned, and one example of resolution is reported, but no explicit general methodology for disagreement resolution is offered.",
"cite_spans": [
{
"start": 3,
"end": 22,
"text": "Betti et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "It has been emphasized that the conditions under which a dataset has been created need to be properly documented to allow for reproducibility and replicability (Bender and Friedman 2018; Paullada et al. 2020; Hutchinson et al. 2021) . Language models are known to pick up and reinforce existing biases in data (see, e.g., Bolukbasi et al. 2016; Zhao et al. 2017) . Bender and Friedman (2018) offer instructions on how to document data using data statements to help reproducibility and replicability, bring existing biases to the surface and improve representation in future dataset creation. The procedure we propose asks for explicit decisions from raters after deliberation. This requirement makes the conditions of dataset creation clear, thus allowing proper documentation.",
"cite_spans": [
{
"start": 160,
"end": 186,
"text": "(Bender and Friedman 2018;",
"ref_id": "BIBREF5"
},
{
"start": 187,
"end": 208,
"text": "Paullada et al. 2020;",
"ref_id": "BIBREF28"
},
{
"start": 209,
"end": 232,
"text": "Hutchinson et al. 2021)",
"ref_id": "BIBREF20"
},
{
"start": 323,
"end": 345,
"text": "Bolukbasi et al. 2016;",
"ref_id": "BIBREF8"
},
{
"start": 346,
"end": 363,
"text": "Zhao et al. 2017)",
"ref_id": "BIBREF35"
},
{
"start": 366,
"end": 392,
"text": "Bender and Friedman (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Philosophy Peer disagreement is a topic of investigation in philosophy, in particular in the subfield of social epistemology. A large amount of literature exists on issues concerning both peer disagreement (e.g. Goldman and Whitcomb 2011; Christensen and Lackey 2013) and group decision making in the face of such disagreement (e.g. List 2005 ), but resolution procedures that aid in moving from peer disagreement to unanimously agreed upon results are not proposed, and are in general ' [...] at best rare in scientific contexts.' (de Ridder, 2014). One of the scarce examples is Gius and Jacke's (2017) procedure for resolving interrater disagreement in literary corpus annotation. Although similar in approach, our work improves on the latter in terms of applicability: we intend our procedure to be fit for all annotation tasks that involve the application of one or more concepts to units of annotation, while Gius & Jacke focus on tasks within literary analysis exclusively. Note that annotation tasks in which concepts are applied to units of annotation are frequent: any task that involves identifying instances of any concept qualifies. For example, in our validation example in section 5.2 the annotation task requires annotators to identify wide-scope claims in the text of journal articles (that is, instances of the concept of wide-scope claim).",
"cite_spans": [
{
"start": 333,
"end": 342,
"text": "List 2005",
"ref_id": "BIBREF26"
},
{
"start": 488,
"end": 493,
"text": "[...]",
"ref_id": null
},
{
"start": 581,
"end": 604,
"text": "Gius and Jacke's (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In Pivovarov and Elhadad (2012) a Cohen's kappa of 0.68 is \"accepted as representing a substantial amount of agreement between annotators\". By contrast, in Betti et al. (2020) the initial interrater agreement of 0.65 was taken as a starting point to reach further consensus. When the aim of the annotation is e.g. to get an overview of the variety of ways in which people interpret statements, then interrater agreement need only be high on statements for which there is only one obvious interpretation and so agreement is expected. However, when the annotations are supposed to establish a ground truth, interrater agreement, we argue, should be 1.",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "Betti et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ground truths and interrater agreement",
"sec_num": "3"
},
{
"text": "One strategy used for getting the interrater agreement on the ground truth to 1 is to discard disputed annotation(s) (see, e.g., Kenyon-Dean et al. (2018) ). But clearly this is a loss of valuable information: for the purpose of training and evaluating a computational system we want to be as specific as possible as to what its output needs to be; by tossing out disputed annotations we underspecify what the right output on the matter is. Consider one of the examples in Herbelot and Vecchi (2016) : \"MISSILES EXPLODE received the labels SOME, MOST and ALL. It is likely that the SOME interpretation quantifies over missiles which actually explode, while the MOST/ALL interpretation considers the potential of a missile to explode\". For ground truth construction, it is necessary to specify whether an annotator should e.g. take an actual or potential interpretation, to prevent annotators from making arbitrary choices or introducing unknown biases.",
"cite_spans": [
{
"start": 130,
"end": 155,
"text": "Kenyon-Dean et al. (2018)",
"ref_id": null
},
{
"start": 471,
"end": 497,
"text": "Herbelot and Vecchi (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ground truths and interrater agreement",
"sec_num": "3"
},
{
"text": "So, if an annotation dataset is to be used as a ground truth, agreement should be the aim. When disagreement arises, it is important to identify why it arises, and make well-grounded decisions on how to deal with it. In the next section, we will outline a procedure for annotation through which different reasons for disagreement can be identified and which specifies directions for resolution of each of these types of disagreements. The procedure results in a reproducible dataset by forcing annotators to make well-grounded, and thereby traceable, decisions on their annotations. Note that traceability makes the procedure relevant to all annotations, not just ground truth construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ground truths and interrater agreement",
"sec_num": "3"
},
{
"text": "The annotation procedure supposes what we call an 'annotation toolbox' consisting of (i) the annotation task or question, (ii) the guidelines specifying the instructions for annotation and (iii) some kind of definition or characterisation of the key concepts involved (see step 2). Fixing the definitions and characterisations of these concepts is essential to the conceptual alignment of annotators and for subsequent use of the resulting annotations. The use of the annotation toolbox also facilitates disagreement resolution insofar as annotators can refer to elements of the toolbox to give a justification for their scoring. This also means that if disagreement cannot be resolved by referring to elements of the toolbox, the toolbox is incomplete, or in any case insufficient as a basis for annotation. In this case, further expert research might be necessary to supplement the annotation toolbox. Based on the newly supplemented annotation toolbox, previous annotations might have to be redone, for there is no guarantee that these would end up receiving the same scoring. If such a resolution or supplementation is deemed impossible, the annotation cannot be completed and cannot lead to a dataset that is suitable as a ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ground truths and interrater agreement",
"sec_num": "3"
},
{
"text": "What follows is a description of the steps of the annotation procedure (see flowchart in figure 1). Throughout this description we will talk of 'scoring' as the act of annotating a single unit. This is intended to also refer to types of annotation that are more adequately called 'categorizing', 'labelling' or otherwise. Note that with the exception of cases in which steps 0-2 are performed by the same group of researchers as steps 3-5 (see, e.g., section 5.1 in which the annotation procedure of Betti et al. (2020) is described), the annotators should be under close supervision of the researchers formulating the research question, and those setting the annotation task and guidelines, throughout all steps of the procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The annotation procedure",
"sec_num": "4"
},
{
"text": "Step 0: Research setup and hypothesis forming In this initial phase, the prior research is done which indicates the need for an annotation task, research question(s) and hypotheses to be tested are formulated, and an annotation task is distilled to test these hypotheses. If at any point it is noticed that the research question or hypotheses are ill-defined or the annotation task does not match the research question, one should return to this step and start the process anew.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "Step 1: Setting up annotation task and guidelines In this phase, the annotators are either presented with or set up themselves both 1) the annotation task, and 2) a set of annotation guidelines that guide 1). Ideally, the annotators are already involved in setting up the task and guidelines, since this improves their understanding of the task. 1) is immutable; if for some reason during the annotation procedure the task changes, the annotation procedure is reset and new guidelines must be set up that correspond to the new task. 2), however, is mutable; it can happen that new insights emerge during the annotation procedure that call for additional annotation guidelines or for an improvement of the existing ones. In case setting up the annotation task and guidelines requires additional research, one should return to step 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "In developing the guidelines, researchers should consider how to score units that are ambiguous and therefore might support more than one interpretation. We recommend that instead of using, e.g., a simple binary scoring system, an \"ambiguous\" score is added to prevent forcing a decision. Forcing decisions could lead to arbitrariness, while ambiguity is still a real part of natural language that should be reflected in annotation. It should be ensured that this category does not mask unclarity in the task or the guidelines, by asking annotators to specify the source of unclarity (e.g. lexical ambiguity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "Step 2: Interrater conceptual alignment In this phase, the researchers identify the key concepts 2 , and make sure that all annotators agree on the meaning or function of those concepts in the context of the task by specifying the definitions and characterisations for these concepts. In case researchers and annotators are two different sets of people, the annotators should be trained by the researchers in the concepts relevant to the task. The annotation procedure cannot move beyond this step if no interrater conceptual consensus is reached; this type of mismatch will almost certainly result in irresolvable conflicting annotations. Complex concepts, viz. concepts that involve many subconcepts when unpacked (e.g. philosophical concepts), require unpacking in the form of an interpretive model in the sense of Betti and van den Berg (2014) . In these interpretive models, relations between subconcepts in the definition or characterisation of the concept modelled are made explicit. This facilitates the identification of instances of complex, rich concepts such as epistemology (see section 5.1). Such elaborate specification might not be required for simpler, or already well-defined concepts used consensually in different domains; in such cases, we expect less elaborate methods to suffice.",
"cite_spans": [
{
"start": 817,
"end": 846,
"text": "Betti and van den Berg (2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "After consensus is reached on all key concepts that the annotators are aware of at this stage, the annotators can be expected to have an equal understanding of these concepts, which they can apply in annotating the units. As we observe in our second test case (section 5.2), questions for which there are issues with conceptual alignment receive lower interrater agreement than questions without such issues. The annotations for these questions should be redone after returning to this step for proper conceptual alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "As in step 1, the definition of concepts may require further research, in which case one should return to step 0, or a further specification of the task or guidelines, in which case one should return to step 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "Step 3: Individual annotation Next, the annotations are performed according to the annotation guidelines specified in step 1. The manner in which the individual annotation proceeds depends on the guidelines, but as a general rule all annotators should score independently of each other to prevent being influenced by each other's scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "Step 4: Annotation comparison After the individual annotation process, the annotations are compared. The comparison ideally yields a large set of agreed-upon annotations, but will likely also yield a set of conflicting annotations. For the latter, the disagreement resolution procedure should be put into operation. As mentioned in section 3, if conflicting annotations are simply discarded, we obtain an incomplete dataset which is not fit for use as a ground truth. Moreover, in such cases, hidden unclarities are likely to persist in the task or guidelines (see step 5(a) below); as a consequence, we cannot trust previously agreed-upon annotations to reflect genuine agreement. We recommend in any case that it be specified whether the annotation procedure for the dataset under consideration has proceeded beyond this step; for, if not, then no attempt has been made to even check for inconsistent scoring by the same annotator (see step 5(c)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
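Mechanically, step 4 amounts to partitioning the units into an agreed-upon set and a conflicting set, with the latter feeding into the resolution procedure of step 5. A minimal sketch of that partition, with hypothetical unit ids and labels not taken from the paper:

```python
def compare_annotations(annotations):
    """Partition units into agreed-upon and conflicting annotations.

    `annotations` maps a unit id to the list of labels it received,
    one label per annotator.
    """
    agreed, conflicting = {}, {}
    for unit, labels in annotations.items():
        if len(set(labels)) == 1:
            agreed[unit] = labels[0]
        else:
            conflicting[unit] = labels  # sent to disagreement resolution (step 5)
    return agreed, conflicting

# Hypothetical annotations by three annotators.
annotations = {
    "u1": ["claim", "claim", "claim"],
    "u2": ["claim", "no-claim", "claim"],
    "u3": ["ambiguous", "ambiguous", "ambiguous"],
}
agreed, conflicting = compare_annotations(annotations)
print(sorted(agreed), sorted(conflicting))  # ['u1', 'u3'] ['u2']
```

The conflicting units are kept, never discarded, so that each can later be traced to one of the disagreement sources (a)-(e) identified in step 5.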
{
"text": "Step 5: Disagreement resolution We identify five main sources of interrater disagreement: (a) Task or guideline unclarity. Among the possible reasons for interrater disagreement are 1) at least one annotator made a judgment based on a deviant interpretation of the nature of the task, and 2) the guidelines harbor residual unclarity as to the individual annotation procedure due to e.g. missing or vague instructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "In case 1), the annotators should achieve a uniform understanding of the task through discussion. Different construals of the task can be due to a poor or missing definition of the concepts involved in it. In this case the annotators should return to step 2. For other task unclarities the annotators should return to step 1. Recall that the task is immutable, so if it becomes apparent that the annotators cannot agree on what the task to be performed is, the whole annotation procedure should be abandoned; there is no justification for continuing an annotation task that is not equally clear for all annotators. The annotators will have to restart the procedure and redefine the task in such a way that all annotators understand what is expected of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "In case of 2), the annotators should return to step 1 to reconsider the guidelines and, depending on the source of confusion, amend or supplement them. This should not be a controversial practice: it is not the task itself that is amended, but only the lines along which it is carried out most successfully. Note that in cases of drastic changes to the guidelines 3 , the whole individual annotation process likely needs to be redone. This option should be duly considered since this situation also casts doubt on the cases of agreement in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "(b) Non-uniform interrater domain expertise. Despite having gone through step 2, there may still be differences in the amount of background knowledge that the annotators bring to the individual annotations. A difference in background knowledge used in annotating can cause diverging annotations. An example of divergence of this kind is when annotators align on the wrong width of some concept, i.e. a definition or characterisation of the concept that is too narrow or too broad, in which too many or too few aspects of that concept are considered. Mismatch in concept width among annotators is bound to lead to diverging annotations. In such a case, the annotators have to return to step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "(c) Inconsistent annotation. An annotator may have annotated inconsistently by giving different scores to two units that should receive the same score (e.g. because the two units are functionally synonymous). In this case, the inconsistent annotator must decide whether they agree with the other annotators. If so, the scoring of the inconsistent units can simply be corrected and the disagreement is resolved. Reconsideration might however lead to rescoring such that the inconsistency is resolved, but the disagreement is not. In such cases, however, the disagreement is no longer of type (c), and must be discussed under (a), (b), (d), or (e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "(d) Interpretive disagreement. Interpretive disagreement arises when, despite the fact that the annotators have reached conceptual alignment, there is disagreement about the purported meaning of certain terms in some unit. Annotators might hold a different interpretation of a certain unit even when they have an equal understanding of the concepts used in that unit, for example due to the use of an ambiguous term. The way these disagreements will have to be resolved is case-dependent. All annotators should defend their choice by stating the reasons for annotating the way they did. They should try to convince the other annotators by (rational) argumentation that their reading is the correct one. The annotators should then together weigh each other's reasons and see whether agreement can be reached. Whether the disagreement can be resolved or not depends on whether the annotators can settle for one interpretation that they all agree on. In some complex cases, deliberation might need to be postponed until research on the phenomenon encountered has sufficiently progressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "(e) Simple mistakes. If it is suspected that an annotator has made a simple mistake somewhere (a typo, or disagreement about a unit that should not be controversial), this has to be pointed out to the annotator concerned. If they agree that they have made a mistake, the annotation can be corrected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The procedure",
"sec_num": "4.1"
},
{
"text": "By identifying the source of disagreement and, if necessary, clarifying the task or guidelines for annotation, updating and repeating the (relevant parts of the) annotation procedure should result in a complete set of agreed-upon annotations. If there are structural unclarities in the task or annotation guidelines, it might be necessary to redo the individual annotations at step 3, and subsequent steps, after the task and guidelines have been clarified (step 1-2). Further research might also be needed to solve some disagreement (step 0) in which case the annotation process should be halted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unresolved Annotations",
"sec_num": "4.2"
},
{
"text": "In case the resolution procedure has still failed to resolve all disagreements but the annotation process has to be finished, it is possible to settle for a deprecated dataset. Two strategies to complete the annotation process commonly used in current annotation dataset creation are: 1) the conflicting annotations remain disagreed upon, with the resulting data loss and problems with usage of the dataset as a ground truth mentioned in section 3 as its consequence, or 2) a pre-appointed 'dictator' has the last say and resolves the disagreements by force. The dictator does so by either forcing particular decisions of their own choosing (in which case this part of the dataset is a single-annotator portion), or by applying some judgment aggregation method, such as majority rule. The benefit of choosing 1) is having a fully peer consensus-based annotation dataset, but this option imposes limits on the applicability of the resulting dataset as a ground truth. If 2) is chosen, there will be no unresolved disagreements, but the epistemic status of the annotation procedure is significantly compromised, not to mention the risk of having a dictator that makes wrong or capricious decisions. These options are up to those responsible for the resulting dataset. We argue against keeping any disagreements essentially unresolved (see section 3); at the same time, we also advise strongly against appointing dictators, as persistent peer disagreements reflect poorly specified tasks or unclear guidelines, and the forced resolution of these disagreements obfuscate such defects. Instead, a higher degree of conceptual alignment or a better specification of the annotation task or guidelines should be aimed for. If this is not possible, both the dataset and the cases of interpretive disagreement should be flagged as such, and a report should be made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unresolved Annotations",
"sec_num": "4.2"
},
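The majority-rule aggregation mentioned under strategy 2) can be sketched as follows. This is a minimal illustrative sketch of our own (function name and example scores are invented, not part of the paper's procedure), and ties are flagged as unresolved rather than forced, consistent with the reservations expressed above about dictatorial resolution:

```python
from collections import Counter

def aggregate_majority(scores):
    """Resolve one unit's annotator scores by strict majority.

    Returns the majority score, or None when no strict majority
    exists (the unit then stays unresolved instead of being forced).
    """
    counts = Counter(scores).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: no strict majority
    return counts[0][0]

# Three annotators per unit, on the -1/0/1 scale used below:
units = {"u1": [1, 1, 0], "u2": [0, -1, 1], "u3": [-1, -1, -1]}
resolved = {u: aggregate_majority(s) for u, s in units.items()}
print(resolved)  # → {'u1': 1, 'u2': None, 'u3': -1}
```

Flagging ties as `None` rather than breaking them arbitrarily keeps the forced-resolution portion of the dataset explicitly identifiable, as the reporting advice above requires.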
{
"text": "By way of illustration and validation, in this section we outline two different user applications of the procedure we have observed, by two nonoverlapping teams of domain expert annotators. The first application concerns a study of a complex, rich philosophical concept in the complete corpus of the works of a specific author. In this case, the annotators worked through the entire procedure. The second application concerns a study of the methodological justification given to widescope claims in academic literature. Although the corpus used in the second case is also from the field of philosophy, the annotation task is generic, and could have been performed on any type of scholarly article. The second team set up the research (step 0), annotation task and guidelines (step 1), but they did not settle on the meaning of all key concepts (step 2) before annotation. For the first case we will give examples for each of the reasons for disagreement, while for the second case we will focus on an issue due to the lack of conceptual alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test cases",
"sec_num": "5"
},
{
"text": "In this task, the annotators scored paragraphs in the work of the philosopher W. V. O. Quine for relevance on his views on epistemology. 4 The annotators started by creating an initial interpretive model at step 0. The annotation task and guidelines, formulated as part of step 1, were as follows: The annotators have to score paragraphs based on the degree of evidence they contain with respect to a research question (RQ) concerning the nature of Quine's naturalistic epistemology.",
"cite_spans": [
{
"start": 137,
"end": 138,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "Guidelines: The annotators have three scoring options:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "1: the paragraph contains strong evidence for some answer to the RQ. 0: the paragraph contains mild evidence for some answer to the RQ, or the annotator is not sure whether the paragraph contains sufficient evidence to answer the RQ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "-1: the paragraph does not contain enough evidence to answer the RQ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "As part of step 2 the annotators expanded the initial interpretive model to make sure they had a clearly defined, shared conception of all key concepts. Without this, the annotators might have started the individual annotation phase with diverging understandings of the concept of e.g. epistemology and would presumably fail to score the same way, leading to many disagreements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "After step 3 (individual annotations), the annotators had an interrater agreement of about \u03ba \u2248 0.65. After step 4 and step 5, the identification and resolution of all the cases of disagreement, an interrater agreement of 1 was reached. The following are examples of each of the possible reasons for disagreement and how they were resolved: (a) Task or guideline unclarity: In some of the annotated paragraphs, Quine merely talks about the views of different philosophers on epistemology, instead of expressing his own. After discussion it was decided to add to the guidelines the rule that these paragraphs do not provide evidence for the research question and hence should be scored -1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
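The agreement figures reported in this section are Cohen's kappa values (Cohen, 1960). For two annotators scoring the same units, kappa can be computed as in the following minimal stdlib-only sketch; the function name and the example scores on the -1/0/1 scale are ours, for illustration only:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators scoring the same units."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of units given identical scores.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two annotators scoring six paragraphs on the -1/0/1 scale:
print(round(cohens_kappa([1, 0, -1, 1, 0, -1],
                         [1, 0, -1, 0, 0, -1]), 2))  # → 0.75
```

An agreement of \u03ba = 1 after resolution simply means the label sequences are identical; note that kappa is undefined when chance agreement equals 1 (e.g. when both annotators use a single label throughout).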
{
"text": "(b) Non-uniform interrater domain expertise: There was disagreement about a passage in which the term \"first philosophy\" occurred without an explanation of that term in the same passage. Not all annotators agreed on the degree of evidence the passage provided without an explication of \"first philosophy\". After further conceptual alignment, the annotators agreed that \"first philosophy\" expressed a concept of central importance, and that an equal understanding of the matter among annotators was thus essential to the task. A characterisa-tion for the term was fixed, and the units containing \"first philosophy\" were re-annotated in unanimous agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "(c) Inconsistent annotation: Two paragraphs that had to be annotated indicated Quine's blurring of the boundary between ontological statements and (natural) scientific statements, only in different wording. One annotator scored the two passages differently, and corrected this after notice from and discussion with another annotator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "(d) Interpretive disagreement: One annotator scored 1, the other two 0. Upon discussion, the first annotator explained to have read the unit as if Quine defended a view mentioned as the \"straightforward view\". After discussion, the annotator became convinced that this cannot be clearly said from the fragment, and thus consensus was reached on scoring 0, resolving the disagreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "(e) Simple mistake: An annotator noticed disagreement about a paragraph that should not be controversial. In that paragraph, Quine quite straightforwardly states that mathematical logic is an example of a hard science. The unit was rescored and the disagreement was resolved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Epistemology in Quine",
"sec_num": "5.1"
},
{
"text": "In this annotation task, annotators scored articles from the British Journal of History of Philosophy between 2017 and 2019 by checking their abstracts, introduction and methodological information for clear statements of inclusion/exclusion criteria for the sources the authors take into account, the completeness of the sources consulted, and the scope of the claims authors made on this basis. 5 The annotation task was as follows: for each article, the annotators answer the following questions: Exclusion/Inclusion 1. Does the article use a reproducible methodology with explicit inclusion and exclusion criteria to identify and find primary literature? 2. Does the article use a reproducible methodology with explicit inclusion and exclusion criteria to identify and find secondary literature? Completeness 3. Does the article explicitly attempt to identify all available primary literature relative to the research question? 4. Does the article explicitly attempt to identify all available secondary literature relative to the research question? Wide-scope claims 5a. Does the article argue for wide-scope historical claims, i.e., claims spanning multiple decades or periods or intellectual movements? 5b. If 5a is answered positively, does the article qualify the wide-scope claims?",
"cite_spans": [
{
"start": 396,
"end": 397,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "Guidelines: The annotators will annotate the article by scoring '1' for yes, otherwise, by scoring '0'. In case of a discrepancy between the abstract and body of the article, the body (represented by the introduction and methodology section) will be leading. The annotators will also check section and subsection headings in order to identify other relevant sections related to the finding and use of primary and secondary literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "The annotators did not construct interpretive models for the key concepts in the task/guidelines. This is understandable, given the low complexity of concepts involved. The problem, though, is that the team did not fix definitions or characterisations of all relevant terms from the outset either, as will be clear below, and by contrast with the annotations in section 5.1. Missing this essential part of the annotation toolbox is a shortcoming that resulted in an interrater agreement unnecessarily lower than it should have been. We will highlight one case of task or guideline unclarity (step 5, a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "During discussion on specific disagreements on the basis of our flowchart, the annotators noticed that they used different construals of what constitutes a wide-scope claim. While the annotators were able to resolve these disagreements on a case-by-case basis, it cannot be guaranteed that the agreed annotations would still receive the same scoring by the new considerations on what constitutes a wide-scope claim. Therefore, when in step 5 of the procedure it is discovered that the interpretation of key terms should be refined, it is necessary to revisit all annotations. By following the first three steps of the procedure before starting the individual annotations, annotators are forced to settle on an interpretation of terms such as wide-scope claim before annotating. This way disagreement on many passages and the need to redo all annotations can be avoided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "The interrater agreement on this task was \u03ba \u2248 0.71 before disagreement resolution. The annota-tors resolved all cases of disagreement using step 5 of the procedure. 62% of the disagreements were determined to be inconsistent annotations (5, c), 21% were due to guideline or task unclarity (5, a), 10% were due to non-uniform interrater expertise (5, b) and 7% were simple mistakes (5, e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "Note that the two questions about wide-scope claims have a much lower interrater agreement of \u03ba \u2248 0.45. This can be explained by the problems concerning the different construals of what constitutes a wide-scope claim discussed above and emphasized the need for conceptual alignment. Note also that no cases of interpretive disagreement were identified. This is likely because, after the interpretation of concepts has been settled in step 2, there is not much need for extensive interpretation of the units of annotation in this annotation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Reviews in the History of Philosophy",
"sec_num": "5.2"
},
{
"text": "We have shown how the procedure applies to the two test cases discussed in section 5. However, our procedure is not limited to cases of that type. Concepts are involved in any type of annotation task, and any concept necessitates both interpretation and conceptual alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further applications",
"sec_num": "6"
},
{
"text": "Consider the case of Herbelot and Vecchi (2016) again: \"MISSILES EXPLODE received the labels SOME, MOST and ALL.\". Suppose we want to construct a ground truth of property-object pairs. The example shows that the guidelines should specify whether to use an actual or potential interpretation of property possession. Note, though, that settling for an interpretation often won't be enough: while annotating under a potential interpretation, the issue may arise whether objects should have the potential to have a property actually (most do, but some are faulty) or teleologically (all). By our procedure, these ambiguities become apparent, and disagreement can be resolved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further applications",
"sec_num": "6"
},
{
"text": "The two test cases of section 5 both have academics as annotators, but this is no intrinsic requirement of our procedure. For some linguistic tasks, being a native speaker of the relevant language is enough expertise to be able to grasp and apply the concepts involved in the task. Another matter is the common practice of resorting to crowdsourcing platforms 6 to construct large, non-academic annotation datasets. The practice is useful, but ill-suited to accommodate the type of disagreement resolution we envisage. Our take is that even though it 6 See e.g. https://www.mturk.com/ might not always be possible to adopt the entire procedure for ground truth construction, we see no fundamental, theoretical problems with its application in a wide variety of cases.",
"cite_spans": [
{
"start": 551,
"end": 552,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further applications",
"sec_num": "6"
},
{
"text": "In this paper we proposed a six-step systematic procedure for annotation focused on disagreement resolution. We argued that disagreement is the result of poorly specified tasks or guidelines, or of insufficient conceptual alignment among annotators. To avoid incomplete datasets unfit for use as ground truths, we set up the procedure in such a way that the identification and non-arbitrary resolution of different types of disagreement is facilitated. Disagreement resolution by a clearly defined procedure results in more reliable and well-grounded datasets. By identifying the cause of disagreement and giving appropriate instructions for resolution for each type of disagreement, our procedure ensures that the resolution proceeds in a non-arbitrary fashion allowing for proper documentation and increasing replicability of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7"
},
{
"text": "We have validated the effectiveness and the importance of our annotation procedure by two test cases. The first case shows that conceptual alignment by itself does not guarantee that annotators make no mistakes or only come across clarified concepts, indicating the need for disagreement resolution after annotation. The second case emphasizes the importance of task clarification and conceptual alignment prior to annotation. Without this, the likeliness increases of having to redo annotations due to different construals of terms influencing both conflicting and agreed-upon annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7"
},
{
"text": "In further work we aim to collect more use cases to test the applicability of the procedure to more varied types of annotations. Moreover, we want to consider in more depth the interplay of step 0-2 and further elaborate on the idea of key concept at step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7"
},
{
"text": "AsGladkova and Drozd (2016) point out, similarity is defined by Turney and Pantel (2010) as co-hyponymy (e.g. car and bicycle), whereasHill et al. (2015) define it as \"exemplified by pairs of synonyms; words with identical referents\" (e.g. mug and cup).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "By 'key concepts' we mean concepts mentioned in both task and guidelines. Note that settling on a definition for a concept at this step might require adding further new concepts to the guidelines in step 1, which should in turn be settled in step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "What \"drastic changes\" are depends on the nature of the task, and on whether the changes have any bearing on the scoring of other, previously completed annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For more information about the dataset, seeBetti et al. (2020) and https://github.com/YOortwijn/ HumEvalDisRes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For more information about the dataset, see https:// github.com/YOortwijn/HumEvalDisRes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their time and helpful comments. We thank the UvA e-Ideas team for their valuable discussion of a draft of this paper. This research was supported by grants e-Ideas (VICI, 277-20-007) and CatVis (314-99-117), funded by the Dutch Research Council (NWO), and by the Human(e)AI grant Small data, big challenges funded by the University of Amsterdam.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The three sides of CrowdTruth",
"authors": [
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2014,
"venue": "Human Computation",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lora Aroyo and Chris Welty. 2014. The three sides of CrowdTruth. Human Computation, 1(1).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Truth is a lie: Crowd truth and the seven myths of human annotation",
"authors": [
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "36",
"issue": "",
"pages": "15--24",
"other_ids": {
"DOI": [
"10.1609/aimag.v36i1.2564"
]
},
"num": null,
"urls": [],
"raw_text": "Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annota- tion. AI Magazine, 36(1):15-24.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Survey article: Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {
"DOI": [
"10.1162/coli.07-034-R2"
]
},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Survey ar- ticle: Inter-coder agreement for computational lin- guistics. Computational Linguistics, 34(4):555- 596.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A survey of word embeddings evaluation methods",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Bakarov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.09536"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Bakarov. 2018. A survey of word embeddings evaluation methods. arXiv:1801.09536.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A critique of word similarity as a method for evaluating distributional semantic models",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Batchkarov",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kober",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2502"
]
},
"num": null,
"urls": [],
"raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distribu- tional semantic models. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representa- tions for NLP, pages 7-12, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Expert Concept-Modeling Ground Truth Construction for Word Embeddings Evaluation in Concept-Focused Domains",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Reynaert",
"suffix": ""
},
{
"first": "Thijs",
"middle": [],
"last": "Ossenkoppele",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Oortwijn",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Salway",
"suffix": ""
},
{
"first": "Jelke",
"middle": [],
"last": "Bloem",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6690--6702",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.586"
]
},
"num": null,
"urls": [],
"raw_text": "Arianna Betti, Martin Reynaert, Thijs Ossenkoppele, Yvette Oortwijn, Andrew Salway, and Jelke Bloem. 2020. Expert Concept-Modeling Ground Truth Construction for Word Embeddings Evaluation in Concept-Focused Domains. In Proceedings of the 28th International Conference on Computa- tional Linguistics, pages 6690-6702, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modelling the History of Ideas",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hein Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Berg",
"suffix": ""
}
],
"year": 2014,
"venue": "British Journal for the History of Philosophy",
"volume": "22",
"issue": "4",
"pages": "812--835",
"other_ids": {
"DOI": [
"10.1080/09608788.2014.949217"
]
},
"num": null,
"urls": [],
"raw_text": "Arianna Betti and Hein van den Berg. 2014. Modelling the History of Ideas. British Journal for the History of Philosophy, 22(4):812-835.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16",
"volume": "",
"issue": "",
"pages": "4356--4364",
"other_ids": {
"DOI": [
"10.5555/3157382.3157584"
]
},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Pro- ceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 4356-4364, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Epistemology of Disagreement: New Essays",
"authors": [
{
"first": "David",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Lackey",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Christensen and Jennifer Lackey, editors. 2013. The Epistemology of Disagreement: New Essays. Oxford University Press, Oxford, UK.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "37--46",
"other_ids": {
"DOI": [
"10.1177/001316446002000104"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Epistemic dependence and collective scientific knowledge",
"authors": [
{
"first": "Ridder",
"middle": [],
"last": "Jeroen De",
"suffix": ""
}
],
"year": 2014,
"venue": "Synthese",
"volume": "191",
"issue": "1",
"pages": "37--53",
"other_ids": {
"DOI": [
"10.1007/s11229-013-0283-3"
]
},
"num": null,
"urls": [],
"raw_text": "Jeroen de Ridder. 2014. Epistemic dependence and col- lective scientific knowledge. Synthese, 191(1):37- 53.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Utility is in the eye of the user: A critique of NLP leaderboards",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13888"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards. arXiv:2009.13888.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Problems with evaluation of word embeddings using word similarity tasks",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2506"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30- 35, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Hermeneutic Profit of Annotation: On Preventing and Fostering Disagreement in Literary Analysis",
"authors": [
{
"first": "Evelyn",
"middle": [],
"last": "Gius",
"suffix": ""
},
{
"first": "Janina",
"middle": [],
"last": "Jacke",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Humanities and Arts Computing",
"volume": "11",
"issue": "2",
"pages": "233--254",
"other_ids": {
"DOI": [
"10.3366/ijhac.2017.0194"
]
},
"num": null,
"urls": [],
"raw_text": "Evelyn Gius and Janina Jacke. 2017. The Hermeneu- tic Profit of Annotation: On Preventing and Fos- tering Disagreement in Literary Analysis. Interna- tional Journal of Humanities and Arts Computing, 11(2):233-254.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Intrinsic evaluations of word embeddings: What can we do better?",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "36--42",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2507"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Gladkova and Aleksandr Drozd. 2016. Intrinsic evaluations of word embeddings: What can we do better? In Proceedings of the 1st Workshop on Eval- uating Vector-Space Representations for NLP, pages 36-42, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Social Epistemology: Essential Readings",
"authors": [
{
"first": "Alvin",
"middle": [
"I"
],
"last": "Goldman",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Whitcomb",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvin I. Goldman and Dennis Whitcomb, editors. 2011. Social Epistemology: Essential Readings. Oxford University Press, Oxford, NY.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Seven strictures on similarity",
"authors": [
{
"first": "N",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1972,
"venue": "Problems and Projects",
"volume": "",
"issue": "",
"pages": "437--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Goodman. 1972. Seven strictures on similarity. In Problems and Projects, pages 437-450. Bobbs-Merrill, Indianapolis, IN.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Many speakers, many worlds: Interannotator variations in the quantification of feature norms",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Maria"
],
"last": "Vecchi",
"suffix": ""
}
],
"year": 2016,
"venue": "Linguistic Issues in Language Technology",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Eva Maria Vecchi. 2016. Many speakers, many worlds: Interannotator variations in the quantification of feature norms. In Linguistic Issues in Language Technology, Volume 13, 2016. CSLI Publications.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00237"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards accountability for machine learning datasets: Practices from software engineering and infrastructure",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Smart",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Greer",
"suffix": ""
},
{
"first": "Oddur",
"middle": [],
"last": "Kjartansson",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21",
"volume": "",
"issue": "",
"pages": "560--575",
"other_ids": {
"DOI": [
"10.1145/3442188.3445918"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 560-575, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks",
"authors": [
{
"first": "Sanjay",
"middle": [],
"last": "Kairam",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16",
"volume": "",
"issue": "",
"pages": "1637--1648",
"other_ids": {
"DOI": [
"10.1145/2818048.2820016"
]
},
"num": null,
"urls": [],
"raw_text": "Sanjay Kairam and Jeffrey Heer. 2016. Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, pages 1637-1648, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sentiment analysis: It's complicated!",
"authors": [
{
"first": "",
"middle": [],
"last": "Belfer",
"suffix": ""
},
{
"first": "Nirmal",
"middle": [],
"last": "Kanagasabai",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Sarrazingendron",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1886--1895",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1171"
]
},
"num": null,
"urls": [],
"raw_text": "Belfer, Nirmal Kanagasabai, Roman Sarrazingendron, Rohit Verma, and Derek Ruths. 2018. Sentiment analysis: It's complicated! In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1886-1895, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Content Analysis, 3rd Edition: An Introduction to Its Methodology",
"authors": [
{
"first": "Klaus",
"middle": [
"H"
],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus H. Krippendorff. 2013. Content Analysis, 3rd Edition: An Introduction to Its Methodology. SAGE Publications, Inc., Thousand Oaks, CA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Measurement of Observer Agreement for Categorical Data",
"authors": [
{
"first": "J",
"middle": [
"Richard"
],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159--174",
"other_ids": {
"DOI": [
"10.2307/2529310"
]
},
"num": null,
"urls": [],
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Group Knowledge and Group Rationality: A Judgment Aggregation Perspective",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2005,
"venue": "Episteme",
"volume": "2",
"issue": "1",
"pages": "25--38",
"other_ids": {
"DOI": [
"10.3366/epi.2005.2.1.25"
]
},
"num": null,
"urls": [],
"raw_text": "Christian List. 2005. Group Knowledge and Group Rationality: A Judgment Aggregation Perspective. Episteme, 2(1):25-38.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A case for a range of acceptable annotations",
"authors": [
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Rhinehart",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Tseng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias",
"volume": "2276",
"issue": "",
"pages": "19--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennimaria Palomaki, Olivia Rhinehart, and Michael Tseng. 2018. A case for a range of acceptable annotations. In Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias 2018), Z\u00fcrich, Switzerland, volume 2276 of CEUR Workshop Proceedings, pages 19-31. CEUR-WS.org.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Data and its (dis)contents: A survey of dataset development and use in machine learning research",
"authors": [
{
"first": "Amandalynne",
"middle": [],
"last": "Paullada",
"suffix": ""
},
{
"first": "Inioluwa",
"middle": [
"Deborah"
],
"last": "Raji",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Hanna",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, and Alex Hanna. 2020. Data and its (dis)contents: A survey of dataset development and use in machine learning research.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Inherent disagreements in human textual inferences",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A hybrid knowledge-based and data-driven approach to identifying semantically similar concepts",
"authors": [
{
"first": "Rimma",
"middle": [],
"last": "Pivovarov",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Biomedical Informatics",
"volume": "45",
"issue": "3",
"pages": "471--481",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2012.01.002"
]
},
"num": null,
"urls": [],
"raw_text": "Rimma Pivovarov and No\u00e9mie Elhadad. 2012. A hybrid knowledge-based and data-driven approach to identifying semantically similar concepts. Journal of Biomedical Informatics, 45(3):471-481.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "AI and the Everything in the Whole Wide World Benchmark",
"authors": [
{
"first": "Inioluwa",
"middle": [
"Deborah"
],
"last": "Raji",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "Amandalynne",
"middle": [],
"last": "Paullada",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the NeurIPS 2020 Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, Alex Hanna, and Amandalynne Paullada. 2020. AI and the Everything in the Whole Wide World Benchmark. In Proceedings of the NeurIPS 2020 Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), Online.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Targeting the benchmark: On methodology in current natural language processing research",
"authors": [
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.04792"
]
},
"num": null,
"urls": [],
"raw_text": "David Schlangen. 2020. Targeting the benchmark: On methodology in current natural language processing research. arXiv:2007.04792.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Would you describe a leopard as yellow? Evaluating crowd-annotations with justified and informative disagreement",
"authors": [
{
"first": "Pia",
"middle": [],
"last": "Sommerauer",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4798--4809",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.422"
]
},
"num": null,
"urls": [],
"raw_text": "Pia Sommerauer, Antske Fokkens, and Piek Vossen. 2020. Would you describe a leopard as yellow? Evaluating crowd-annotations with justified and informative disagreement. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4798-4809, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {
"DOI": [
"10.1613/jair.2934"
]
},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141-188.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2979--2989",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1323"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "This flowchart serves as a summary of the annotation procedure detailed in section 4. The oval boxes contain the resulting annotations, green for agreed and pink for unresolved annotations. See https://github.com/YOortwijn/HumEvalDisRes to view the image separately.",
"uris": null,
"num": null
}
}
}
}