{
"paper_id": "J04-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:57:15.400806Z"
},
"title": "Squibs and Discussions",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Di",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect \u03ba and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of \u03ba. Second, we discuss how prevalence and bias affect the \u03ba measure.",
"pdf_parse": {
"paper_id": "J04-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect \u03ba and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of \u03ba. Second, we discuss how prevalence and bias affect the \u03ba measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last few years, coded corpora have acquired an increasing importance in every aspect of human-language technology. Tagging for many phenomena, such as dialogue acts (Carletta et al. 1997; Di Eugenio et al. 2000) , requires coders to make subtle distinctions among categories. The objectivity of these decisions can be assessed by evaluating the reliability of the tagging, namely, whether the coders reach a satisfying level of agreement when they perform the same coding task. Currently, the de facto standard for assessing intercoder agreement is the \u03ba coefficient, which factors out expected agreement (Cohen 1960; Krippendorff 1980) . \u03ba had long been used in content analysis and medicine (e.g., in psychiatry to assess how well students' diagnoses on a set of test cases agree with expert answers) (Grove et al. 1981) . Carletta (1996) deserves the credit for bringing \u03ba to the attention of computational linguists.",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "(Carletta et al. 1997;",
"ref_id": "BIBREF6"
},
{
"start": 195,
"end": 218,
"text": "Di Eugenio et al. 2000)",
"ref_id": "BIBREF9"
},
{
"start": 612,
"end": 624,
"text": "(Cohen 1960;",
"ref_id": "BIBREF8"
},
{
"start": 625,
"end": 643,
"text": "Krippendorff 1980)",
"ref_id": "BIBREF15"
},
{
"start": 810,
"end": 829,
"text": "(Grove et al. 1981)",
"ref_id": "BIBREF14"
},
{
"start": 832,
"end": 847,
"text": "Carletta (1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u03ba is computed as P(A) \u2212 P(E) 1 \u2212 P(E) , where P(A) is the observed agreement among the coders, and P(E) is the expected agreement, that is, P(E) represents the probability that the coders agree by chance. The values of \u03ba are constrained to the interval [\u22121, 1]. A \u03ba value of one means perfect agreement, a \u03ba value of zero means that agreement is equal to chance, and a \u03ba value of negative one means \"perfect\" disagreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This squib addresses two issues that have been neglected in the computational linguistics literature. First, there are two main ways of computing P(E), the expected agreement, according to whether the distribution of proportions over the categories is taken to be equal for the coders (Scott 1955; Fleiss 1971; Krippendorff 1980; Siegel and Castellan 1988) or not (Cohen 1960) . Clearly, the two approaches reflect different conceptualizations of the problem. We believe the distinction between the two is often glossed over because in practice the two computations of P(E) produce very similar outcomes in most cases, especially for the highest values of \u03ba. However, first, we will show that they can indeed result in different values of \u03ba, that we will call \u03ba Co (Cohen 1960) and \u03ba S&C (Siegel and Castellan 1988) . These different values can lead to contradictory conclusions on intercoder agreement. Moreover, the assumption of equal distributions over the categories masks the exact source of disagreement among the coders. Thus, such an assumption is detrimental if such systematic disagreements are to be used to improve the coding scheme (Wiebe, Bruce, and O'Hara 1999) .",
"cite_spans": [
{
"start": 285,
"end": 297,
"text": "(Scott 1955;",
"ref_id": "BIBREF17"
},
{
"start": 298,
"end": 310,
"text": "Fleiss 1971;",
"ref_id": "BIBREF12"
},
{
"start": 311,
"end": 329,
"text": "Krippendorff 1980;",
"ref_id": "BIBREF15"
},
{
"start": 330,
"end": 356,
"text": "Siegel and Castellan 1988)",
"ref_id": "BIBREF18"
},
{
"start": 364,
"end": 376,
"text": "(Cohen 1960)",
"ref_id": "BIBREF8"
},
{
"start": 765,
"end": 777,
"text": "(Cohen 1960)",
"ref_id": "BIBREF8"
},
{
"start": 788,
"end": 815,
"text": "(Siegel and Castellan 1988)",
"ref_id": "BIBREF18"
},
{
"start": 1146,
"end": 1177,
"text": "(Wiebe, Bruce, and O'Hara 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Second, \u03ba is affected by skewed distributions of categories (the prevalence problem) and by the degree to which the coders disagree (the bias problem). That is, for a fixed P(A), the values of \u03ba vary substantially in the presence of prevalence, bias, or both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We will conclude by suggesting that \u03ba Co is a better choice than \u03ba S&C in those studies in which the assumption of equal distributions underlying \u03ba S&C does not hold: the vast majority, if not all, of discourse-and dialogue-tagging efforts. However, as \u03ba Co suffers from the bias problem but \u03ba S&C does not, \u03ba S&C should be reported too, as well as a third measure that corrects for prevalence, as suggested in Byrt, Bishop, and Carlin (1993) .",
"cite_spans": [
{
"start": 411,
"end": 442,
"text": "Byrt, Bishop, and Carlin (1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "P(E) is the probability of agreement among coders due to chance. The literature describes two different methods for estimating a probability distribution for random assignment of categories. In the first, each coder has a personal distribution, based on that coder's distribution of categories (Cohen 1960) . In the second, there is one distribution for all coders, derived from the total proportions of categories assigned by all coders (Scott 1955; Fleiss 1971; Krippendorff 1980; Siegel and Castellan 1988) . 1 We now illustrate the computation of P(E) according to these two methods. We will then show that the resulting \u03ba Co and \u03ba S&C may straddle one of the significant thresholds used to assess the raw \u03ba values.",
"cite_spans": [
{
"start": 294,
"end": 306,
"text": "(Cohen 1960)",
"ref_id": "BIBREF8"
},
{
"start": 438,
"end": 450,
"text": "(Scott 1955;",
"ref_id": "BIBREF17"
},
{
"start": 451,
"end": 463,
"text": "Fleiss 1971;",
"ref_id": "BIBREF12"
},
{
"start": 464,
"end": 482,
"text": "Krippendorff 1980;",
"ref_id": "BIBREF15"
},
{
"start": 483,
"end": 509,
"text": "Siegel and Castellan 1988)",
"ref_id": "BIBREF18"
},
{
"start": 512,
"end": 513,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "The assumptions underlying these two methods are made tangible in the way the data are visualized, in a contingency table for Cohen, and in what we will call an agreement table for the others. Consider the following situation. Two coders 2 code 150 occurrences of Okay and assign to them one of the two labels Accept or Ack(nowledgement) (Allen and Core 1997) . The two coders label 70 occurrences as Accept, and another 55 as Ack. They disagree on 25 occurrences, which one coder labels as Ack, and the other as Accept. In Figure 1 , this example is encoded by the top contingency table on the left (labeled Example 1) and the agreement table on the right. The contingency table directly mirrors our description. The agreement table is an N \u00d7 m matrix, where N is the number of items in the data set and m is the number of labels that can be assigned to each object; in our example, N = 150 and m = 2. Each entry n ij is the number of codings of label j to item i. The agreement table in Figure 1 shows that occurrences 1 through 70 have been labeled as Accept by both coders, 71 through 125 as Ack by both coders, and 126 to 150 differ in their labels.",
"cite_spans": [
{
"start": 338,
"end": 359,
"text": "(Allen and Core 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 524,
"end": 532,
"text": "Figure 1",
"ref_id": null
},
{
"start": 989,
"end": 997,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "1 To be precise, Krippendorff uses a computation very similar to Siegel and Castellan's to produce a statistic called alpha. Krippendorff computes P(E) (called 1 \u2212 De in his terminology) with a sampling-without-replacement methodology. The computations of P(E) and of 1 \u2212 De show that the difference is negligible:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "P(E) = j i n ij Nk 2 (Siegel and Castellan) 1 \u2212 De = j i n ij Nk i n ij \u22121 Nk\u22121 (Krippendorff)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "2 Both \u03ba S&C (Scott 1955) and \u03ba Co (Cohen 1960) were originally devised for two coders. Each has been extended to more than two coders, for example, respectively Fleiss (1971) and Bartko and Carpenter (1976) . Thus, without loss of generality, our examples involve two coders. Agreement tables lose information. When the coders disagree, we cannot reconstruct which coder picked which category. Consider Example 2 in Figure 1 . The two coders still disagree on 25 occurrences of Okay. However, one coder now labels 10 of those as Accept and the remaining 15 as Ack, whereas the other labels the same 10 as Ack and the same 15 as Accept. The agreement table does not change, but the contingency table does.",
"cite_spans": [
{
"start": 13,
"end": 25,
"text": "(Scott 1955)",
"ref_id": "BIBREF17"
},
{
"start": 35,
"end": 47,
"text": "(Cohen 1960)",
"ref_id": "BIBREF8"
},
{
"start": 162,
"end": 175,
"text": "Fleiss (1971)",
"ref_id": "BIBREF12"
},
{
"start": 180,
"end": 207,
"text": "Bartko and Carpenter (1976)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Turning now to computing P(E), Figure 2 shows, for Example 1, Cohen's computation of P(E) on the left, and Siegel and Castellan's computation on the right. We include the computations of \u03ba Co and \u03ba S&C as the last step. For both Cohen and Siegel and Castellan, P(A) = 125/150 = 0.8333. The observed agreement P(A) is computed as the proportion of items the coders agree on to the total number of items; N is the number of items, and k the number of coders (N = 150 and k = 2 in our example). Both \u03ba Co and \u03ba S&C are highly significant at the p = 0.5 * 10 \u22125 level (significance is computed for \u03ba Co and \u03ba S&C according to the formulas in Cohen [1960] and Siegel and Castellan [1988] , respectively).",
"cite_spans": [
{
"start": 638,
"end": 650,
"text": "Cohen [1960]",
"ref_id": "BIBREF8"
},
{
"start": 655,
"end": 682,
"text": "Siegel and Castellan [1988]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "The difference between \u03ba Co and \u03ba S&C in Figure 2 is just under 1%, however, the results of the two \u03ba computations straddle the value 0.67, which for better or worse has been adopted as a cutoff in computational linguistics. This cutoff is based on the assessment of \u03ba values in Krippendorff (1980) , which discounts \u03ba < 0.67 and allows tentative conclusions when 0.67 \u2264 \u03ba < 0.8 and definite conclusions when \u03ba \u2265 0.8. Krippendorff's scale has been adopted without question, even though Krippendorff himself considers it only a plausible standard that has emerged from his and his colleagues' work. In fact, Carletta et al. (1997) use words of caution against adopting Krippendorff's suggestion as a standard; the first author has also raised the issue of how to assess \u03ba values in Di Eugenio (2000) .",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "Krippendorff (1980)",
"ref_id": "BIBREF15"
},
{
"start": 607,
"end": 629,
"text": "Carletta et al. (1997)",
"ref_id": "BIBREF6"
},
{
"start": 784,
"end": 798,
"text": "Eugenio (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "If Krippendorff's scale is supposed to be our standard, the example just worked out shows that the different computations of P(E) do affect the assessment of intercoder agreement. If less-strict scales are adopted, the discrepancies between the two \u03ba computations play a larger role, as they have a larger effect on smaller values of \u03ba. For example, Rietveld and van Hout (1993) Figure 4) ; \u03ba Co = 0.418, but \u03ba S&C = 0.27. These two values are really at odds. Assumption of different distributions among coders (Cohen) Step 1. For each category j, compute the overall proportion p j,l of items assigned to j by each coder l. In a contingency table, each row and column total divided by N corresponds to one such proportion for the corresponding coder. p Accept,1 = 95/150, p Ack,1 = 55/150, p Accept,2 = 70/150, p Ack,2 = 80/150 Assumption of equal distributions among coders (Siegel and Castellan) Step 1. For each category j, compute p j , the overall proportion of items assigned to j. In an agreement table, the column totals give the total counts for each category j, hence:",
"cite_spans": [
{
"start": 350,
"end": 378,
"text": "Rietveld and van Hout (1993)",
"ref_id": "BIBREF16"
},
{
"start": 511,
"end": 518,
"text": "(Cohen)",
"ref_id": null
},
{
"start": 876,
"end": 898,
"text": "(Siegel and Castellan)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 379,
"end": 388,
"text": "Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "p j = (1/Nk) \u00d7 \u03a3_i n_ij ; p Accept = 165/300 = 0.55, p Ack = 135/300 = 0.45",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 2. For a given item, the likelihood of both coders' independently agreeing on category j by chance, is p j,1 * p j,2 . p Accept,1 * p Accept,2 = 95/150 * 70/150 = 0.2956 p Ack,1 * p Ack,2 = 55/150 * 80/150 = 0.1956",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 2. For a given item, the likelihood of both coders' independently agreeing on category j by chance is p 2 j . p 2 Accept = 0.3025 p 2 Ack = 0.2025",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 3. P(E), the likelihood of coders' accidentally assigning the same category to a given item, is j p j,1 * p j,2 = 0.2956 + 0.1956 = 0.4912",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 3. P(E), the likelihood of coders' accidentally assigning the same category to a given item, is j p 2 j = 0.3025 + 0.2025 = 0.5050",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 4. \u03ba Co = (0.8333 \u2212 0.4912)/(1 \u2212 0.4912) = .3421/.5088=0.6724",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "Step 4. \u03ba S&C = (0.8333 \u2212 0.5050)/(1 \u2212 0.5050) = .3283/.4950 = 0.6632",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computation of P(E)",
"sec_num": "1."
},
{
"text": "The computation of P(E) and \u03ba according to Cohen (left) and to Siegel and Castellan (right) .",
"cite_spans": [
{
"start": 43,
"end": 55,
"text": "Cohen (left)",
"ref_id": null
},
{
"start": 63,
"end": 91,
"text": "Siegel and Castellan (right)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "In the computational linguistics literature, \u03ba has been used mostly to validate coding schemes: Namely, a \"good\" value of \u03ba means that the coders agree on the categories and therefore that those categories are \"real.\" We noted previously that assessing what constitutes a \"good\" value for \u03ba is problematic in itself and that different scales have been proposed. The problem is compounded by the following obvious effect on \u03ba values: If P(A) is kept constant, varying values for P(E) yield varying values of \u03ba. What can affect P(E) even if P(A) is constant are prevalence and bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unpleasant Behaviors of Kappa: Prevalence and Bias",
"sec_num": "2."
},
{
"text": "The prevalence problem arises because skewing the distribution of categories in the data increases P(E). The minimum value P(E) = 1/m occurs when the labels are equally distributed among the m categories (see Example 4 in Figure 3 ). The maximum value P(E) = 1 occurs when the labels are all concentrated in a single category. But for a given value of P(A), the larger the value of P(E), the lower the value of \u03ba.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unpleasant Behaviors of Kappa: Prevalence and Bias",
"sec_num": "2."
},
{
"text": "Example 3 and Example 4 in Figure 3 show two coders agreeing on 90 out of 100 occurrences of Okay, that is, P(A) = 0.9. However, \u03ba ranges from \u22120.048 to 0.80, and from not significant to significant (the values of \u03ba S&C for Examples 3 and 4 are the same as the values of \u03ba Co ). 3 The differences in \u03ba are due to the difference in the relative prevalence of the two categories Accept and Ack. In Example 3, the distribution is skewed, as there are 190 Accepts but only 10 Acks across the two coders; in Example 4, the distribution is even, as there are 100 Accepts and 100 Acks, respectively. These results do not depend on the size of the sample; that is, they are not due to the fact Example 4 Coder 2 Coder 1 Accept Ack Accept 45 5 50 Ack 5 45 50 50 50 100 P(A) = 0.90, P(E) = 0.5 \u03ba Co = \u03ba S&C = 0.80, p = 0.5 * 10 \u22125",
"cite_spans": [
{
"start": 279,
"end": 280,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 3",
"ref_id": null
},
{
"start": 686,
"end": 764,
"text": "Example 4 Coder 2 Coder 1 Accept Ack Accept 45 5 50 Ack 5 45 50 50",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unpleasant Behaviors of Kappa: Prevalence and Bias",
"sec_num": "2."
},
{
"text": "Contingency tables illustrating the prevalence effect on \u03ba. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Contingency tables illustrating the bias effect on \u03ba Co .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Example 3 and Example 4 are small. As the computations of P(A) and P(E) are based on proportions, the same distributions of categories in a much larger sample, say, 10,000 items, will result in exactly the same \u03ba values. Although this behavior follows squarely from \u03ba's definition, it is at odds with using \u03ba to assess a coding scheme. From both Example 3 and Example 4 we would like to conclude that the two coders are in substantial agreement, independent of the skewed prevalence of Accept with respect to Ack in Example 3. The role of prevalence in assessing \u03ba has been subject to heated discussion in the medical literature (Grove et al. 1981; Berry 1992; Goldman 1992) . The bias problem occurs in \u03ba Co but not \u03ba S&C . For \u03ba Co , P(E) is computed from each coder's individual probabilities. Thus, the less two coders agree in their overall behavior, the fewer chance agreements are expected. But for a given value of P(A), decreasing P(E) will increase \u03ba Co , leading to the paradox that \u03ba Co increases as the coders become less similar, that is, as the marginal totals diverge in the contingency table. Consider two coders coding the usual 100 occurrences of Okay, according to the two tables in Figure 4 . In Example 5, the proportions of each category are very similar among coders, at 55 versus 60 Accept, and 45 versus 40 Ack. However, in Example 6 coder 1 favors Accept much more than coder 2 (75 versus 40 occurrences) and conversely chooses Ack much less frequently (25 versus 60 occurrences). In both cases, P(A) is 0.65 and \u03ba S&C is stable at 0.27, but \u03ba Co goes from 0.27 to 0.418. Our initial example in Figure 1 is also affected by bias. The distribution in Example 1 yielded \u03ba Co = 0.6724 but \u03ba S&C = 0.6632. If the bias decreases as in Example 2, \u03ba Co becomes 0.6632, the same as \u03ba S&C .",
"cite_spans": [
{
"start": 629,
"end": 648,
"text": "(Grove et al. 1981;",
"ref_id": "BIBREF14"
},
{
"start": 649,
"end": 660,
"text": "Berry 1992;",
"ref_id": "BIBREF3"
},
{
"start": 661,
"end": 674,
"text": "Goldman 1992)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1203,
"end": 1211,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1622,
"end": 1630,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "The issue that remains open is which computation of \u03ba to choose. Siegel and Castellan's \u03ba S&C is not affected by bias, whereas Cohen's \u03ba Co is. However, it is questionable whether the assumption of equal distributions underlying \u03ba S&C is appropriate for coding in discourse and dialogue work. In fact, it appears to us that it holds in few if any of the published discourse-or dialogue-tagging efforts for which \u03ba has been computed. It is, for example, appropriate in situations in which item i may be tagged by different coders than item j (Fleiss 1971) . However, \u03ba assessments for discourse and dialogue tagging are most often performed on the same portion of the data, which has been annotated by each of a small number of annotators (between two and four). In fact, in many cases the analysis of systematic disagreements among annotators on the same portion of the data (i.e., of bias) can be used to improve the coding scheme (Wiebe, Bruce, and O'Hara 1999) .",
"cite_spans": [
{
"start": 541,
"end": 554,
"text": "(Fleiss 1971)",
"ref_id": "BIBREF12"
},
{
"start": 932,
"end": 963,
"text": "(Wiebe, Bruce, and O'Hara 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3."
},
{
"text": "To use \u03ba Co but to guard against bias, Cicchetti and Feinstein (1990) suggest that \u03ba Co be supplemented, for each coding category, by two measures of agreement, positive and negative, between the coders. This means a total of 2m additional measures, which we believe are too many to gain a general insight into the meaning of the specific \u03ba Co value. Alternatively, Byrt, Bishop, and Carlin (1993) suggest that intercoder reliability be reported as three numbers: \u03ba Co and two adjustments of \u03ba Co , one with bias removed, the other with prevalence removed. The value of \u03ba Co adjusted for bias turns out to be . . . \u03ba S&C . Adjusted for prevalence, \u03ba Co yields a measure that is equal to 2P(A) \u2212 1.",
"cite_spans": [
{
"start": 39,
"end": 69,
"text": "Cicchetti and Feinstein (1990)",
"ref_id": "BIBREF7"
},
{
"start": 366,
"end": 397,
"text": "Byrt, Bishop, and Carlin (1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3."
},
{
"text": "The results for Example 1 should then be reported as \u03ba Co = 0.6724, \u03ba S&C = 0.6632, 2P(A)\u22121 = 0.6666; those for Example 6 as \u03ba Co = 0.418, \u03ba S&C = 0.27, and 2P(A)\u22121 = 0.3. For both Examples 3 and 4, 2P(A) \u2212 1 = 0.8. Collectively, these three numbers appear to provide a means of better judging the meaning of \u03ba values. Reporting both \u03ba and 2P(A) \u2212 1 may seem contradictory, as 2P(A) \u2212 1 does not correct for expected agreement. However, when the distribution of categories is skewed, this highlights the effect of prevalence. Reporting both \u03ba Co and \u03ba S&C does not invalidate our previous discussion, as we believe \u03ba Co is more appropriate for discourse-and dialogue-tagging in the majority of cases, especially when exploiting bias to improve coding (Wiebe, Bruce, and O'Hara 1999) .",
"cite_spans": [
{
"start": 751,
"end": 782,
"text": "(Wiebe, Bruce, and O'Hara 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3."
},
{
"text": "We are not including agreement tables for the sake of brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Di Eugenio and GlassKappa: A Second Look",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by grant N00014-00-1-0640 from the Office of Naval Research. Thanks to Janet Cahn and to the anonymous reviewers for comments on earlier drafts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "DAMSL: Dialog act markup in several layers",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Core",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, James and Mark Core. 1997. DAMSL: Dialog act markup in several layers;",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Coding scheme developed by the participants at two discourse tagging workshops",
"authors": [],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coding scheme developed by the participants at two discourse tagging workshops, University of Pennsylvania, March 1996, and Schlo\u00df Dagstuhl, February 1997. Draft.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the methods and theory of reliability",
"authors": [
{
"first": "John",
"middle": [
"J"
],
"last": "Bartko",
"suffix": ""
},
{
"first": "William",
"middle": [
"T"
],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of Nervous and Mental Disease",
"volume": "163",
"issue": "5",
"pages": "307--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bartko, John J. and William T. Carpenter. 1976. On the methods and theory of reliability. Journal of Nervous and Mental Disease, 163(5):307-317.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The \u03ba statistic",
"authors": [
{
"first": "Charles",
"middle": [
"C"
],
"last": "Berry",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of the American Medical Association",
"volume": "268",
"issue": "18",
"pages": "2513--2514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berry, Charles C. 1992. The \u03ba statistic [letter to the editor]. Journal of the American Medical Association, 268(18):2513-2514.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bias, prevalence, and kappa",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Byrt",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Bishop",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Carlin",
"suffix": ""
}
],
"year": 1993,
"venue": "Journal of Clinical Epidemiology",
"volume": "46",
"issue": "5",
"pages": "423--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byrt, Ted, Janet Bishop, and John B. Carlin. 1993. Bias, prevalence, and kappa. Journal of Clinical Epidemiology, 46(5):423-429.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Assessing agreement on classification tasks: The Kappa statistic",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carletta, Jean. 1996. Assessing agreement on classification tasks: The Kappa statistic. Computational Linguistics, 22(2):249-254.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The reliability of a dialogue structure coding scheme",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [
"C"
],
"last": "Kowtko",
"suffix": ""
},
{
"first": "Gwyneth",
"middle": [],
"last": "Doherty-Sneddon",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"H"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Lingustics",
"volume": "23",
"issue": "1",
"pages": "13--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carletta, Jean, Amy Isard, Stephen Isard, Jacqueline C. Kowtko, Gwyneth Doherty-Sneddon, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Lingustics, 23(1):13-31.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "High agreement but low kappa: II. Resolving the paradoxes",
"authors": [
{
"first": "Domenic",
"middle": [
"V"
],
"last": "Cicchetti",
"suffix": ""
},
{
"first": "Alvan",
"middle": [
"R"
],
"last": "Feinstein",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Clinical Epidemiology",
"volume": "43",
"issue": "6",
"pages": "551--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicchetti, Domenic V. and Alvan R. Feinstein. 1990. High agreement but low kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43(6):551-558.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the usage of Kappa to evaluate agreement on coding tasks",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Di Eugenio",
"suffix": ""
}
],
"year": 2000,
"venue": "LREC2000: Proceedings of the Second International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "441--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di Eugenio, Barbara. 2000. On the usage of Kappa to evaluate agreement on coding tasks. In LREC2000: Proceedings of the Second International Conference on Language Resources and Evaluation, pages 441-444, Athens.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The agreement process: An empirical investigation of human-human computer-mediated collaborative dialogues",
"authors": [
{
"first": "",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "International Journal of Human Computer Studies",
"volume": "53",
"issue": "6",
"pages": "1017--1076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore. 2000. The agreement process: An empirical investigation of human-human computer-mediated collaborative dialogues. International Journal of Human Computer Studies, 53(6):1017-1076.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The \u03ba statistic",
"authors": [
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Goldman",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of the American Medical Association",
"volume": "268",
"issue": "18",
"pages": "2513--2514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldman, Ronald L. 1992. The \u03ba statistic [letter to the editor (in reply)]. Journal of the American Medical Association, 268(18):2513-2514.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reliability studies of psychiatric diagnosis: Theory and practice",
"authors": [
{
"first": "William",
"middle": [
"M"
],
"last": "Grove",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"C"
],
"last": "Andreasen",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "McDonald-Scott",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"B"
],
"last": "Keller",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"W"
],
"last": "Shapiro",
"suffix": ""
}
],
"year": 1981,
"venue": "Archives of General Psychiatry",
"volume": "38",
"issue": "",
"pages": "408--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grove, William M., Nancy C. Andreasen, Patricia McDonald-Scott, Martin B. Keller, and Robert W. Shapiro. 1981. Reliability studies of psychiatric diagnosis: Theory and practice. Archives of General Psychiatry, 38:408-413.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Content Analysis: An Introduction to Its Methodology",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krippendorff, Klaus. 1980. Content Analysis: An Introduction to Its Methodology. Sage Publications, Beverly Hills, CA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical Techniques for the Study of Language and Language Behaviour",
"authors": [
{
"first": "Toni",
"middle": [],
"last": "Rietveld",
"suffix": ""
},
{
"first": "Roeland",
"middle": [],
"last": "Van Hout",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rietveld, Toni and Roeland van Hout. 1993. Statistical Techniques for the Study of Language and Language Behaviour. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Reliability of content analysis: The case of nominal scale coding",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Scott",
"suffix": ""
}
],
"year": 1955,
"venue": "Public Opinion Quarterly",
"volume": "19",
"issue": "",
"pages": "127--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott, William A. 1955. Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19:127-141.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Nonparametric Statistics for the Behavioral Sciences",
"authors": [
{
"first": "Sidney",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "N. John",
"middle": [],
"last": "Castellan",
"suffix": "Jr."
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siegel, Sidney and N. John Castellan, Jr. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw Hill, Boston.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Development and use of a gold-standard data set for subjectivity classifications",
"authors": [
{
"first": "Janyce",
"middle": [
"M"
],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"F"
],
"last": "Bruce",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"P"
],
"last": "O'Hara",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL99: Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "246--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, Janyce M., Rebecca F. Bruce, and Thomas P. O'Hara. 1999. Development and use of a gold-standard data set for subjectivity classifications. In ACL99: Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 246-253, College Park, MD.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "consider 0.20 < \u03ba \u2264 0.40 as indicating fair agreement, and 0.40 < \u03ba \u2264 0.60 as indicating moderate agreement. Suppose that two coders are coding 100 occurrences of Okay. The two coders label 40 occurrences as Accept and 25 as Ack. The remaining 35 are labeled as Ack by one coder and as Accept by the other (as in Example 6 in",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "P(A) = 0.90, P(E) = 0.905, \u03ba Co = \u03ba S&C = \u22120.048, p = 1",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "P(A) = 0.65, P(E) = 0.52, \u03ba Co = 0.27, p = 0; P(A) = 0.65, P(E) = 0.45, \u03ba Co = 0.418, p = 0.5 * 10 \u22125",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}