| { |
| "paper_id": "P09-1032", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:52:56.199509Z" |
| }, |
| "title": "Learning with Annotation Noise", |
| "authors": [ |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Beigman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Washington University in St. Louis", |
| "location": {} |
| }, |
| "email": "beigman@wustl.edu" |
| }, |
| { |
| "first": "Beata", |
| "middle": [ |
| "Beigman" |
| ], |
| "last": "Klebanov", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "It is usually assumed that the kind of noise existing in annotated data is random classification noise. Yet there is evidence that differences between annotators are not always random attention slips but could result from different biases towards the classification categories, at least for the harder-to-decide cases. Under an annotation generation model that takes this into account, there is a hazard that some of the training instances are actually hard cases with unreliable annotations. We show that these are relatively unproblematic for an algorithm operating under the 0-1 loss model, whereas for the commonly used voted perceptron algorithm, hard training cases could result in incorrect prediction on the uncontroversial cases at test time.", |
| "pdf_parse": { |
| "paper_id": "P09-1032", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "It is usually assumed that the kind of noise existing in annotated data is random classification noise. Yet there is evidence that differences between annotators are not always random attention slips but could result from different biases towards the classification categories, at least for the harder-to-decide cases. Under an annotation generation model that takes this into account, there is a hazard that some of the training instances are actually hard cases with unreliable annotations. We show that these are relatively unproblematic for an algorithm operating under the 0-1 loss model, whereas for the commonly used voted perceptron algorithm, hard training cases could result in incorrect prediction on the uncontroversial cases at test time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "It is assumed, often tacitly, that the kind of noise existing in human-annotated datasets used in computational linguistics is random classification noise (Kearns, 1993; Angluin and Laird, 1988) , resulting from annotator attention slips randomly distributed across instances. For example, Osborne (2002) evaluates noise tolerance of shallow parsers, with random classification noise taken to be \"crudely approximating annotation errors.\" It has been shown, both theoretically and empirically, that this type of noise is tolerated well by the commonly used machine learning algorithms (Cohen, 1997; Blum et al., 1996; Osborne, 2002; Reidsma and Carletta, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 169, |
| "text": "(Kearns, 1993;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 170, |
| "end": 194, |
| "text": "Angluin and Laird, 1988)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 290, |
| "end": 304, |
| "text": "Osborne (2002)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 585, |
| "end": 598, |
| "text": "(Cohen, 1997;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 599, |
| "end": 617, |
| "text": "Blum et al., 1996;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 618, |
| "end": 632, |
| "text": "Osborne, 2002;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 633, |
| "end": 660, |
| "text": "Reidsma and Carletta, 2008)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Yet this might be overly optimistic. Reidsma and op den Akker (2008) show that apparent differences between annotators are not random slips of attention but rather result from different biases annotators might have towards the classification categories. When training data comes from one annotator and test data from another, the first annotator's biases are sometimes systematic enough for a machine learner to pick them up, with detrimental results for the algorithm's performance on the test data. A small subset of doubly annotated data (for inter-annotator agreement check) and large chunks of singly annotated data (for training algorithms) are not uncommon in computational linguistics datasets; such a setup is prone to problems if annotators are differently biased. 1 Annotator bias is consistent with a number of noise models. For example, it could be that an annotator's bias is exercised on each and every instance, making his preferred category likelier for any instance than in another person's annotations. Another possibility, recently explored by Beigman Klebanov and Beigman (2009) , is that some items are really quite clear-cut for an annotator with any bias, belonging squarely within one particular category. However, some instances -termed hard cases therein -are harder to decide upon, and this is where various preferences and biases come into play. In a metaphor annotation study reported by Beigman Klebanov et al. (2008) , certain markups received overwhelming annotator support when people were asked to validate annotations after a certain time delay. Other instances saw opinions split; moreover, Beigman Klebanov et al. (2008) observed cases where people retracted their own earlier annotations.",
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 68, |
| "text": "Reidsma and op den Akker (2008)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 774, |
| "end": 775, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 1071, |
| "end": 1098, |
| "text": "Klebanov and Beigman (2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1425, |
| "end": 1447, |
| "text": "Klebanov et al. (2008)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1635, |
| "end": 1657, |
| "text": "Klebanov et al. (2008)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "To start accounting for such annotator behavior, Beigman Klebanov and Beigman (2009) proposed a model where instances are either easy, and then all annotators agree on them, or hard, and then each annotator flips his or her own coin to decide on a label (each annotator can have a different \"coin\" reflecting his or her biases). For annotations generated under such a model, there is a danger of hard instances posing as easy -an observed agreement between annotators being a result of all coins coming up heads by chance. They therefore define the expected proportion of hard instances in agreed items as annotation noise. They provide an example from the literature where an annotation noise rate of about 15% is likely.",
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 84, |
| "text": "Klebanov and Beigman (2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The question addressed in this article is: How problematic is learning from training data with annotation noise? Specifically, we are interested in estimating the degree to which performance on easy instances at test time can be hurt by the presence of hard instances in training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Definition 1 The hard case bias, \u03c4 , is the portion of easy instances in the test data that are misclassified as a result of hard instances in the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "This article proceeds as follows. First, we show that a machine learner operating under a 0-1 loss minimization principle could sustain a hard case bias of \u03b8(1/\u221aN) in the worst case. Thus, while annotation noise is hazardous for small datasets, it is better tolerated in larger ones. However, 0-1 loss minimization is computationally intractable for large datasets (Feldman et al., 2006; Guruswami and Raghavendra, 2006); substitute loss functions are often used in practice. While their tolerance to random classification noise is as good as that of 0-1 loss, their tolerance to annotation noise is worse. For example, the perceptron family of algorithms handles random classification noise well (Cohen, 1997). We show in section 3.4 that the widely used Freund and Schapire (1999) voted perceptron algorithm could face a constant hard case bias when confronted with annotation noise in training data, irrespective of the size of the dataset. Finally, we discuss the implications of our findings for the practice of annotation studies and for data utilization in machine learning.",
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 390, |
| "text": "(Feldman et al., 2006;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 391, |
| "end": 423, |
| "text": "Guruswami and Raghavendra, 2006)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 696, |
| "end": 709, |
| "text": "(Cohen, 1997)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 756, |
| "end": 782, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Let a sample be a sequence x_1, . . . , x_N drawn uniformly from the d-dimensional discrete cube",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
"text": "I^d = {\u22121, 1}^d with corresponding labels y_1, . . . , y_N \u2208 {\u22121, 1}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Suppose further that the learning algorithm operates by finding a hyperplane (w, \u03c8),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
"text": "w \u2208 R^d, \u03c8 \u2208 R, that minimizes the empirical error L(w, \u03c8) = \u2211_{j=1}^{N} [y_j \u2212 sgn(\u2211_{i=1}^{d} x_j^i w_i \u2212 \u03c8)]^2. Let there be H hard cases, such that the annotation noise is \u03b3 = H/N.\u00b2 Theorem 1 In the worst case configuration of instances a hard case bias of \u03c4 = \u03b8(1/\u221aN) cannot be ruled out with constant confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "Idea of the proof : We prove by explicit construction of an adversarial case. Suppose there is a plane that perfectly separates the easy instances. The \u03b8(N) hard instances will be concentrated in a band parallel to the separating plane that is near enough to the plane so as to trap only about \u03b8(\u221aN) easy instances between the plane and the band (see figure 1 for an illustration). For a random labeling of the hard instances, the central limit theorem shows there is positive probability that there would be an imbalance between +1 and \u22121 labels, in favor of \u22121s, on the scale of \u221aN, which, with appropriate constants, would lead to the movement of the empirically minimal separation plane to the right of the hard case band, misclassifying the trapped easy cases.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 356, |
| "end": 364, |
| "text": "figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Proof :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
"text": "Let v = v(x) = \u2211_{i=1}^{d} x^i denote the sum of the coordinates of an instance in I^d and take",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "\u03bb_e = \u221ad \u2022 F^{\u22121}(\u221a\u03b3 \u2022 2^{\u2212d/2} + 1/2) and \u03bb_h = \u221ad \u2022 F^{\u22121}(\u03b3 + \u221a\u03b3 \u2022 2^{\u2212d/2} + 1/2), where F(t)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
"text": "is the cumulative distribution function of the normal distribution. Suppose further that instances x_j such that \u03bb_e < v_j < \u03bb_h are all and only hard instances; their labels are coin flips. All other instances are easy, and labeled y = y(x) = sgn(v). In this case, the hyperplane (1/\u221ad)(1 . . . 1) is the true separation plane for the easy instances, with \u03c8 = 0. Figure 1 shows this configuration.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 363, |
| "end": 371, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
"text": "According to the central limit theorem, for d, N large, the distribution of v is well approximated by N(0, \u221ad). If N = c_1 \u2022 2^d, for some 0 < c_1 < 4, a second application of the central limit theorem ensures that, with high probability, about \u03b3N = c_1\u03b32^d items would fall between \u03bb_e and \u03bb_h (all hard), and \u221a\u03b3 \u2022 2^{\u2212d/2} \u2022 N = c_1\u221a(\u03b32^d) would fall between 0 and \u03bb_e (all easy, all labeled +1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "Let Z be the sum of the labels of the hard cases, Z = \u2211_{i=1}^{H} y_i. Applying the central limit theorem a third time, for large N, Z will, with high probability, be distributed approximately as N(0, \u221a(\u03b3N)). [Figure 1: The adversarial case for 0-1 loss. Squares correspond to easy instances, circles to hard ones. Filled squares and circles are labeled \u22121, empty ones are labeled +1.]",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "This implies that a value as low as \u22122\u03c3 cannot be ruled out with high (say 95%) confidence. Thus, an imbalance of up to 2\u221a(\u03b3N), or of 2\u221ac_1 \u2022 \u221a(\u03b32^d), in favor of \u22121s is possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "There are between 0 and \u03bb_h about 2\u221ac_1 \u2022 \u221a(\u03b32^d) more \u22121 hard instances than +1 hard instances, as opposed to c_1\u221a(\u03b32^d) easy instances that are all +1. As long as c_1 < 2\u221ac_1, i.e. c_1 < 4, the empirically minimal threshold would move to \u03bb_h, resulting in a hard case bias of \u03c4 = \u221a\u03b3 \u2022 \u221a(c_1 2^d) / ((1\u2212\u03b3) \u2022 c_1 2^d) = \u03b8(1/\u221aN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "To see that this is the worst case scenario, we note that the 0-1 loss sustained on \u03b8(N) hard cases is of the order of magnitude of the possible imbalance between \u22121 and +1 random labels, which is \u03b8(\u221aN). For the hard case loss to outweigh the loss on the misclassified easy instances, there cannot be more than \u03b8(\u221aN) of the latter. [Footnote 2: Note that the proof requires that N = \u03b8(2^d), namely, that asymptotically the sample includes a fixed portion of the instances. If the sample is asymptotically smaller, then \u03bb_e will have to be adjusted such that \u03bb_e = \u221ad \u2022 F^{\u22121}(\u03b8(1/\u221aN) + 1/2).] According to theorem 1, for a 10K dataset with a 15% hard case rate, a hard case bias of about 1% cannot be ruled out with 95% confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0-1 Loss",
"sec_num": "2"
},
| { |
"text": "Theorem 1 suggests that annotation noise as defined here is qualitatively different from more malicious types of noise analyzed in the agnostic learning framework (Kearns and Li, 1988; Haussler, 1992; Kearns et al., 1994), where an adversary can not only choose the placement of the hard cases, but also their labels. In the worst case, the 0-1 loss model would sustain a constant rate of error due to malicious noise, whereas annotation noise is tolerated quite well in large datasets.",
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 184, |
| "text": "(Kearns and Li, 1988;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 185, |
| "end": 200, |
| "text": "Haussler, 1992;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 201, |
| "end": 221, |
| "text": "Kearns et al., 1994)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "0-1 Loss", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Freund and Schapire (1999) describe the voted perceptron. This algorithm and its many variants are widely used in the computational linguistics community (Collins, 2002a; Collins and Duffy, 2002; Collins, 2002b; Collins and Roark, 2004; Henderson and Titov, 2005; Viola and Narasimhan, 2005; Cohen et al., 2004; Carreras et al., 2005; Shen and Joshi, 2005; Ciaramita and Johnson, 2003) . In this section, we show that the voted perceptron can be vulnerable to annotation noise. The algorithm is shown below. ", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 170, |
| "text": "(Collins, 2002a;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 171, |
| "end": 195, |
| "text": "Collins and Duffy, 2002;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 196, |
| "end": 211, |
| "text": "Collins, 2002b;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 212, |
| "end": 236, |
| "text": "Collins and Roark, 2004;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 237, |
| "end": 263, |
| "text": "Henderson and Titov, 2005;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 264, |
| "end": 291, |
| "text": "Viola and Narasimhan, 2005;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 292, |
| "end": 311, |
| "text": "Cohen et al., 2004;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 312, |
| "end": 334, |
| "text": "Carreras et al., 2005;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 335, |
| "end": 356, |
| "text": "Shen and Joshi, 2005;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 357, |
| "end": 385, |
| "text": "Ciaramita and Johnson, 2003)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Voted Perceptron", |
| "sec_num": "3" |
| }, |
| { |
"text": "Initialize: t \u2190 0; w_1 \u2190 0; \u03c8_1 \u2190 0. For t = 1 . . . N do: \u0177_t \u2190 sign(\u27e8w_t, x_t\u27e9 + \u03c8_t); w_{t+1} \u2190 w_t + ((y_t \u2212 \u0177_t)/2) \u2022 x_t; \u03c8_{t+1} \u2190 \u03c8_t + ((y_t \u2212 \u0177_t)/2) \u2022 \u27e8w_t, x_t\u27e9; end for. Forecasting. Input: a list of perceptrons w_1, . . . , w_N and an unlabeled instance x. Output: a forecasted label y. \u0177 \u2190 \u2211_{t=1}^{N} sign(\u27e8w_t, x\u27e9 + \u03c8_t); y \u2190 sign(\u0177).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Voted Perceptron", |
| "sec_num": null |
| }, |
| { |
"text": "The voted perceptron algorithm is a refinement of the perceptron algorithm (Rosenblatt, 1962; Minsky and Papert, 1969) . Perceptron is a dynamic algorithm; starting with an initial hyperplane w_0, it passes repeatedly through the labeled sample. Whenever an instance is misclassified by w_t, the hyperplane is modified to adapt to the instance. The algorithm terminates once it has passed through the sample without making any classification mistakes. Thus the algorithm terminates iff the sample can be separated by a hyperplane, and in this case it finds a separating hyperplane. Novikoff (1962) gives a bound on the number of iterations the algorithm goes through before termination, when the sample is separable by a margin.",
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 93, |
| "text": "(Rosenblatt, 1962;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 94, |
| "end": 118, |
| "text": "Minsky and Papert, 1969)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 590, |
| "end": 605, |
| "text": "Novikoff (1962)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Voted Perceptron", |
| "sec_num": null |
| }, |
| { |
"text": "The perceptron algorithm is vulnerable to noise, as even a little noise could make the sample inseparable. In this case the algorithm would cycle indefinitely, never meeting the termination conditions; w_t would take values within a certain dynamic range but would not converge. In such a setting, imposing a stopping time would be equivalent to drawing a random vector from the dynamic range. Freund and Schapire (1999) extend the perceptron to inseparable samples with their voted perceptron algorithm and give theoretical generalization bounds for its performance. The basic idea underlying the algorithm is that if the dynamic range of the perceptron is not too large, then w_t would classify most instances correctly most of the time (for most values of t). Thus, for a sample x_1, . . . , x_N the new algorithm would keep track of w_0, . . . , w_N, and for an unlabeled instance x it would forecast the classification most prominent amongst these hyperplanes.",
| "cite_spans": [ |
| { |
| "start": 389, |
| "end": 415, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Voted Perceptron", |
| "sec_num": null |
| }, |
| { |
"text": "The bounds given by Freund and Schapire (1999) depend on the hinge loss of the dataset. In section 3.2 we construct a difficult setting for this algorithm. To prove that the voted perceptron would suffer from a constant hard case bias in this setting using the exact dynamics of the perceptron is beyond the scope of this article. Instead, in section 3.3 we provide a lower bound on the hinge loss for a simplified model of the perceptron algorithm dynamics, which we argue would be a good approximation to the true dynamics in the setting we constructed. For this simplified model, we show that the hinge loss is large, and the bounds in Freund and Schapire (1999) cannot rule out a constant level of error regardless of the size of the dataset. In section 3.4 we study the dynamics of the model and prove that \u03c4 = \u03b8(1) for the adversarial setting.",
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 46, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 635, |
| "end": 661, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Voted Perceptron", |
| "sec_num": null |
| }, |
| { |
"text": "Definition 2 The hinge loss of a labeled instance (x, y) with respect to hyperplane (w, \u03c8) and margin \u03b4 > 0 is given by \u03b6 = \u03b6(\u03c8, \u03b4) = max(0, \u03b4 \u2212 y \u2022 (\u27e8w, x\u27e9 \u2212 \u03c8)). \u03b6 measures the distance of an instance from being classified correctly with a \u03b4 margin. Figure 2 shows examples of hinge loss for various data points.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 259, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hinge Loss", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Theorem 2 (Freund and Schapire (1999) ) After one pass on the sample, the probability that the voted perceptron algorithm does not predict correctly the label of a test instance x_{N+1} is bounded by (2/(N+1)) \u2022 E_{N+1}[((\u221ad + D)/\u03b4)^2], where D = D(w, \u03c8, \u03b4) = \u221a(\u2211_{i=1}^{N} \u03b6_i^2).",
"cite_spans": [
{
"start": 10,
"end": 37,
"text": "(Freund and Schapire (1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hinge Loss",
"sec_num": "3.1"
},
| { |
"text": "This result is used to explain the convergence of weighted or voted perceptron algorithms (Collins, 2002a). It is useful as long as the expected value of D is not too large. We show that in an adversarial setting of annotation noise D is large, and hence these bounds are trivial.",
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 106, |
| "text": "(Collins, 2002a)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hinge Loss", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Let a sample be a sequence x_1, . . . , x_N drawn uniformly from I^d with y_1, . . . , y_N \u2208 {\u22121, 1}. Easy cases are labeled y = y(x) = sgn(v) as before, with v = v(x) = \u2211_{i=1}^{d} x^i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Annotation Noise",
"sec_num": "3.2"
},
| { |
"text": "The true separation plane for the easy instances is w* = (1/\u221ad)(1 . . . 1), \u03c8* = 0. Suppose hard cases are those where v(x) > c_1\u221ad, where c_1 is chosen so that the hard instances account for \u03b3N of all instances.\u00b3 Figure 3 shows this setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Annotation Noise",
"sec_num": "3.2"
},
| { |
"text": "In the simplified case, we assume that the algorithm starts training with the hyperplane w_0 = w* = (1/\u221ad)(1 . . . 1), and keeps it throughout the training, only updating \u03c8. In reality, each hard instance can be decomposed into a component that is parallel to w*, and a component that is orthogonal to it. The expected contribution of the orthogonal component to the algorithm's update will be positive due to the systematic positioning of the hard cases, while the contributions of the parallel components are expected to cancel out due to the symmetry of the hard cases around the main diagonal that is orthogonal to w*. Thus, while w_t will not necessarily be parallel to w*, it will be close to parallel for most t > 0. The simplified case is thus a good approximation of the real case, and the bound we obtain is expected to hold for the real case as well. [Figure 3: An adversarial case of annotation noise for the voted perceptron algorithm.]",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 359, |
| "end": 367, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "For any initial value \u03c8_0 < 0, all misclassified instances are labeled \u22121 and classified as +1; hence the updates will increase \u03c8, which will reach 0 soon enough. We can therefore assume that \u03c8_t \u2265 0 for any t > t_0, where t_0 \u226a N.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Lemma 3 For any t > t_0, there exists \u03b1 = \u03b1(\u03b3, T) > 0 such that E(\u03b6^2) \u2265 \u03b1 \u2022 \u03b4^2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Proof : For \u03c8 \u2265 0 there are two main sources of hinge loss: easy +1 instances that are classified as \u22121, and hard \u22121 instances classified as +1. These correspond to the two components of the following sum (the inequality is due to disregarding the loss incurred by instances classified correctly but with too small a margin):",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "E(\u03b6^2) \u2265 \u2211_{l=0}^{[\u03c8]} (1/2^d) (d choose l) (\u03c8/\u221ad \u2212 l/\u221ad + \u03b4)^2 + (1/2) \u2211_{l=c_1\u221ad}^{d} (1/2^d) (d choose l) (l/\u221ad \u2212 \u03c8/\u221ad + \u03b4)^2",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Let 0 < T < c_1 be a parameter. For \u03c8 > T\u221ad,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bound on Hinge Loss",
"sec_num": "3.3"
},
| { |
| "text": "misclassified easy instances dominate the loss:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "E(\u03b6^2) \u2265 \u2211_{l=0}^{[\u03c8]} (1/2^d) (d choose l) (\u03c8/\u221ad \u2212 l/\u221ad + \u03b4)^2 \u2265 \u2211_{l=0}^{[T\u221ad]} (1/2^d) (d choose l) (T \u2212 l/\u221ad + \u03b4)^2 \u2265 (1/\u221a(2\u03c0)) \u222b_0^T (T + \u03b4 \u2212 t)^2 e^{\u2212t^2/2} dt = H_T(\u03b4)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The last inequality follows from a normal approximation of the binomial distribution (see, for example, Feller (1968) ).", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 117, |
| "text": "Feller (1968)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For 0 \u2264 \u03c8 \u2264 T \u221a d, misclassified hard cases dominate:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "E(\u03b6 2 ) \u2265 1 2 d l=c 1 \u221a d 1 2 d d l ( l \u221a d \u2212 \u03c8 \u221a d + \u03b4) 2 \u2265 1 2 d l=c 1 \u221a d 1 2 d d l ( l \u221a d \u2212 T \u221a d \u221a d + \u03b4) 2 \u2265 1 2 \u2022 1 \u221a 2\u03c0 \u221e \u03a6 \u22121 (\u03b3) (t \u2212 T + \u03b4) 2 e \u2212t 2 /2 dt = H \u03b3 (\u03b4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where \u03a6 \u22121 (\u03b3) is the inverse of the normal distribution density. Thus E(\u03b6 2 ) \u2265 min{H T (\u03b4), H \u03b3 (\u03b4)}, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "there exists \u03b1 = \u03b1(\u03b3, T ) > 0 such that min{H T (\u03b4), H \u03b3 (\u03b4)} \u2265 \u03b1 \u2022 \u03b4 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
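The two integral bounds can be evaluated numerically to see that their minimum stays bounded away from zero. A small sanity-check sketch (ours, not the paper's; the values of T, γ, and the upper integration cutoff are arbitrary illustrative choices):

```python
import math
from statistics import NormalDist

SQRT_2PI = math.sqrt(2 * math.pi)

def _trapz(f, a, b, n=20_000):
    # simple trapezoidal quadrature of f on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def H_T(delta, T=1.0):
    # H_T(delta) = (1/sqrt(2*pi)) * int_0^T (T + delta - t)^2 e^{-t^2/2} dt
    return _trapz(lambda t: (T + delta - t) ** 2 * math.exp(-t * t / 2), 0.0, T) / SQRT_2PI

def H_gamma(delta, gamma=0.1, T=1.0, upper=10.0):
    # H_gamma(delta) = (1/2)(1/sqrt(2*pi)) * int_{Phi^{-1}(gamma)}^{inf} (t - T + delta)^2 e^{-t^2/2} dt
    lo = NormalDist().inv_cdf(gamma)  # inverse of the standard normal CDF
    return 0.5 * _trapz(lambda t: (t - T + delta) ** 2 * math.exp(-t * t / 2), lo, upper) / SQRT_2PI

# both bounds are strictly positive, so their minimum admits a positive lower bound
for delta in (0.01, 0.1, 0.5):
    print(delta, H_T(delta), H_gamma(delta))
```

H_T grows with δ because its integrand (T + δ − t)² is pointwise increasing in δ on [0, T].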
| { |
| "text": "Corollary 4 The bound in theorem 2 does not converge to zero for large N .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We recall that Freund and Schapire (1999) bound is proportional to D 2 = N i=1 \u03b6 2 i . It follows from lemma 3 that D 2 = \u03b8(N ), hence the bound is ineffective.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 41, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on Hinge Loss", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Under Simplified Dynamics Corollary 4 does not give an estimate on the hard case bias. Indeed, it could be that w t = w * for almost every t. There would still be significant hinge in this case, but the hard case bias for the voted forecast would be zero. To assess the hard case bias we need a model of perceptron dynamics that would account for the history of hyperplanes w 0 , . . . , w N the perceptron goes through on a sample x 1 , . . . , x N . The key simplification in our model is assuming that w t parallels w * for all t, hence the next hyperplane depends only on the offset \u03c8 t . This is a one dimensional Markov random walk governed by the distribution", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "P(\u03c8 t+1 \u2212\u03c8 t = r|\u03c8 t ) = P(x| y t \u2212\u0177 t 2 \u2022 w * , x = r)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In general \u2212d \u2264 \u03c8 t \u2264 d but as mentioned before lemma 3, we may assume \u03c8 t > 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Lemma 5 There exists c > 0 such that with a high probability \u03c8 t > c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u2022 \u221a d for most 0 \u2264 t \u2264 N .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Proof : Let c 0 = F \u22121 ( \u03b3 2 + 1 2 ); c 1 = F \u22121 (1\u2212\u03b3). We designate the intervals", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "I 0 = [0, c 0 \u2022 \u221a d]; I 1 = [c 0 \u2022 \u221a d, c 1 \u2022 \u221a d] and I 2 = [c 1 \u2022 \u221a d, d] and define A i = {x : v(x) \u2208 I i } for i = 0, 1, 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Note that the constants c 0 and c 1 are chosen so that P(A 0 ) = \u03b3 2 and P(A 2 ) = \u03b3. It follows from the construction in section 3.2 that A 0 and A 1 are easy instances and A 2 are hard. Given a sample x 1 , . . . , x N , a misclassification of x t \u2208 A 0 by \u03c8 t could only happen when an easy +1 instance is classified as \u22121. Thus the algorithm would shift \u03c8 t to the left by no more than |v t \u2212 \u03c8 t | since v t = w * , x t . This shows that \u03c8 t \u2208 I 0 implies \u03c8 t+1 \u2208 I 0 . In the same manner, it is easy to verify that if \u03c8 t \u2208 I j and x t \u2208 A k then \u03c8 t+1 \u2208 I k , unless j = 0 and k = 1, in which case \u03c8 t+1 \u2208 I 0 because x t \u2208 A 1 would be classified correctly by \u03c8 t \u2208 I 0 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We construct a Markov chain with three states a 0 = 0, a 1 = c 0 \u2022 \u221a d and a 2 = c 1 \u2022 \u221a d governed by the following transition distribution:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\uf8eb \uf8ec \uf8ec \uf8ed 1 \u2212 \u03b3 2 0 \u03b3 2 \u03b3 2 1 \u2212 \u03b3 \u03b3 2 \u03b3 2 1 2 \u2212 3\u03b3 2 1 2 + \u03b3 \uf8f6 \uf8f7 \uf8f7 \uf8f8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Let X t be the state at time t. The principal eigenvector of the transition matrix ( 1 3 , 1 3 , 1 3 ) gives the stationary probability distribution of X t . Thus X t \u2208 {a 1 , a 2 } with probability 2 3 . Since the transition distribution of X t mirrors that of \u03c8 t , and since a j are at the leftmost borders of I j , respectively, it follows that X t \u2264 \u03c8 t for all t, thus X t \u2208 {a 1 , a 2 } implies \u03c8 t \u2208 I 1 \u222aI 2 . It follows that \u03c8 t > c 0 \u2022 \u221a d with probability 2 3 , and the lemma follows from the law of large numbers 2 Corollary 6 With high probability \u03c4 = \u03b8(1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
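A stationary-distribution claim of this kind can be checked numerically by power iteration on the transition matrix. A generic sketch (ours; γ = 0.2 is an arbitrary illustrative value, and the matrix is the one given above):

```python
def stationary(P, iters=5_000):
    # power iteration on the left eigenvector: pi <- pi P, renormalized
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(pi)
        pi = [p / s for p in pi]
    return pi

g = 0.2  # illustrative value of gamma, 0 < g < 1/3
P = [
    [1 - g / 2, 0.0,             g / 2],
    [g / 2,     1 - g,           g / 2],
    [g / 2,     0.5 - 3 * g / 2, 0.5 + g],
]
pi = stationary(P)
print(pi)
```

The returned vector satisfies πP = π up to numerical tolerance, which is the defining property of a stationary distribution.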
| { |
| "text": "Proof : Lemma 5 shows that for a sample x 1 , . . . , x N with high probability \u03c8 t is most of the time to the right of c \u2022 \u221a d. Consequently for any x in the band 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u2264 v \u2264 c \u2022 \u221a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "d we get sign( w * , x + \u03c8 t ) = \u22121 for most t hence by definition, the voted perceptron would classify such an instance as \u22121, although it is in fact a +1 easy instance. Since there are \u03b8(N ) misclassified easy instances, \u03c4 = \u03b8(1) 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lower Bound on \u03c4 for Voted Perceptron", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In this article we show that training with annotation noise can be detrimental for test-time results on easy, uncontroversial instances; we termed this phenomenon hard case bias. Although under the 0-1 loss model annotation noise can be tolerated for larger datasets (theorem 1), minimizing such loss becomes intractable for larger datasets. Freund and Schapire (1999) voted perceptron algorithm and its variants are widely used in computational linguistics practice; our results show that it could suffer a constant rate of hard case bias irrespective of the size of the dataset (section 3.4).", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 368, |
| "text": "Freund and Schapire (1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
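For concreteness, the algorithm analyzed here can be sketched in a few lines; this is a minimal pure-Python illustration of the Freund and Schapire (1999) training and voting scheme, not the paper's code, and the toy data set is ours:

```python
def train_voted_perceptron(data, epochs=10):
    # data: list of (x, y) pairs, x a sequence of floats, y in {-1, +1}.
    # Returns the list of (weights, bias, survival_count) voters.
    d = len(data[0][0])
    w, b, c = [0.0] * d, 0.0, 1
    voters = []
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:           # mistake: retire the current voter, update
                voters.append((list(w), b, c))
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                c = 1
            else:                        # survived one more example
                c += 1
    voters.append((list(w), b, c))
    return voters

def predict(voters, x):
    # each intermediate perceptron votes, weighted by how long it survived
    total = 0.0
    for w, b, c in voters:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        total += c * (1.0 if score > 0 else -1.0)
    return 1 if total > 0 else -1

# toy, linearly separable data (our illustration)
data = [((2.0, 0.0), 1), ((1.5, 1.0), 1), ((-2.0, 0.5), -1), ((-1.0, -1.0), -1)]
voters = train_voted_perceptron(data)
print(predict(voters, (3.0, 0.0)), predict(voters, (-3.0, 0.0)))  # → 1 -1
```

The voted prediction is what the paper's quantity τ is about: a test point is misclassified when the survival-weighted vote of the hyperplanes visited during training comes out on the wrong side.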
| { |
| "text": "How can hard case bias be reduced? One possibility is removing as many hard cases as one can not only from the test data, as suggested in Beigman Klebanov and Beigman (2009) , but from the training data as well. Adding the second annotator is expected to detect about half the hard cases, as they would surface as disagreements between the annotators. Subsequently, a machine learner can be told to ignore those cases during training, reducing the risk of hard case bias. While this is certainly a daunting task, it is possible that for annotation studies that do not require expert annotators and extensive annotator training, the newly available access to a large pool of inexpensive annotators, such as the Amazon Mechanical Turk scheme (Snow et al., 2008) , 4 or embedding the task in an online game played by volunteers (Poesio et al., 2008; von Ahn, 2006) could provide some solutions.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 173, |
| "text": "Klebanov and Beigman (2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 740, |
| "end": 759, |
| "text": "(Snow et al., 2008)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 825, |
| "end": 846, |
| "text": "(Poesio et al., 2008;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 847, |
| "end": 861, |
| "text": "von Ahn, 2006)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
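The proposed mitigation, dropping training items on which two annotators disagree, amounts to a one-line filter. A minimal sketch (ours; the items and labels are made up):

```python
def filter_agreed(items, labels_a, labels_b):
    # keep only instances whose two annotations coincide;
    # disagreements are treated as potential hard cases and dropped
    return [(x, ya) for x, ya, yb in zip(items, labels_a, labels_b) if ya == yb]

items = ["i1", "i2", "i3", "i4"]
a = [+1, -1, +1, -1]
b = [+1, +1, +1, -1]
train = filter_agreed(items, a, b)
print(train)  # "i2" is dropped as a disagreement
```

Since hard cases surface as disagreements only about half the time under the paper's annotation model, this filter reduces, rather than eliminates, the risk of hard case bias.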
| { |
| "text": "Reidsma and op den Akker (2008) suggest a different option. When non-overlapping parts of the dataset are annotated by different annotators, each classifier can be trained to reflect the opinion (albeit biased) of a specific annotator, using different parts of the datasets. Such \"subjective machines\" can be applied to a new set of data; an item that causes disagreement between classifiers is then extrapolated to be a case of potential disagreement between the humans they replicate, i.e. a hard case. Our results suggest that, regardless of the success of such an extrapolation scheme in detecting hard cases, it could erroneously invalidate easy cases: Each classifier would presumably suffer from a certain hard case bias, i.e. classify incorrectly things that are in fact uncontroversial for any human annotator. If each such classifier has a different hard case bias, some inter-classifier disagreements would occur on easy cases. Depending on the distribution of those easy cases in the feature space, this could invalidate valuable cases. If the situation depicted in figure 1 corresponds to the pattern learned by one of the classifiers, it would lead to marking the easy cases closest to the real separation boundary (those between 0 and \u03bb e ) as hard, and hence unsuitable for learning, eliminating the most informative material from the training data. Reidsma and Carletta (2008) recently showed by simulation that different types of annotator behavior have different impact on the outcomes of machine learning from the annotated data. Our results provide a theoretical analysis that points in the same direction: While random classification noise is tolerable, other types of noise -such as annotation noise handled here -are more problematic. It is therefore important to develop models of annotator behavior and of the resulting imperfections of the annotated datasets, in order to diagnose the potential learning problem and suggest mitigation strategies.", |
| "cite_spans": [ |
| { |
| "start": 1366, |
| "end": 1393, |
| "text": "Reidsma and Carletta (2008)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The different biases might not amount to much in the small doubly annotated subset, resulting in acceptable interannotator agreement; yet when enacted throughout a large number of instances they can be detrimental from a machine learner's perspective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In BeigmanKlebanov and Beigman (2009), annotation noise is defined as percentage of hard instances in the agreed annotations; this implies noise measurement on multiply annotated material. When there is just one annotator, no distinction between easy vs hard instances can be made; in this sense, all hard instances are posing as easy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See the proof of 0-1 case for a similar construction using the central limit theorem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://aws.amazon.com/mturk/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Learning from Noisy Examples", |
| "authors": [ |
| { |
| "first": "Dana", |
| "middle": [], |
| "last": "Angluin", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Laird", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Machine Learning", |
| "volume": "2", |
| "issue": "", |
| "pages": "343--370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dana Angluin and Philip Laird. 1988. Learning from Noisy Examples. Machine Learning, 2(4):343-370.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "From Annotator Agreement to Noise Models", |
| "authors": [ |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Beata Beigman Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Beigman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2009. From Annotator Agreement to Noise Models. Com- putational Linguistics, accepted for publication.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Analyzing Disagreements", |
| "authors": [ |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Beata Beigman Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Beigman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Diermeier", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "COLING 2008 Workshop on Human Judgments in Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing Disagreements. In COLING 2008 Workshop on Human Judgments in Computational Linguistics, pages 2-7, Manchester, UK.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions", |
| "authors": [ |
| { |
| "first": "Avrim", |
| "middle": [], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Frieze", |
| "suffix": "" |
| }, |
| { |
| "first": "Ravi", |
| "middle": [], |
| "last": "Kannan", |
| "suffix": "" |
| }, |
| { |
| "first": "Santosh", |
| "middle": [], |
| "last": "Vempala", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "330--338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Avrim Blum, Alan Frieze, Ravi Kannan, and Santosh Vempala. 1996. A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions. In Pro- ceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages 330-338, Burlington, Vermont, USA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Filtering-Ranking Perceptron Learning for Partial Parsing", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Ll\u00fais", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Castro", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Machine Learning", |
| "volume": "60", |
| "issue": "", |
| "pages": "41--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras, Ll\u00fais M\u00e0rquez, and Jorge Castro. 2005. Filtering-Ranking Perceptron Learning for Partial Parsing. Machine Learning, 60(1):41-71.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Supersense Tagging of Unknown Nouns in WordNet", |
| "authors": [ |
| { |
| "first": "Massimiliano", |
| "middle": [], |
| "last": "Ciaramita", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "168--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Su- persense Tagging of Unknown Nouns in WordNet. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 168-175, Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning to Classify Email into \"Speech Acts", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Vitor", |
| "middle": [], |
| "last": "Carvalho", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "309--316", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Cohen, Vitor Carvalho, and Tom Mitchell. 2004. Learning to Classify Email into \"Speech Acts\". In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 309-316, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Learning Noisy Perceptrons by a Perceptron in Polynomial Time", |
| "authors": [ |
| { |
| "first": "Edith", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 38th Annual Symposium on Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "514--523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edith Cohen. 1997. Learning Noisy Perceptrons by a Perceptron in Polynomial Time. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science, pages 514-523, Miami Beach, Florida, USA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Duffy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "263--370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Nigel Duffy. 2002. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In Proceedings of the 40th Annual Meeting on Associa- tion for Computational Linguistics, pages 263-370, Philadelphia, USA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Incremental Parsing with the Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "111--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Brian Roark. 2004. Incremen- tal Parsing with the Perceptron Algorithm. In Pro- ceedings of the 42nd Annual Meeting on Associa- tion for Computational Linguistics, pages 111-118, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Discriminative Training Methods for Hidden Markov Hodels: Theory and Experiments with Perceptron Algorithms", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002a. Discriminative Training Methods for Hidden Markov Hodels: Theory and Experiments with Perceptron Algorithms. In Pro- ceedings of the Empirical Methods in Natural Lan- guage Processing Conference, pages 1-8, Philadel- phia, USA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Ranking Algorithms for Named Entity Extraction: Boosting and the Voted Perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "489--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002b. Ranking Algorithms for Named Entity Extraction: Boosting and the Voted Perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguis- tics, pages 489-496, Philadelphia, USA.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "New Results for Learning Noisy Parities and Halfspaces", |
| "authors": [ |
| { |
| "first": "Vitaly", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Parikshit", |
| "middle": [], |
| "last": "Gopalan", |
| "suffix": "" |
| }, |
| { |
| "first": "Subhash", |
| "middle": [], |
| "last": "Khot", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashok", |
| "middle": [], |
| "last": "Ponnuswami", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "563--574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Ponnuswami. 2006. New Results for Learn- ing Noisy Parities and Halfspaces. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 563-574, Los Alamitos, CA, USA.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "An Introduction to Probability Theory and Its Application", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Feller", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Feller. 1968. An Introduction to Probability Theory and Its Application, volume 1. Wiley, New York, 3rd edition.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Large Margin Classification Using the Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine Learning", |
| "volume": "37", |
| "issue": "", |
| "pages": "277--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert Schapire. 1999. Large Mar- gin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277-296.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Hardness of Learning Halfspaces with Noise", |
| "authors": [ |
| { |
| "first": "Venkatesan", |
| "middle": [], |
| "last": "Guruswami", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasad", |
| "middle": [], |
| "last": "Raghavendra", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "543--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Venkatesan Guruswami and Prasad Raghavendra. 2006. Hardness of Learning Halfspaces with Noise. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 543- 552, Los Alamitos, CA, USA.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Decision Theoretic Generalizations of the PAC Model for Neural Net and other Learning Applications. Information and Computation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Haussler", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "100", |
| "issue": "", |
| "pages": "78--150", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Haussler. 1992. Decision Theoretic General- izations of the PAC Model for Neural Net and other Learning Applications. Information and Computa- tion, 100(1):78-150.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Data-Defined Kernels for Parse Reranking Derived from Probabilistic Models", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "181--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Henderson and Ivan Titov. 2005. Data-Defined Kernels for Parse Reranking Derived from Proba- bilistic Models. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguis- tics, pages 181-188, Ann Arbor, Michigan, USA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning in the Presence of Malicious Errors", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kearns", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proceedings of the 20th Annual ACM symposium on Theory of Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "267--280", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Kearns and Ming Li. 1988. Learning in the Presence of Malicious Errors. In Proceedings of the 20th Annual ACM symposium on Theory of Comput- ing, pages 267-280, Chicago, USA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Toward Efficient Agnostic Learning. Machine Learning", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kearns", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Schapire", |
| "suffix": "" |
| }, |
| { |
| "first": "Linda", |
| "middle": [], |
| "last": "Sellie", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "17", |
| "issue": "", |
| "pages": "115--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Kearns, Robert Schapire, and Linda Sellie. 1994. Toward Efficient Agnostic Learning. Ma- chine Learning, 17(2):115-141.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Efficient Noise-Tolerant Learning from Statistical Queries", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kearns", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 25th Annual ACM Symposium on Theory of Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "392--401", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Kearns. 1993. Efficient Noise-Tolerant Learning from Statistical Queries. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 392-401, San Diego, CA, USA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Perceptrons: An Introduction to Computational Geometry", |
| "authors": [ |
| { |
| "first": "Marvin", |
| "middle": [], |
| "last": "Minsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Seymour", |
| "middle": [], |
| "last": "Papert", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marvin Minsky and Seymour Papert. 1969. Percep- trons: An Introduction to Computational Geometry. MIT Press, Cambridge, Mass.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "On convergence proofs on perceptrons", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "B" |
| ], |
| "last": "Novikoff", |
| "suffix": "" |
| } |
| ], |
| "year": 1962, |
| "venue": "Symposium on the Mathematical Theory of Automata", |
| "volume": "12", |
| "issue": "", |
| "pages": "615--622", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. B. Novikoff. 1962. On convergence proofs on per- ceptrons. Symposium on the Mathematical Theory of Automata, 12:615-622.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Shallow Parsing Using Noisy and Non-Stationary Training Material", |
| "authors": [ |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Osborne", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "2", |
| "issue": "", |
| "pages": "695--719", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miles Osborne. 2002. Shallow Parsing Using Noisy and Non-Stationary Training Material. Journal of Machine Learning Research, 2:695-719.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "ANAWIKI: Creating Anaphorically Annotated Resources through Web Cooperation", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Udo", |
| "middle": [], |
| "last": "Kruschwitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Chamberlain", |
| "middle": [], |
| "last": "Jon", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 6th International Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimo Poesio, Udo Kruschwitz, and Chamberlain Jon. 2008. ANAWIKI: Creating Anaphorically An- notated Resources through Web Cooperation. In Proceedings of the 6th International Language Re- sources and Evaluation Conference, Marrakech, Morocco.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Reliability measurement without limit", |
| "authors": [ |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Reidsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Carletta", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "3", |
| "pages": "319--326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dennis Reidsma and Jean Carletta. 2008. Reliability measurement without limit. Computational Linguis- tics, 34(3):319-326.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Exploiting Subjective Annotations", |
| "authors": [ |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Reidsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Rieks", |
| "middle": [], |
| "last": "Op Den Akker", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "COLING 2008 Workshop on Human Judgments in Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "8--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dennis Reidsma and Rieks op den Akker. 2008. Ex- ploiting Subjective Annotations. In COLING 2008 Workshop on Human Judgments in Computational Linguistics, pages 8-16, Manchester, UK.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Rosenblatt", |
| "suffix": "" |
| } |
| ], |
| "year": 1962, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank Rosenblatt. 1962. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, D.C.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Incremental LTAG Parsing", |
| "authors": [ |
| { |
| "first": "Libin", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "811--818", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libin Shen and Aravind Joshi. 2005. Incremen- tal LTAG Parsing. In Proceedings of the Human Language Technology Conference and Empirical Methods in Natural Language Processing Confer- ence, pages 811-818, Vancouver, British Columbia, Canada.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Cheap and Fast -But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "O'", |
| "middle": [], |
| "last": "Brendan", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "254--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and Fast -But is it Good? Evaluating Non-Expert Annotations for Nat- ural Language Tasks. In Proceedings of the Empir- ical Methods in Natural Language Processing Con- ference, pages 254-263, Honolulu, Hawaii.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Learning to Extract Information from Semi-Structured Text Using a Discriminative Context Free Grammar", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Viola", |
| "suffix": "" |
| }, |
| { |
| "first": "Mukund", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "330--337", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Viola and Mukund Narasimhan. 2005. Learning to Extract Information from Semi-Structured Text Using a Discriminative Context Free Grammar. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 330-337, Salvador, Brazil.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Games with a purpose", |
| "authors": [ |
| { |
| "first": "Ahn", |
| "middle": [], |
| "last": "Luis Von", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computer", |
| "volume": "39", |
| "issue": "6", |
| "pages": "92--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luis von Ahn. 2006. Games with a purpose. Com- puter, 39(6):92-94.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "TrainingInput: a labeled training set (x1, y1), . . . , (xN , yN ) Output: a list of perceptrons w1, . . . , wN", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Hinge loss \u03b6 for various data points incurred by the separator with margin \u03b4.", |
| "num": null |
| } |
| } |
| } |
| } |