| { |
| "paper_id": "Q14-1025", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:11:35.594090Z" |
| }, |
| "title": "The Benefits of a Model of Annotation", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "J" |
| ], |
| "last": "Passonneau", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University New York", |
| "location": { |
| "region": "NY", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Bob", |
| "middle": [], |
| "last": "Carpenter", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": { |
| "settlement": "New York", |
| "region": "NY", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach.", |
| "pdf_parse": { |
| "paper_id": "Q14-1025", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The quality of annotated data for computational linguistics is generally assumed to be good enough if a few annotators can be shown to be consistent with one another. Standard practice relies on metrics that measure consistency, either in an absolute way, or in a chance-adjusted fashion. Such measures, however, merely report how often annotators agree, with no direct measure of corpus quality, nor of the quality of individual items. We argue that high chance-adjusted interannotator agreement is neither necessary nor sufficient to ensure high quality gold-standard labels. We contrast the use of agreement metrics with the use of probabilistic models to draw inferences about annotated data where the items have been labeled by many annotators. A probabilistic model to fit many annotators' observed labels produces much more information about the annotated corpus. In particular, there will be a confidence estimate for each ground truth label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Probabilistic models of agreement and goldstandard inference have been used in psychometrics and marketing since the 1950s (e.g., IRT models or Bradley-Terry models) and in epidemiology since the 1970s (e.g., diagnostic disease prevalence models). More recently, crowdsourcing has motivated their application to data annotation for machine learning. The model we apply here (Dawid and Skene, 1979) assumes that annotators differ from one another in their accuracy at identifying the true label values, and that these true values occur at certain rates (their prevalence).", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 397, |
| "text": "(Dawid and Skene, 1979)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To contrast the two approaches to creation of an annotated corpus, we present a case study of word sense annotation. The items that were annotated are occurrences of words in their sentence contexts, and each label is a WordNet sense (Miller, 1995) . Each item has sense labels from up to twenty-five different annotators, collected through crowdsourcing. Application of an annotation model does not require this many labels per item, and crowdsourced annotation data does not require a probabilistic model. The case study, however, shows how the two benefit each other.", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 248, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "MASC (Manually Annotated Sub-Corpus of the Open American National Corpus) contains a subsidiary word sense sentence corpus that consists of approximately one thousand sentences per word for 116 words. Word senses were annotated in their sentence contexts using WordNet sense labels. Chanceadjusted agreement levels ranged from very high to chance levels, with similar variation for pairwise agreement (Passonneau et al., 2012a) . As a result, the annotations for certain words appear to be low quality. 1 Our case study shows how we created a more reliable word sense corpus for a randomly selected subset of 45 of the same words, through crowdsourcing and application of the Dawid and Skene model. The model yields a certainty measure for each labeled instance. For most instances, the certainty of the estimated true labels is high, even on words where pairwise and chance-adjusted agreement of trained annotators were both low.", |
| "cite_spans": [ |
| { |
| "start": 401, |
| "end": 427, |
| "text": "(Passonneau et al., 2012a)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper first summarizes the limitations of agreement metrics, then presents the Dawid and Skene model. The next two sections present a case study of the crowdsourced data, and the annotation results. While many of the MASC words had low agreement from trained annotators on the small proportion of the data where agreement was assessed, the same words have many instances with highly confident labels estimated from the crowdsourced annotations. In the discussion section, we compare the model-based labels to the labels from the trained annotators. The final sections present related work and our conclusions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A high-confidence ground truth label for each annotated instance is the ultimate goal of annotation, but can often be impractical or infeasible to achieve. On the grounds that more knowledge is always better, we argue that it is desirable to provide a confidence measure for each estimated label. This section first presents the case that the conventional steps to compute agreement provide at best an indirect measure of confidence on labels. We then present the Dawid and Skene model (1979) , which estimates a probability of each label value on every instance. To motivate its application to the crowdsourced sense labels, we work through an example to show how true labels are inferred, and to illustrate that information about the true label is derived from both accurate and inaccurate annotators. With many annotators to compare, the value of gathering a label can be quantified using information gain and mutual information, as illustrated in Section 2.2.2.", |
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 492, |
| "text": "Dawid and Skene model (1979)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Agreement Metrics versus a Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1 One potential use for the words with low agreement is to investigate whether features of the WordNet definitions, or sentence contexts, or both, correlate with low agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Agreement Metrics versus a Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Current best practice for creating annotation standards involves iteration over four steps: 1) design or redesign the annotation task, 2) write or revise guidelines to instruct annotators how to carry out the task, possibly with some training, 3) have two or more annotators work independently to annotate a sample of data, 4) measure the interannotator agreement on the data sample. Once the desired agreement has been obtained, the final step is to create a gold standard dataset where each item is annotated by a single annotator. How much chance-adjusted agreement is sufficient has been much debated (Artstein and Poesio, 2008; di Eugenio and Glass, 2004; di Eugenio, 2000; Bruce and Wiebe, 1998) . Surprisingly, little attention has been devoted to the question of whether the agreement subset is a representative sample of the corpus. Without such an assurance, there is little justification to take interannotator agreement as a quality measure of the corpus as a whole. Given the influence that a gold standard corpus can have on progress in our field, it is not clear that agreement measures on a corpus subset provide a sufficient guarantee of corpus quality. While it is taken for granted that some annotators perform better than others, 2 agreement metrics do not differentiate annotators. Since there are many ways to be inaccurate, and only one way to be accurate, it is assumed that if annotators have high pairwise or chance-adjusted agreement, then the annotation must be accurate. This is not necessarily a correct inference, as we show below. If two annotators do not agree well, this method does not identify whether one annotator is more accurate. More importantly, no information is gained about the quality of the ground truth labels.", |
| "cite_spans": [ |
| { |
| "start": 605, |
| "end": 632, |
| "text": "(Artstein and Poesio, 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 633, |
| "end": 660, |
| "text": "di Eugenio and Glass, 2004;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 661, |
| "end": 678, |
| "text": "di Eugenio, 2000;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 679, |
| "end": 701, |
| "text": "Bruce and Wiebe, 1998)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To assess the limitations of agreement metrics, consider how they are computed and what they measure. Let i \u2208 1:I represent the items, j \u2208 1:J the annotators, k \u2208 1:K the label classes in a categorical labeling scheme (e.g., word senses), and y i,j \u2208 1:K the observed labels from annotator j for item i. Assume every annotator labels every item exactly once (we later relax this constraint).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Agreement: Pairwise agreement A m,n between two annotators m, n \u2208 1:J is defined as the proportion of items i \u2208 1:I for which the annotators supplied the same label,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A m,n = 1 I I i=1 I(y i,m = y i,n ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where I(s) = 1 if s is true and 0 otherwise. In other words, A m,n is the maximum likelihood estimate of chance of agreement in a binomial model. Pairwise agreement can be extended to the full set of annotators by averaging over all J 2 pairs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A = 1 ( J 2 ) J\u22121 m=1 J n=m+1 A m,n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In sum, A is the proportion of all pairs of items that annotators agreed on. It does not take into account the proportion of each label from 1:K in the data. Chance-Adjusted Agreement: Agreement coefficients measure the proportion of observed agreements that are above the proportion expected by chance. Given an estimate A m,n of the probability that two annotators m, n \u2208 1:J will agree on a label and an estimate of the probability C m,n that they will agree by chance, chance-adjusted agreement IA m,n \u2208 [\u22121, 1] is defined by IA m,n = (A m,n \u2212 C m,n ) / (1 \u2212 C m,n ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Chance agreement takes into account the prevalence of the individual labels in 1:K. Specifically, it is defined to be the probability that a pair of labels drawn at random for two annotators will agree. There are two common ways to define this draw. Cohen's \u03ba statistic (Cohen, 1960) assumes each annotator draws uniformly at random from her set of labels. Letting \u03c8 j,k = 1 I I i=1 I(y i,j = k) be the proportion of the label k in annotator j's labels, this notion of chance agreement for a pair of annotators m, n is estimated as the product of their proportions \u03c8:", |
| "cite_spans": [ |
| { |
| "start": 270, |
| "end": 283, |
| "text": "(Cohen, 1960)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "C m,n = K k=1 \u03c8 m,k \u00d7 \u03c8 n,k .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Krippendorff's \u03b1, another chance-adjusted metric in wide use, assumes each annotator draws uniformly at random from the pooled set of labels from all annotators (Krippendorff, 1980) . Letting \u03c6 k be the proportion of label k in the entire set of labels, this alternative estimate, C m,n = K k=1 \u03c6 2 k , does not depend on the identity of the annotators m and n.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 181, |
| "text": "(Krippendorff, 1980)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Agreement coefficients suffer from multiple shortcomings. (1) They are intrinsically pairwise, although one can compare to a voted consensus or average over multiple pairwise agreements. 2In agreement-based analyses, two wrongs make a right in the sense that if two annotators both make the same mistake, they agree. If annotators are 80% accurate on a binary task, then chance agreement on the wrong category occurs at a 4% rate. 3Chance-adjusted agreement reduces to simple agreement as chance agreement approaches zero. When chance agreement is high, even high-accuracy annotators can have low chance-adjusted agreement, as when the data is skewed towards a few values, a typical case for NLP tasks. Feinstein and Cicchetti (1990) referred to this as the paradox of \u03ba (see section 6). For example, in a binary task with 95% prevalence of one category, two 90% accurate annotators would have negative chanceadjusted agreements of 0.9\u2212(.95 2 +.05 2 ) 1\u2212(.95 2 +.05 2 ) = \u2212.053. Thus high chance-adjusted interannotator agreement is not a necessary condition for a high-quality corpus. An alternative metric discussed in Section 6 addresses skewed prevalence of label values, but has not been adopted in the NLP community (Gwet, 2008) . (4) Interannotator agreement statistics implicitly assume annotators are unbiased; if they are biased in the same direction, e.g., the most prevalent category, then agreement is an overestimate of their accuracy. In the extreme case, in a binary labeling task, two adversarial annotators who always provide the wrong answer have a chance-adjusted agreement of 100%. (5) Item-level effects such as difficulty can inflate levels of agreement-in-error. For example, in a named-entity corpus one of the co-authors helped collect for MUC, hard-to-identify names have correlated false negatives among annotators, leading to higher agreement-in-error than would otherwise be expected. 
(6) Interannotator agreement statistics are rarely computed with confidence intervals, which can be quite wide even under optimistic assumptions of no annotator bias or item-level effects. Given a sample of 100 annotations, if the true gold standard categories were known (as opposed to being themselves estimated as in our setup here), an annotator getting 80 out of 100 items correct would produce a 95% interval for accuracy of roughly (74%, 86%). 3 Agreement statistics have even wider error bounds. This introduces enough uncertainty to span the rather arbitrary decision boundaries for acceptability employed for interannotator agreement statistics. Note that bootstrapping is a reliable method to compute confidence intervals (Efron and Tibshirani, 1986) . Briefly, given a sample of size N , a large number of samples of size N are drawn randomly with replacement from the original sample, the statistic of interest is computed for each random draw, and the mean \u00b1 1.96 standard deviations gives the estimated value and its approximate 95% confidence interval.", |
| "cite_spans": [ |
| { |
| "start": 1222, |
| "end": 1234, |
| "text": "(Gwet, 2008)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 2648, |
| "end": 2676, |
| "text": "(Efron and Tibshirani, 1986)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise and Chance-Adjusted Agreement Measures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A probabilistic model provides a recipe to randomly \"generate\" a dataset from a set of model parameters and constants. 4, 5 The utility of such a model lies in its ability to support meaningful inferences from data, such as an estimate of the true prevalence of each category. Dawid and Skene (1979) proposed a model to determine a consensus among patient histories taken by multiple doctors. Inference is driven by accuracies and biases estimated for each annotator on a per-category basis. A graphical sketch of the model is shown in Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 121, |
| "text": "4,", |
| "ref_id": null |
| }, |
| { |
| "start": 122, |
| "end": 123, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 277, |
| "end": 299, |
| "text": "Dawid and Skene (1979)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 536, |
| "end": 544, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Probabilistic Annotation Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Let K be the number of possible labels or categories for an item, I the number of items to annotate, J the number of annotators, and N the total number of labels provided by annotators, where each annotator may label each instance zero or more times. Because the data is not a simple I \u00d7 J data matrix where every annotator labels every item exactly once, a database-like indexing scheme is used in which each annotation n is represented as a tuple of an item ii[n] \u2208 1:I, an annotator jj[n] \u2208 1:J, and a label y[n] \u2208 1:K. Skene 1979proposed a model to determine a consensus among patient histories taken by multiple doctors. Inference is driven by accuracies and biases estimated for each annotator on a percategory basis. A graphical sketch of the model is shown in Figure 1 . Let K be the number of possible labels or categories for an item, I the number of items to annotate, J the number of annotators, and N the total number of labels provided by annotators, where each annotator may label each instance zero or more times. Because the data is not a simple I \u21e5 J data matrix where every annotator labels every item exactly once, a database-like indexing scheme is used in which each annotation n is represented as a tuple made up of an item ii[n] 2 1:I, an annotator jj[n] 2 1:J, and a label y[n] 2 1:K. 6 As illustrated in Table 1 , we assemble the annotations in a database-like table where each row is an annotation, and the values in each column are indices over the items, annotators, and labels. For \u2022 \u2713 j,k,k 0 2 [0, 1] for the probabil tor j assigns the label k 0 to an it category is k, subject to", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 768, |
| "end": 776, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1330, |
| "end": 1337, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Probabilistic Annotation Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "n iin jjn yn 1 1 1 4 2 1 3 1 3 192 17 5 . . . . . . . . . . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Probabilistic Annotation Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Given a set of annotators' labels f stance, the prevalence of senses, an tors' accuracies and biases, Bayes used to estimate the true sense of annotations can be assembled in a table where each row is an annotation, and the column values are indices over items, annotators, and labels. The first two rows show that on item 1, annotators 1 and 3 assigned labels 4 and 1, respectively. The third row says that for item 192 annotator 17 provided label 5. Dawid and Skene's model includes parameters", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "\u2022 z i \u2208 1:K for the true category of item i,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "\u2022 \u03c0 k \u2208 [0, 1] for the probability that an item is of category k, subject to K k=1 \u03c0 k = 1, and \u2022 \u03b8 j,k,k \u2208 [0, 1] for the probabilty that annotator j assigns the label k to an item whose true category is k, subject to K k =1 \u03b8 j,k,k = 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The generative model first selects the true category for item i according to the prevalence of categories,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "z i \u223c Categorical(\u03c0).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The observed labels y n are generated based on annotator jj[n]'s responses \u03b8 jj[n], z[ii[n]] to items ii[n] whose true category is z[ii[n]], y n \u223c Categorical(\u03b8 jj[n], z[ii[n]] ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "We use additively smoothed maximum likelihood estimation (MLE) to stabilize inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "\u03b8 j,k \u223c Dirichlet(\u03b1 k ) \u03c0 \u223c Dirichlet(\u03b2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The unsmoothed MLE is equivalent to the MAP estimate when \u03b1 k and \u03b2 are unit vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Given a set of annotators' labels for a word instance, the prevalence of senses, and the annotators' accuracies and biases, Bayes's rule can be used to estimate the true sense of each instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "p(z i |y, \u03b8, \u03c0) \u221d p(z i |\u03c0) p(y|z i , \u03b8) = \u03c0 z[i] ii[n]=i \u03b8 jj[n],z[i],y[n] .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "As a simple example, consider K = 2 outcomes with prevalences \u03c0 1 = 0.2, and \u03c0 2 = 0.8. Suppose three annotators with response matrices supplied labels y 1 = 1, y 2 = 1, and y 3 = 2 for word instance i, respectively. Then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Pr[z i = 1|y, \u03b8, \u03c0] \u221d \u03c0 1 \u03b8 1,1,1 \u03b8 2,1,1 \u03b8 3,1,2 = .00975 Pr[z i = 2|y, \u03b8, \u03c0] \u221d \u03c0 2 \u03b8 1,2,1 \u03b8 2,2,1 \u03b8 3,2,2 = .0768.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "By normalizing (and rounding),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Pr[z i = 1|y, \u03b8, \u03c0] = .00975 / (.00975 + .0768) = .11 and Pr[z i = 2|y, \u03b8, \u03c0] = .0768 / (.00975 + .0768) = .89.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Although the majority vote on i is for category 1, the estimated probability that the category is 1 is only 0.11, given the adjustments for annotators' accuracies and biases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Comparison to voting. On the log scale, the annotation model is similar to a weighted additive voting scheme with maximum weight zero and no minimum weight; if u \u2208 (0, 1], then log u \u2208 (\u2212\u221e, 0].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "As we discuss in the next section, the important difference is that the weighting is based on the true category, allowing the model to adjust for annotator bias.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Spam annotators. The Dawid and Skene model adjusts for annotations from noisy annotators. In the limit, a label for a word instance from an annotator whose response is independent of the true category provides no information about the true sense of that instance, and such a label provides no impact on the resulting category estimate. For example, in a binary task, a label from an annotator with response matrix \u03b8 j = 0.9 0.1 0.9 0.1 provides no information on the true category. The model cancels the effect of such an annotator's label because Pr[z i = 1|y , \u03b8 j , \u03c0] = Pr[z i = 1|\u03c0], which follows from the fact that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "\u03c0 1 \u00d7 \u03b8 j,1,1 \u03c0 2 \u00d7 \u03b8 j,2,1 = \u03c0 1 \u03c0 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Biased Annotators. Biased annotators can have low accuracy and low agreement with other annotators, yet still provide a great deal of information about the true label. For example, in a binary task, a positively biased annotator will return relatively more false positives and relatively fewer false negatives compared to an unbiased one. As shown in Section 4.2, our word sense task had fairly small estimated biases toward the high-frequency senses in most cases. Other tasks, such as ordinal ranking of author certainty for assertions, show systematically biased annotators. Annotators may be biased toward one end of an ordinal scale, or toward the center. These kinds of biases are apparent in the annotators in the annotation task described in (Rzhetsky et al., 2009) , where biologists labeled sentences in biomedical research articles on a 1 to 7 scale of polarity and certainty.", |
| "cite_spans": [ |
| { |
| "start": 750, |
| "end": 773, |
| "text": "(Rzhetsky et al., 2009)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Adversarial Annotators. An adversarial annotator who always returns the wrong answer exhibits an extreme bias. In a binary annotation case, it is clear how perfectly adversarial answers provide the same information as perfectly cooperative answers. Although it is possible to estimate the response matrix of an adversarial annotator, if too many of the annotators are adversarial, the Dawid and Skene model cannot separate the truth from the lies. None of the data sets we have collected showed any evidence of adversarial labeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimated Senses", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "By comparing the uncertainty before and after including a new label from an annotator, we can measure the reduction in uncertainty provided by the annotator's label. By considering the expected reduction in uncertainty due to observing a label from an annotator, we can quantify how much information the label is expected to provide.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Much Information is in a Label?", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "Entropy. The information-theoretic notion of entropy makes the notion of uncertainty precise (Cover and Thomas, 1991) . If Z i is the random variable corresponding to the true label of word instance i with K possible labels and probability mass function", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 117, |
| "text": "Thomas, 1991)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Much Information is in a Label?", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "p Z i , its entropy is H[Z i ] = \u2212 K k=1 p Z i (k) log p Z i (k).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Much Information is in a Label?", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "Conditional Entropy. Consider a label Y n = k from annotator j = jj n for item i = ii n . The entropy of Z i conditioned on the observed label is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Much Information is in a Label?", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "H[Z i |Y n = k ] = \u2212 K k=1 p Z i |Yn (k|k ) log p Z i |Yn (k|k ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Much Information is in a Label?", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "tropy of Z i after observing Y n , H[Z i |Y n ] = K k =1 p Yn (k ) H[Z i |Y n =k ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": ". Conditional entropy can be generalized in the obvious way to condition on more than one observed label, for instance to compute the expected entropy of Z i after observing two labels, Y n and Y n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": "Mutual Information. Mutual information is the expected reduction in entropy in the state of Z i after observing one or more labels,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": "I[Z i ; Y n ] = H[Z i ] \u2212 H[Z i |Y n ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": "Gibbs' inequality ensures that mutual information is positive. In theory at least, it never hurts to observe a label (in expectation), no matter how bad the annotator is. In practice, we may not have an accurate estimate of an annotator's response probabilities p Yn|Z i . Using log base 2, which measures information in bits, consider the three hypothetical annotators illustrated above. Clearly the most accurate confusion matrix is \u03b8 3 . The conditional entropies of a new label for the three cases are, respectively, 0.71, 0.60 and 0.47 and the mutual information values are 0.01, 0.13 and 0.25. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
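These quantities are short to compute. The sketch below is ours; the paper's three hypothetical confusion matrices \u03b8_1-\u03b8_3 are not reproduced here, so illustrative matrices are used instead, including an adversarial one to show that it carries as much information as an accurate one.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(pi, theta):
    """I[Z; Y] = H[Z] - H[Z | Y] in bits, where pi is category prevalence
    and theta[z][y] = Pr[annotator says y | true category z]."""
    pi, theta = np.asarray(pi, float), np.asarray(theta, float)
    p_y = pi @ theta                              # marginal label distribution
    p_z_given_y = (pi[:, None] * theta) / p_y     # columns: Pr[z | y]
    cond = sum(p_y[y] * entropy(p_z_given_y[:, y]) for y in range(len(p_y)))
    return entropy(pi) - cond

pi = [0.5, 0.5]
spam        = [[0.9, 0.1], [0.9, 0.1]]   # label independent of truth
accurate    = [[0.9, 0.1], [0.1, 0.9]]
adversarial = [[0.1, 0.9], [0.9, 0.1]]   # systematically wrong

print(mutual_information(pi, spam))         # ~0 bits (up to rounding)
print(mutual_information(pi, accurate))     # ~0.53 bits
print(mutual_information(pi, adversarial))  # same as the accurate annotator
```

The adversarial case makes concrete the claim above: mutual information depends on how predictable the true label is from the response, not on raw accuracy.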
| { |
| "text": "; Y n ] = H[Z i ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": "A highly biased and hence inaccurate annotator can provide as much information as a more accurate annotator. This demonstrates that weighted voting schemes are not the correct approach to inference for true category labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional entropy is defined by the expected en-", |
| "sec_num": null |
| }, |
| { |
| "text": "The results in this paper were derived by expectation maximization using software written in R. The code is distributed with the data under an open-source license. 7 Other implementations of the Dawid and Skene model should produce the same penalized maximum likelihood (equivalently maximum a posteriori) estimates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Priors", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "The very weak Dirichlet priors added only arithmetic stabilization to the inferences, allowing an identified penalized maximum likelihood estimate in cases where an annotator did not label any instances of some sense for a word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Priors", |
| "sec_num": "2.2.3" |
| }, |
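The EM procedure for the Dawid and Skene model is compact enough to sketch. This is a minimal Python version of ours, not the authors' released R code; the symmetric Dirichlet prior (alpha just above 1) enters as the additive (alpha - 1) smoothing terms, which is the weak stabilization described above for annotators who never use some sense.

```python
import numpy as np

def dawid_skene_em(labels, K, alpha=1.01, iters=50):
    """MAP EM for the Dawid & Skene model.
    labels: (item, annotator, label) index triples.
    Returns (prevalence pi, response matrices theta, item posteriors)."""
    labels = np.asarray(labels)
    I, J = labels[:, 0].max() + 1, labels[:, 1].max() + 1
    # initialize item posteriors with per-item label frequencies (majority vote)
    post = np.zeros((I, K))
    for i, j, y in labels:
        post[i, y] += 1.0
    post /= post.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # M-step: prevalence and per-annotator response matrices,
        # with additive (alpha - 1) Dirichlet smoothing
        pi = post.sum(axis=0) + (alpha - 1)
        pi /= pi.sum()
        theta = np.full((J, K, K), alpha - 1)
        for i, j, y in labels:
            theta[j, :, y] += post[i]
        theta /= theta.sum(axis=2, keepdims=True)
        # E-step: posterior over each item's true category
        logpost = np.tile(np.log(pi), (I, 1))
        for i, j, y in labels:
            logpost[i] += np.log(theta[j, :, y])
        post = np.exp(logpost - logpost.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return pi, theta, post

# toy data: three annotators, four items; annotator 2 errs on item 0
labels = [(0, 0, 0), (0, 1, 0), (0, 2, 1),
          (1, 0, 0), (1, 1, 0), (1, 2, 0),
          (2, 0, 1), (2, 1, 1), (2, 2, 1),
          (3, 0, 1), (3, 1, 1), (3, 2, 1)]
pi, theta, post = dawid_skene_em(labels, K=2)
print(post.argmax(axis=1))   # inferred true labels: [0 0 1 1]
```

The per-item posteriors returned here are exactly the certainty measures discussed in the text, and the fitted theta matrices give each annotator's estimated accuracy and bias.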
| { |
| "text": "Bayesian posterior means provide similar results for this model; full Bayes would also quantify estimation uncertainty, which as noted above, is substantial for the data sizes discussed here. Carpenter (2008) discusses a more general approach based on a hierarchical model for the accuracy/bias parameters \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Priors", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "Modeling a random effect per item, such as item difficulty, widens confidence intervals on accuracies/biases, because observed labels may be the result of item ease/difficulty or annotator accuracy/bias. This would have been more realistic, and would have provided additional information, but we felt the increased model complexity, especially with multivariate outputs, would distract from our main point in contrasting model-based inference with agreement statistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Priors", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "3 Two Data Collections", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Priors", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "To motivate our case study, we briefly discuss some of the limitations of the MASC word sense sentence corpus, which is an addendum to the MASC corpus. 8 For convenience, we refer here to the word sense sentence corpus as the MASC corpus. This is a 1.3 million word corpus with approximately one thousand sentences per word, for 116 words nearly evenly balanced among nouns, adjectives and verbs (Passonneau et al., 2012a) . Each sentence is drawn from the MASC corpus or the Open American National Corpus, exemplifies at least one of the 116 MASC words, and has been annotated by trained annotators who used WordNet senses as annotation labels. The annotation process is described in detail in (Passonneau et al., 2012a; Passonneau et al., 2012b) . The annotators were college students from Vassar, Barnard, and Columbia who were given general training in the annotation process, then were trained together on each word with a sample of fifty sentences, which included discussion with Christiane Fellbaum, one of the designers of WordNet. After the pre-annotation sample, annotators worked independently to label 1,000 sentences for each word using an annotation tool that presented the Word-Net senses and example usages, plus four variants of none of the above. For each word, 100 of the 1,000 sentences were annotated by two to four annotators to assess inter-annotator reliability. Figure 3 shows 45 randomly selected MASC words that were re-annotated using crowdsourcing. Shown are the part of speech, the number of Word-Net senses, the number of senses used by annotators, the \u03b1 value, and pairwise agreement. While the MASC word sense data demonstrates that annotators can agree on words with many senses, there are many words with low agreement, and correspondingly questionable ground truth labels. There is no correlation between the agreement and number of available senses, or senses used by annotators (Passonneau et al., 2012a) .", |
| "cite_spans": [ |
| { |
| "start": 396, |
| "end": 422, |
| "text": "(Passonneau et al., 2012a)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 695, |
| "end": 721, |
| "text": "(Passonneau et al., 2012a;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 722, |
| "end": 747, |
| "text": "Passonneau et al., 2012b)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 1916, |
| "end": 1942, |
| "text": "(Passonneau et al., 2012a)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1387, |
| "end": 1395, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MASC Word Sense Sentence Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Due to limited resources, the project deviated from best practice in having only a single round of annotation per word, and no iteration to achieve an agreement threshold. All annotators, however, had at least two phases of training, and most annotated several rounds. Below we use mutual information to show that the quality of the crowdsourced labels is equivalent to or superior than labels from the trained MASC annotators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MASC Word Sense Sentence Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To collect the data, we relied on Amazon Mechanical Turk, a crowdsourcing marketplace that is used extensively in the NLP community (Callison-Burch and Dredze, 2010) . Human Intelligence Tasks (HITs) are presented to Turkers by requesters. Certain aspects of the task were the same as for the MASC data: 45 randomly selected MASC words were used, sentences were drawn from the same pool, and the annotation labels were the same Word-Net 3.0 senses. Instead of collecting a single label for most instances, however, we collected up to twenty-five. Other differences from the MASC data collection were: the annotators were not trained; the annotation interface differed, though it presented the same information; the sets of sentences were not identical; annotators labeled any number of instances for a word up to the limit of 25 labels per word; finally, the Turkers were not instructed to become familiar with WordNet.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 165, |
| "text": "(Callison-Burch and Dredze, 2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In each HIT, Turkers were presented with ten sample sentences for each word, with the word's senses listed below each sentence. A short paragraph of instructions indicated there would be up to 100 HITs for each word. To encourage Turkers to do multiple HITs per word, so we could estimate annotator accuracies more tightly, the instructions indicated that Turkers could expect their time per HIT to decrease with increasing familiarity with the word's senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Most but not all crowdsourced instances had also been annotated by the trained annotators. Figures 7a-7b in Section 5, which compares the ground truth labels from the trained annotators with the crowdsourced labels, indicates for each word how many instances were annotated in common (e.g., 960 for board (verb)). Sentences were drawn from is the sense number, and the y-axis the proportion of instances assigned that sense. MASC FREQ: frequency of each sense in the singly-annotated instances from the trained MASC annotators; AMT MAJ: frequency of each majority vote sense for instances annotated by \u224825 Turkers; AMT MLE: estimated probability of each sense for instances annotated by \u224825 Turkers, using MLE. the same pool but in a few cases, the overlap is significantly less than the full 900-1,000 instances (e.g., work (noun) with 380).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 91, |
| "end": 98, |
| "text": "Figures", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Given 1,000 instances per word for a category whose prevalence is as low as 0.10 (100 examples expected), the 95% interval for sample prevalence, assuming examples are independent, will be 0.10 \u00b1 0.06. We collected between 20 and 25 labels per item to get reasonable confidence intervals for the true label, and so that future models could incorporate item difficulty. The large number of labels sharpens our estimates of the true category significantly, as estimated error goes down as O(1/ \u221a n)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "with n independent annotations. Confidence intervals must be expanded as correlation among annotator responses increases due to item-level effects such as difficulty or subject matter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
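The interval arithmetic here is the standard normal approximation for a sample proportion; the following sketch (ours, under that assumption) shows that the stated ±0.06 matches applying the formula to the roughly 100 expected examples of the low-prevalence sense, and illustrates the O(1/√n) shrinkage.

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Half-width of the 95% normal-approximation interval
    for a proportion p estimated from n independent labels."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(ci_halfwidth(0.10, 100), 2))   # 0.06
# estimated error shrinks as O(1/sqrt(n)):
print(ci_halfwidth(0.10, 25) / ci_halfwidth(0.10, 1))   # ~0.2, i.e. 1/5
```

As the text notes, correlation among annotator responses (item difficulty, subject matter) widens these intervals beyond the independent-label calculation.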
| { |
| "text": "Requesters can control many aspects of HITs. To ensure a high proportion of instances with high quality inferred labels, we piloted the HIT design with two trials of two and three words each, and discussed both with Turkers on the Turker Nation message board. The HIT title we chose-For American English Word Mavens-targeted Turkers with an inherent interest in words and meanings, and we recruited Turkers with high performance ratings and a long history of good work. The final procedure and payment were as follows. To avoid spam workers, we required Turkers to have a 98% lifetime approval rating and to have successfully completed 20,000 HITs. HITs were automatically approved after fifteen minutes. We monitored performance of Turk-ers across HITs by comparing individual Turker's labels to the current majority labels. Turkers with very poor performance were warned to take more care, or be blocked from doing further HITs. Of 228 Turkers, five were blocked, with one subsequently unblocked. The blocked Turker data is included with the other Turker data in our analyses and in the full data release. As noted above, the model-based approach to annotation is effective at adjusting for inaccurate annotators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourced Word Sense Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Modeling annotators as having distinct biases and accuracies should match the intuitions of anyone who has compared the results of more than one annotator on a task. The power of the Dawid and Skene model, however, shows up in the estimates it yields for category prevalence and for the true labels on each instance. Figure 4 contrasts three ways to estimate sense prevalence, illustrated with four of the crowdsourced words. AMT MLE is the model estimate from Turkers' labels. MASC FREQ is a naive rate from the trained annotators' label distributions, rather than a true estimate. Majority voted labels for Turkers (AMT MAJ) are closer to the model estimates than MASC FREQ, but do not take annotators' biases into account.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 317, |
| "end": 325, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimates for Prevalence and Labels", |
| "sec_num": "4.1" |
| }, |
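Why raw label frequencies (the MASC FREQ and AMT MAJ style of estimate) can differ from model estimates is easy to see in a toy binary setting: if an annotator's response matrix θ were known, the expected label frequencies would be π·θ, and the bias could be inverted. The matrix below is purely illustrative, not estimated from our data.

```python
import numpy as np

# hypothetical annotator biased toward sense 1
theta = np.array([[0.9, 0.1],    # true sense 1 -> labeled 1 90% of the time
                  [0.4, 0.6]])   # true sense 2 -> labeled 1 40% of the time
pi_true = np.array([0.5, 0.5])

p_label = pi_true @ theta                 # raw label frequencies: [0.65 0.35]
pi_hat = p_label @ np.linalg.inv(theta)   # bias-corrected prevalence

print(p_label)   # naive estimate overstates sense 1
print(pi_hat)    # recovers the true [0.5 0.5]
```

In practice θ is not known and must be estimated jointly with the true labels, which is exactly what the Dawid and Skene model does; this toy inversion only shows why uncorrected frequencies are systematically off.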
| { |
| "text": "The plots for the four words in Figure 4 are ordered by their \u03b1 scores for the 100 instances that were annotated in common by four trained annotators: add (0.55) > date (0.47) > help (0.26) > The prevalence estimates diverge less on words where the agreement is higher. Notably, the plots for the first three words demonstrate one or more senses where the AMT MLE estimate differs markedly from all other estimates. In Figure 4a , the AMT MLE estimate for sense 1 is much lower (0.51) than the other two measures. In Figure 4b , the AMT MLE estimate for sense 4 is much closer to MASC FREQ than AMT MAJ, which sugggests that some Turkers are biased against sense 4. The AMT MLE estimates for senses 1, 6 and 7 are distinctive. For help, the AMT MLE estimates for senses 1 and 6 are particularly distinctive. For ask senses 2 and 4, the divergence of the AMT MAJ estimates is again evidence of bias in some Turkers.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 40, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 419, |
| "end": 428, |
| "text": "Figure 4a", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 517, |
| "end": 526, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimates for Prevalence and Labels", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The estimates of label quality on each item are perhaps the strongest reason for turning to modelbased approaches to assess annotated data. For the same four words, Figure 5 shows the proportion of all instances that had an estimated true label where the label probability was greater than or equal to 0.99. This proportion ranges from 97% for date to 81% for help. Even for help, of the remaining 19% of instances of less confident estimated labels, 13% have posterior probabilities greater than 0.75. Figure 5 also shows that the high quality labels for each word are distributed across many of the senses. Of the 45 words studied here, 20 had \u03b1 scores less than 0.50 from the trained annotators. For 42 of the same 45 words, 80% of the inferred true labels have a probability higher than 0.99. Figure 6 shows confusion matrices in the form of heatmaps that plot annotator responses by the estimated true labels. Darker cells have higher probabilities. Perfect response accuracy (agreement with the inferred true label) would yield black squares on the diagonal and white on the off-diagonal. Figure 6a and Figure 6b show heatmaps for four annotators for the two words of the four that had the highest and third highest \u03b1 values.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 165, |
| "end": 173, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 503, |
| "end": 511, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 797, |
| "end": 805, |
| "text": "Figure 6", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 1095, |
| "end": 1104, |
| "text": "Figure 6a", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 1109, |
| "end": 1118, |
| "text": "Figure 6b", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimates for Prevalence and Labels", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The two figures show that the Turkers were generally more accurate on add (verb) than on help (verb), which is consistent with the differences in the interannotator agreement of trained annotators on these two words. In contrast to what can be learned from agreement metrics, inference based on the annotation model provides estimates of bias towards specific values. Figure 6a shows the bias of these annotators to overuse WordNet sense 1 for help. Further, there were no assignments of senses 6 or 8 for this word. The figures provide a succinct visual sum-mary that there were more differences across the four annotators for help than for add, with more bias towards overuse of not only sense 1, but also senses 2 (annotators 8 and 41) and 3 (annotator 9). When annotator 8 uses sense 1, the true label is often sense 6, thus illustrating how annotators provide information about the true label even from inaccurate responses.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 368, |
| "end": 377, |
| "text": "Figure 6a", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotator Accuracy and Bias", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Mean accuracies per word ranged from 0.86 to 0.05, with most words showing a large spread across senses, and higher mean accuracy for the more frequent senses. Mean accuracy for add was 0.90 for sense 1, 0.79 for sense 2, and much lower for senses 6 (0.29) and 7 (0.19). For help, mean accuracy was best on sense 1 (0.73), which was also the most frequent, but it was also quite good on sense 4 (0.64), which was much less frequent. Mean accuracies on senses of help ranged from 0.11 (senses 5, 7, and other) to 0.73 (sense 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotator Accuracy and Bias", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For many of the words, the model yields the same label values as the trained annotator on a large majority of instances, yet for nearly as many words there is more disparity. After we discuss how the modelbased and trained annotators labels line up with each other, we argue that the model estimates are better. The two sets of labels cannot be differentiated from one another by mutual information. In contrast to the model estimates, the trained annotator labels have no confidence value, and no estimate for the trained annotator's accuracy. We conclude the section with a cost comparison. Figure 7 compares how many instances have the same labels from the trained annotators and of Turkers (blue); from the trained annotators and the model (red), and from the Turker Plurality and the model (green). Recall that about ninety percent of the instances labled by trained annotators have a single label; for the ten percent with two to four annotators, we used the majority label if there was one, else gave each tied sense a proportional amount of the vote. Figure 7a shows 22 words where all three comparisons have about the same relative proportion in common (70%-98% on average). Here sets with the least overlap are the trained annotators compared with the model, with the exception of win-dow (noun). The bottom figure shows the 23 words where the proportion in common is relatively lower (35%-75% on average), mostly due to the two comparisons for the trained annotators. Across the 45 words, the proportion of instances that had the same labels assigned by the trained annotators and the model does not correlate with the \u03b1 scores for the words, or with pairwise agreement Previous work has shown that model-based estimates are superior to majority-voting (Snow et al., 2008) . Figure 7 shows that the trained annotators' labels match the model (red bars) consistently less often than they match the Turker plurality, which is often a majority (blue bars). 
There are a fair number of cases, however, with a large disparity between the trained annotators and Turkers. This is most apparent when the green bar is much higher than the red or blue bars. For the word meet (verb), for example, in 19% of cases the trained annotator used sense 4 of WordNet 3.0 (glossed as \"fill or meet a want or need\") where the the plurality of Turkers selected sense 5 (glossed as \"satisfy a condition or restriction\"). Notably, in WordNet 3.1, two of the Word-Net 3.0 senses for meet (verb) have been removed, including the sense 5 that the Turkers favored in our data. A similar situation occurs with date (noun): 17% of cases where the trained annotator used sense 4, the plurality of Turkers used sense 5; the former sense 4 is no longer in WordNet 3.0.", |
| "cite_spans": [ |
| { |
| "start": 1764, |
| "end": 1783, |
| "text": "(Snow et al., 2008)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 593, |
| "end": 601, |
| "text": "Figure 7", |
| "ref_id": "FIGREF8" |
| }, |
| { |
| "start": 1059, |
| "end": 1068, |
| "text": "Figure 7a", |
| "ref_id": "FIGREF8" |
| }, |
| { |
| "start": 1786, |
| "end": 1794, |
| "text": "Figure 7", |
| "ref_id": "FIGREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For the trained annotators, interannotator agreement and pairwise agreement varied widely, as shown in Figure 3 . Measures of the information provided by labels from Turkers and trained annotators give a similarly wide range across both groups. Figure 8 shows a histogram of estimated mutual information for Turkers and MASC annotators across the four words. The most striking feature of these plots is the large variation in mutual information scores within both groups of annotators for each word (note that date and help had many more trained annotators than add or ask). There is no evidence that a label from a trained annotator provides more information than a Turker's. Thus we conclude that a modelbased label derived from many Turkers is preferable to a label from a single trained annotator.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 103, |
| "end": 111, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 245, |
| "end": 253, |
| "text": "Figure 8", |
| "ref_id": "FIGREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In contrast to current best practice, an annotation model yields far more information about the most essential aspect of annotation efforts, namely how 0.1$ 0.2$ 0.3$ 0.4$ 0.5$ 0.6$ 0.7$ 0.8$ 0.9$ 1$ %$Trained$=$Turker$Plurality$ %$Trained$=$Turker$Model$ %$Turker$Plur$=$Turker$Model$ (a) For these 22 words, the three sets of labels (trained annotators, Turker plurality, Turker model) have a high proportion in common and lower variance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(b) For these 23 the words, the three sets of labels (trained annotators, Turker plurality, Turker model) have a lower proportion in common and higher variance. much uncertainty is associated with each gold standard label. In our case, the richer information comes at a lower cost. Over the course of a five-year period that included development of the infrastructure, 17 undergraduates who annotated the 116 MASC words were paid an estimated total of $80,000 for 116 words \u00d7 1000 sentences per word, which comes to a unit cost of $0.70 per ground truth label. In a 12 month period with 6 months devoted to infrastructure and trial runs, we paid 228 Turkers a total of $15,000 for 45 words \u00d7 1000 sentences per word, for a unit cost of $0.33 per ground truth label. In short, the AMT data cost less than half the trained annotator data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For annotation tasks such as this one, where each candidate word has multiple class labels, the comparison between the two methods of data collection shows that the model-based estimates from crowdsourced data have at least the same quality, if not higher, for less cost. The fact that each label has an associated confidence makes them more valuable because the end user can choose how to handle labels with lower certainty: for example, to assign them less weight in evaluating word sense disam-biguation systems, or to eliminate them from training for statistical approaches to building such systems. Each word here has a distinct set of classes, and the results from both the trained annotators and model indicate that some sets of sense labels led to greater agreement or a higher proportion of high confidence labels. In many cases, results for the words with fewer high confidence labels could be improved by revising the sense inventories, as suggested by the examples with meet (verb) and date (noun).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Alternative metrics to measure association of raters on binary data have been proposed to overcome deficiencies in \u03ba when there is data skew. The Gindex (Holley and Guildford, 1964; Vegelius, 1981) , for example, is argued to improve over the Matthews Correlation Coefficient (Matthews, 1975) . Feinstein and Cicchetti (1990) outline the undesirable behavior that \u03ba-like metrics will have lower values when there is high agreement on highly skewed data. \u03ba assumes that chance agreement on the more prevalent class becomes high. Gwet (2008) presents a metric that estimates the likelihood of chance agreement based on the assumption that chance agreement occurs only when annotators assign labels randomly, which is estimated from the data. Klebanov and Beigman (2009) make a related assumption that annotators agree on easy cases and behave randomly on hard cases, and propose a model to estimate the proportion of hard cases.", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 181, |
| "text": "(Holley and Guildford, 1964;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 182, |
| "end": 197, |
| "text": "Vegelius, 1981)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 276, |
| "end": 292, |
| "text": "(Matthews, 1975)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 295, |
| "end": 325, |
| "text": "Feinstein and Cicchetti (1990)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Model-based gold-standard estimation such as (Dawid and Skene, 1979) has long been the standard in epidemiology, and has been applied to disease prevalence estimation (Albert and Dodd, 2008) and also to many other problems such as human annotation of craters in images of Venus (Smyth et al., 1995) . Smyth et al. (1995) , Rogers et al. (2010), and Raykar et al. (2010) all discuss the advantages of learning and evaluation with probabilistically annotated corpora. Rzhetsky et al. (2009) and Whitehill et al. (2009) estimate annotation models without gold-standard supervision, but neither models annotator biases, which are critical for estimating true labels.", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 68, |
| "text": "(Dawid and Skene, 1979)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 167, |
| "end": 190, |
| "text": "(Albert and Dodd, 2008)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 278, |
| "end": 298, |
| "text": "(Smyth et al., 1995)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 301, |
| "end": 320, |
| "text": "Smyth et al. (1995)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 323, |
| "end": 348, |
| "text": "Rogers et al. (2010), and", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 349, |
| "end": 369, |
| "text": "Raykar et al. (2010)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 466, |
| "end": 488, |
| "text": "Rzhetsky et al. (2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 493, |
| "end": 516, |
| "text": "Whitehill et al. (2009)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Perhaps the first application of Dawid and Skene's model to NLP data was the Bruce and Wiebe (1999) investigation of word sense. Much later, Snow et al. (2008) used the same model to show that combining noisy crowdsourced annotations produced data of equal quality to five distinct published gold standards, including an example of word sense. Both works estimate the Dawid and Skene model using supervised gold-standard category data, which allows direct estimation of annotator accuracy and bias. Hovy et al. (2013) simpler model to filter out spam annotators. Crowdsourcing is now so widespread that NAACL 2010 sponsored a workshop on \"Creating Speech and Language Data with Amazon's Mechanical Turk\" and in 2011, TREC added a crowdsourcing track. Active learning is an alternative method to annotate corpora, thus the Troia project (Ipeirotis et al., 2010) is a web service implementation of a maximum a posteriori estimator for the Dawid and Skene model, with a decision-theoretic module for active learning to select the next item to label. They draw on the Sheng et al. (2008) model to actively select the next label to elicit, which provides a very simple estimate of expected accuracy for a given number of labels. This essentially provides a statistical power calculation for annotation tasks. Because it is explicitly designed to measure reduction in uncertainty, mutual information should be the ideal choice for guiding such active labeling (MacKay, 1992) . Such a strategy of selecting features with maximal mutual information has proven effective in greedy featureselection strategies for classifiers, despite the fact that the objective function was classification accuracy, not entropy (Yang and Pedersen, 1997; Forman, 2003) .", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 99, |
| "text": "Bruce and Wiebe (1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 141, |
| "end": 159, |
| "text": "Snow et al. (2008)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 499, |
| "end": 517, |
| "text": "Hovy et al. (2013)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1064, |
| "end": 1083, |
| "text": "Sheng et al. (2008)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1454, |
| "end": 1468, |
| "text": "(MacKay, 1992)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1703, |
| "end": 1728, |
| "text": "(Yang and Pedersen, 1997;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1729, |
| "end": 1742, |
| "text": "Forman, 2003)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Interannotator agreement applies to a set of annotations, and provides no information about individual instances. When two or more annotators have very high interannotator agreement on a task, unless they have perfect accuracy, there will be instances where they agreed incorrectly, and no way to predict which instances these are. Moreover, for many semantic annotation tasks, high \u03ba is impractical. In addition, there is often a pragmatic dimension where labels represent community-established conventions of usage. In such cases, no one individual can reliably assign labels because the ground truth derives from consensus among the community of language users. Word sense annotation is such a task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "An annotation model applied to the type of crowdsourced labels collected here provides more knowledge and higher quality gold standard labels at lower cost than the conventional method used in the MASC project. Those who would use the corpus for training benefit because they can differen-tiate high from low confidence labels. Those who would use such a corpus for cross-site evaluations of word sense disambiguation systems benefit because there are more evaluation options. Where the most probable label is relatively uncertain, systems can be penalized less for an incorrect but close response. Crowdsourcing has already made it possible to annotate corpora more cheaply, and wider use of annotation models in NLP should lead to more confidence from users in the corpora we create.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, 2 (2014) 311-326. Action Editor: Chris Callison-Burch.Submitted 2/2014; Revised 6/2014; Published 10/2014. c 2014 Association for Computational Linguistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Some researchers believe that all that is needed is one trustworthy annotator, which begs the question of how trust is assessed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "If items are not independent, as assumed here, the interval becomes wider.4 In a Bayesian setting, model parameters are also modeled as randomly generated from a prior distribution.5 The size constants defining the data collection are not generated as part of the model. In a \"discriminative\" model, only the outcomes and parameters are generated in this sense, not the predictors (i.e., features).6 For the data indexing, we use jj and ii to avoid confusion with the I items and J annotators of the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "URL not given yet to preserve anonymity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.anc.org/data/masc/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The first author was supported by NSF CRI-0708952 and CRI-1059312. The second author was supported by NSF CNS-1205516 and DoE DE-SC0002099. We thank Shreya Prasad for work on the data collection, Mizi Morris and Boyi Xie for results munging and feedback on the paper, and Marilyn Walker for advice on collaborating with turkers on the design of HITs through message boards.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "On estimating diagnostic accuracy from studies with multiple raters and partial gold standard evaluation", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [ |
| "S" |
| ], |
| "last": "Albert", |
| "suffix": "" |
| }, |
| { |
| "first": "Lori", |
| "middle": [ |
| "E" |
| ], |
| "last": "Dodd", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of the American Statistical Association", |
| "volume": "103", |
| "issue": "481", |
| "pages": "61--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul S. Albert and Lori E. Dodd. 2008. On estimating diagnostic accuracy from studies with multiple raters and partial gold standard evaluation. Journal of the American Statistical Association, 103(481):61-73.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Inter-coder agreement for computational linguistics", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "4", |
| "pages": "555--596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computa- tional Linguistics, 34(4):555-596.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Wordsense distinguishability and inter-coder agreement", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "F" |
| ], |
| "last": "Bruce", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [ |
| "M" |
| ], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "53--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca F. Bruce and Janyce M. Wiebe. 1998. Word- sense distinguishability and inter-coder agreement. In Proceedings of Empirical Methods in Natural Lan- guage Processing, pages 53-60.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Recognizing subjectivity: a case study of manual tagging", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "F" |
| ], |
| "last": "Bruce", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [ |
| "M" |
| ], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Natural Language Engineering", |
| "volume": "1", |
| "issue": "1", |
| "pages": "1--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca F. Bruce and Janyce M. Wiebe. 1999. Recog- nizing subjectivity: a case study of manual tagging. Natural Language Engineering, 1(1):1-16.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Creating speech and language data with Amazon's Mechanical Turk", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with Amazon's Mechanical Turk. In Proceedings of the NAACL HLT 2010 Work- shop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 1-12.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Multilevel Bayesian models of categorical data annotation", |
| "authors": [ |
| { |
| "first": "Bob", |
| "middle": [], |
| "last": "Carpenter", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bob Carpenter. 2008. Multilevel Bayesian models of categorical data annotation. Technical report, Alias-i, Inc.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1960, |
| "venue": "", |
| "volume": "20", |
| "issue": "", |
| "pages": "37--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nom- inal scales. Educational and Psychological Measure- ment, 20:37-46.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Elements of Information Theory", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "Joy", |
| "middle": [ |
| "A" |
| ], |
| "last": "Cover", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. Wiley-Interscience, New York, NY, USA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Maximum likelihood estimation of observer error-rates using the EM algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "P" |
| ], |
| "last": "Dawid", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "M" |
| ], |
| "last": "Skene", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics)", |
| "volume": "28", |
| "issue": "1", |
| "pages": "20--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. P. Dawid and A. M. Skene. 1979. Maximum likeli- hood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society. Se- ries C (Applied Statistics), 28(1):20-28.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The Kappa statistic: A second look", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Di", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugenio", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "1", |
| "pages": "95--101", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barbara di Eugenio and Michael Glass. 2004. The Kappa statistic: A second look. Computational Linguistics, 30(1):95-101.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "On the usage of Kappa to evaluate agreement on coding tasks", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Di", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugenio", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barbara di Eugenio. 2000. On the usage of Kappa to evaluate agreement on coding tasks. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Efron", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Statistical Science", |
| "volume": "1", |
| "issue": "1", |
| "pages": "54--77", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Efron and R. Tibshirani. 1986. Bootstrap meth- ods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1):54-77.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "High agreement but low Kappa: I. The problems of two paradoxes", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Alvan", |
| "suffix": "" |
| }, |
| { |
| "first": "Domenic", |
| "middle": [ |
| "V" |
| ], |
| "last": "Feinstein", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Cicchetti", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of Clinical Epidemiology", |
| "volume": "43", |
| "issue": "6", |
| "pages": "543--549", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alvan R. Feinstein and Domenic V. Cicchetti. 1990. High agreement but low Kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 43(6):543 -549.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "An extensive empirical study of feature selection metrics for text classification", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Forman", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "1289--1305", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Forman. 2003. An extensive empirical study of feature selection metrics for text classification. Jour- nal of Machine Learning Research, 3:1289-1305.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Computing inter-rater reliability and its variance in the presence of high agreement", |
| "authors": [ |
| { |
| "first": "Kilem", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Gwet", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "British Journal of Mathematical and Statistical Psychology", |
| "volume": "61", |
| "issue": "1", |
| "pages": "29--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilem Li Gwet. 2008. Computing inter-rater reliabil- ity and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psy- chology, 61(1):29-48.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A note on the G index of agreement", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "W" |
| ], |
| "last": "Holley", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "P" |
| ], |
| "last": "Guildford", |
| "suffix": "" |
| } |
| ], |
| "year": 1964, |
| "venue": "Educational and Psychological Measurement", |
| "volume": "24", |
| "issue": "", |
| "pages": "749--753", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. W. Holley and J. P. Guildford. 1964. A note on the G index of agreement. Educational and Psychological Measurement, 24:749-753.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning whom to trust with MACE", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Tayler", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1120--1130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dirk Hovy, Tayler Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 1120-1130, Atlanta, Georgia, June. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Quality management on Amazon Mechanical Turk", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Panagiotis", |
| "suffix": "" |
| }, |
| { |
| "first": "Foster", |
| "middle": [], |
| "last": "Ipeirotis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Provost", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "64--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Panagiotis G. Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP 2010, pages 64-67, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "From annotator agreement to noise models", |
| "authors": [ |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Beata Beigman Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Beigman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "4", |
| "pages": "495--503", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2009. From annotator agreement to noise models. Compu- tational Linguistics, 35(4):495-503.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Content analysis: An introduction to its methodology", |
| "authors": [ |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Krippendorff", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klaus Krippendorff. 1980. Content analysis: An intro- duction to its methodology. Sage Publications, Bev- erly Hills, CA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Information-based objective functions for active data selection", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "C" |
| ], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mackay", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Neural Computation", |
| "volume": "4", |
| "issue": "", |
| "pages": "590--604", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David J. C. MacKay. 1992. Information-based objective functions for active data selection. Neural Computa- tion, 4:590-604.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Comparison of the predicted and observed secondary structure of t4 phage lysozyme", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [ |
| "W" |
| ], |
| "last": "Matthews", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Biochimica et Biophysica Acta (BBA) -Protein Structure", |
| "volume": "405", |
| "issue": "2", |
| "pages": "442--451", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA) -Protein Struc- ture, 405(2):442-451.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A lexical database for English", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. A lexical database for English. Communications of the ACM, 38(11):39-41.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The MASC word sense corpus", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "J" |
| ], |
| "last": "Passonneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Collin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Ide", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "3025--3030", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca J. Passonneau, Collin F. Baker, Christiane Fell- baum, and Nancy Ide. 2012a. The MASC word sense corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3025-3030, Istanbul, Turkey. Eu- ropean Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Multiplicity and word sense: Evaluating and learning from multiply labeled word sense annotations", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "J" |
| ], |
| "last": "Passonneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Vikas", |
| "middle": [], |
| "last": "Bhardwaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Ansaf", |
| "middle": [], |
| "last": "Salleb-Aouissi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Ide", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Language Resources and Evaluation", |
| "volume": "46", |
| "issue": "2", |
| "pages": "219--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca J. Passonneau, Vikas Bhardwaj, Ansaf Salleb- Aouissi, and Nancy Ide. 2012b. Multiplicity and word sense: Evaluating and learning from multiply la- beled word sense annotations. Language Resources and Evaluation, 46(2):219-252.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Learning from crowds", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vikas", |
| "suffix": "" |
| }, |
| { |
| "first": "Shipeng", |
| "middle": [], |
| "last": "Raykar", |
| "suffix": "" |
| }, |
| { |
| "first": "Linda", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerardo", |
| "middle": [ |
| "Hermosillo" |
| ], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Valadez", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Florin", |
| "suffix": "" |
| }, |
| { |
| "first": "Linda", |
| "middle": [], |
| "last": "Bogoni", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Moy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "11", |
| "issue": "", |
| "pages": "1297--1322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Ger- ardo Hermosillo Valadez, Charles Florin, Luca Bo- goni, and Linda Moy. 2010. Learning from crowds. Journal of Machine Learning Research, 11:1297- 1322.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Semi-parametric analysis of multi-rater data", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Girolami", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamara", |
| "middle": [], |
| "last": "Polajnar", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Statistical Computing", |
| "volume": "20", |
| "issue": "", |
| "pages": "317--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Rogers, Mark Girolami, and Tamara Polajnar. 2010. Semi-parametric analysis of multi-rater data. Statistical Computing, 20:317-334.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "How to get the most out of your curation effort", |
| "authors": [ |
| { |
| "first": "Andrey", |
| "middle": [], |
| "last": "Rzhetsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagit", |
| "middle": [], |
| "last": "Shatkay", |
| "suffix": "" |
| }, |
| { |
| "first": "W. John", |
| "middle": [], |
| "last": "Wilbur", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "PLoS Computational Biology", |
| "volume": "5", |
| "issue": "5", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrey Rzhetsky, Hagit Shatkay, and W. John Wilbur. 2009. How to get the most out of your curation effort. PLoS Computational Biology., 5(5):1-13.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Get another label? Improving data quality and data mining using multiple, noisy labelers", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Victor", |
| "suffix": "" |
| }, |
| { |
| "first": "Foster", |
| "middle": [], |
| "last": "Sheng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Provost", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Panagiotis", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Fourteenth ACM International Conference on Knowledge Discovery and Data Mining (KDD)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeiro- tis. 2008. Get another label? Improving data qual- ity and data mining using multiple, noisy labelers. In Proceedings of the Fourteenth ACM International Conference on Knowledge Discovery and Data Min- ing (KDD).", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Inferring ground truth from subjectively-labeled images of Venus", |
| "authors": [ |
| { |
| "first": "Padhraic", |
| "middle": [], |
| "last": "Smyth", |
| "suffix": "" |
| }, |
| { |
| "first": "Usama", |
| "middle": [], |
| "last": "Fayyad", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Burl", |
| "suffix": "" |
| }, |
| { |
| "first": "Pietro", |
| "middle": [], |
| "last": "Perona", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Baldi", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "7", |
| "issue": "", |
| "pages": "1085--1092", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Padhraic Smyth, Usama Fayyad, Michael Burl, Pietro Perona, and Pierre Baldi. 1995. Inferring ground truth from subjectively-labeled images of Venus. In Ad- vances in Neural Information Processing Systems 7, pages 1085-1092. MIT Press.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Cheap and fast -but is it good? Evaluating non-expert annotations for natural language tasks", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "254--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast -but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of Empirical Meth- ods in Natural Language Processing (EMNLP), pages 254-263, Honolulu.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Significance tests for the G-index", |
| "authors": [], |
| "year": 1981, |
| "venue": "Educational and Psychological Measurement", |
| "volume": "41", |
| "issue": "", |
| "pages": "99--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Vegelius. 1981. Significance tests for the G-index. Educational and Psychological Measurement, 41:99- 108.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Whitehill", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Ruvolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Tingfan", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Bergsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "Movellan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "22", |
| "issue": "", |
| "pages": "2035--2043", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier Movellan. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Y. Bengio, D. Schu- urmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2035-2043, December.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A comparative study on feature selection in text categorization", |
| "authors": [ |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [ |
| "O" |
| ], |
| "last": "Pedersen", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Fourteenth International Conference on Machine Learning, ICML '97", |
| "volume": "", |
| "issue": "", |
| "pages": "412--420", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yiming Yang and Jan O. Pedersen. 1997. A compara- tive study on feature selection in text categorization. In Proceedings of the Fourteenth International Con- ference on Machine Learning, ICML '97, pages 412- 420, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Graphical model sketch of the Dawid-Skene model enhanced with Dirichlet priors. Sizes: J number of annotators, K number of categories, I number of items, N number of labels collected. Estimated parameters: \u2713 annotator accuracies/biases, \u21e1 category prevalence, z true category. Observed data: y labels. Hyperpriors: \u21b5 accuracies/biases, prevalence.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "text": "Graphical model sketch of the Dawid andSkene model enhanced with Dirichlet priors. Sizes: J number of annotators, K number of categories, I number of items, N number of labels collected. Estimated parameters: \u03b8 annotator accuracies/biases, \u03c0 category prevalence, z true category. Observed data: y labels. Hyperpriors: \u03b1 accuracies/biases, \u03b2 prevalence.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "text": "Table of annotations y indexed by word instance ii and annotator jj.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "text": "Krippendorff's \u03b1 and pairwise agreement for the 45 MASC words in the crowdsourcing study, with number of WordNet senses available and used. Pairwise agreement was computed according to the formula in Section 2.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "text": "date (noun) (\u03b1 = 0.47, agreement=0.57) help (verb) (\u03b1 = 0.26, agreement=0.58) ask (verb) (\u03b1 = 0.20, agreement=0.45) Prevalence estimates for 4 words: the x-axis", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "text": "Proportion of instances where posterior probabilities \u2265 0.99", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "text": "(a) Four of 57 annotators for add (verb) (b) Four of 49 annotators for help(verb)", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF7": { |
| "text": "Example confusion matrices of estimated annotator accuracies and biases ask (0.20).", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF8": { |
| "text": "Proportion of instances labeled by both trained annotators and Turkers (total instances in parentheses) where the trained annotator label matches the Turker plurality (blue), where the trained annotator label matches the model (red), and where the Turker plurality matches the model (green)", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF9": { |
| "text": "0.5 1.0 1.5 2.0 2.5 0.0 0.5 1.0 1.5 2.0 2.5 0.0 0.5 1.0 1.5 2.0 2.5 0.0 0.5 1.0 1.5 2.0 2.5 mutual_info count Histograms of mutual information estimates for the four example words; trained annotators are in the top row and Turkers in the bottom.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "text": "Table of annotations y ind instance ii and annotator jj.", |
| "num": null, |
| "content": "<table><tr><td>example, the first two rows show t</td></tr><tr><td>annotators 1 and 3 assigned labels 4</td></tr><tr><td>tively. The third row says that for i</td></tr><tr><td>tator 17 provided label 5.</td></tr><tr><td>Dawid and Skene's model includ</td></tr><tr><td>\u2022 z i 2 1:K for the true category \u2022 \u21e1 k 2 [0, 1] for the probability P K of category k, subject to k=1</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "Kinds of Annotators. A spam annotator provides zero information about a category, because H[Z i |Y n ] = H[Z i ]. Spam annotators provide the minimum possible mutual information, i.e., I[Z i ; Y n ] = 0. A perfectly accurate annotator is one for whom Pr[Y i = k|Z i ] is 1 if k = Z i and 0 otherwise. For such annotators, observing their label removes all uncertainty, so that H[Z i |Y n ] = 0. A perfect annotator provides maximum mutual information, i.e., I[Z i", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |