{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:30.599769Z"
},
"title": "Clean or Annotate: How to Spend a Limited Data Collection Budget",
"authors": [
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ASAPP",
"location": {
"settlement": "New York",
"region": "NY"
}
},
"email": "dchen@asapp.com"
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"region": "NY"
}
},
"email": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"region": "NY"
}
},
"email": "bowman@nyu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher levels of inaccurate labeling compared to expert labeling. There are two common strategies to manage the impact of such noise: The first involves aggregating redundant annotations, but comes at the expense of labeling substantially fewer examples. Secondly, prior works have also considered using the entire annotation budget to label as many examples as possible and subsequently apply denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach which reserves a fraction of annotations to explicitly clean up highly probable error samples to optimize the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify specific examples that appear most likely to be incorrect, which we spend the remaining budget to relabel. Experiments across three model variations and four natural language processing tasks show our approach outperforms or matches both label aggregation and advanced denoising methods designed to handle noisy labels when allocated the same finite annotation budget.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher levels of inaccurate labeling compared to expert labeling. There are two common strategies to manage the impact of such noise: The first involves aggregating redundant annotations, but comes at the expense of labeling substantially fewer examples. Secondly, prior works have also considered using the entire annotation budget to label as many examples as possible and subsequently apply denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach which reserves a fraction of annotations to explicitly clean up highly probable error samples to optimize the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify specific examples that appear most likely to be incorrect, which we spend the remaining budget to relabel. Experiments across three model variations and four natural language processing tasks show our approach outperforms or matches both label aggregation and advanced denoising methods designed to handle noisy labels when allocated the same finite annotation budget.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern machine learning often depends on heavy data annotation efforts. To keep costs in check while maintaining speed and scalability, many people turn to non-specialist crowd-workers through platforms like Mechanical Turk. Although crowdsourcing reduces costs to a reasonable level, it also tends to produce substantially higher error rates compared with expert labeling. The classic approach for improving reliability in classification tasks is to perform redundant annotations which are later aggregated using a majority vote to form a single gold label (Snow et al., 2008; Sap et al., 2019a; Potts et al., 2021; Sap et al., 2019b) .",
"cite_spans": [
{
"start": 558,
"end": 577,
"text": "(Snow et al., 2008;",
"ref_id": "BIBREF52"
},
{
"start": 578,
"end": 596,
"text": "Sap et al., 2019a;",
"ref_id": "BIBREF41"
},
{
"start": 597,
"end": 616,
"text": "Potts et al., 2021;",
"ref_id": "BIBREF35"
},
{
"start": 617,
"end": 635,
"text": "Sap et al., 2019b)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Data cleaning reserves a small portion of the annotation budget for targeted relabeling of examples that are identified as especially likely to be noisy. In contrast, the default and denoising methods spend the entire budget upfront, yielding lower quality data.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "This solution is easy to understand and implement, but comes at the expense of severely reducing the number of labeled examples available for training. As an alternative, researchers have made great strides in designing automatic label cleaning methods, noise-insensitive training schemes and other mechanisms to work with noisy data (Sukhbaatar et al., 2015; Han et al., 2018; Tanaka et al., 2018) . For example, some methods learn a noise transition matrix for reweighting the label (Dawid and Skene, 1979; Goldberger and Ben-Reuven, 2017) , while others modify the loss (Ghosh et al., 2017; Patrini et al., 2017) . Another set of options generate cleaned examples from mislabeled ones through semi-supervised pseudo-labeling (Jiang et al., 2018; Li et al., 2020) . However, getting many of these techniques to work well in practice is often a struggle due to the difficulty of training extra model components.",
"cite_spans": [
{
"start": 334,
"end": 359,
"text": "(Sukhbaatar et al., 2015;",
"ref_id": "BIBREF53"
},
{
"start": 360,
"end": 377,
"text": "Han et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 378,
"end": 398,
"text": "Tanaka et al., 2018)",
"ref_id": "BIBREF55"
},
{
"start": 485,
"end": 508,
"text": "(Dawid and Skene, 1979;",
"ref_id": "BIBREF8"
},
{
"start": 509,
"end": 541,
"text": "Goldberger and Ben-Reuven, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 573,
"end": 593,
"text": "(Ghosh et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 594,
"end": 615,
"text": "Patrini et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 728,
"end": 748,
"text": "(Jiang et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 749,
"end": 765,
"text": "Li et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "We avoid the complexity of repairing or reweighting the labels of existing annotations by instead obtaining wholly new annotations from crowdworkers for a selected subset of samples. In doing so, our proposed methods require no extra model parameters to train, yet still retain the benefits of high label quality. Concretely, we start by allocating a large portion of the labeling budget to obtain an initial training dataset. The examples in this dataset are annotated in a single pass, and we would expect some percentage of them to be incorrectly labeled. However, enough of the labels should be correct to train a reasonable base model. Next, we take advantage of the recently trained model to identify likely mislabeled examples, and then spend the remaining budget to relabel those examples. Finally, we train a new model using the original data combined with the cleaned data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "The key ingredient of our method is a function for selecting which examples to re-annotate. We consider multiple approaches for identifying candidates for relabeling, none of which have been applied before to denoising data within NLP settings. In all cases, relabeling the target examples relies neither on training any extra model components nor on tuning sensitive hyper-parameters. By using the existing annotation pipeline, the implementation becomes relatively simple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "To test the generalizability of our method, we compare against multiple baselines on four tasks spanning multiple natural language formats. This departs from previous studies on human labeling in NLP, which focus exclusively on text classification (Wang et al., 2019; Jindal et al., 2019; Tayal et al., 2020) . The control baseline and denoising baselines perform a single annotation per example. The majority vote baseline triples the annotations per example, but consequently is trained on only one third the number of examples to meet the annotation budget. We lastly include an oracle baseline that lifts the restriction on a fixed budget and instead uses all available annotations. We test across three model types, ranging from small ones taking minutes to train up to large transformer models which require a week to reach convergence. We find that under the same fixed annotation budget, cleaning methods match or surpass all baselines.",
"cite_spans": [
{
"start": 248,
"end": 267,
"text": "(Wang et al., 2019;",
"ref_id": "BIBREF60"
},
{
"start": 268,
"end": 288,
"text": "Jindal et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 289,
"end": 308,
"text": "Tayal et al., 2020)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "In summary, our contributions include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "1. We examine an alternative approach to learning with noisy labels, which arise when data is collected under low-resource settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "2. We build four versions of our approach that vary in how they target examples to relabel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "3. We compare against a number of baselines, many of which have never been implemented before in the natural language setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "Overall, our Large Loss method, which selects examples for relabeling by the size of their training loss, performs the best out of all variations we consider despite requiring no extra parameters to train.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num Annotations",
"sec_num": null
},
{
"text": "The standard method for learning in the presence of unreliable annotation is to perform redundant annotation, where each example is annotated multiple times and a simple majority vote determines the final label (Snow et al., 2004; Russakovsky et al., 2015; Bowman et al., 2015) . While effective, this can be costly since it severely reduces the amount of data collected. To tackle this problem, researchers have developed several alternative methods for dealing with noisy data that can be broken down into three categories.",
"cite_spans": [
{
"start": 211,
"end": 230,
"text": "(Snow et al., 2004;",
"ref_id": "BIBREF51"
},
{
"start": 231,
"end": 256,
"text": "Russakovsky et al., 2015;",
"ref_id": "BIBREF39"
},
{
"start": 257,
"end": 277,
"text": "Bowman et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Denoising Techniques Noisy training examples can be thought of as the result of perturbing the true, underlying labels by some source of noise. One group of methods assume the source of noise is from confusing one label class for another, and is resolved by reverting the errors through a noise transition matrix (Sukhbaatar et al., 2015; Goldberger and Ben-Reuven, 2017) . Other methods work under the assumption that labeling errors occur due to annotator biases (Raykar et al., 2009; Rodrigues and Pereira, 2018) , such as non-expert labelers (Welinder et al., 2010; Guan et al., 2018) or spammers (Hovy et al., 2013; Khetan et al., 2018) .",
"cite_spans": [
{
"start": 313,
"end": 338,
"text": "(Sukhbaatar et al., 2015;",
"ref_id": "BIBREF53"
},
{
"start": 339,
"end": 371,
"text": "Goldberger and Ben-Reuven, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 465,
"end": 486,
"text": "(Raykar et al., 2009;",
"ref_id": "BIBREF36"
},
{
"start": 487,
"end": 515,
"text": "Rodrigues and Pereira, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 546,
"end": 569,
"text": "(Welinder et al., 2010;",
"ref_id": "BIBREF61"
},
{
"start": 570,
"end": 588,
"text": "Guan et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 601,
"end": 620,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF17"
},
{
"start": 621,
"end": 641,
"text": "Khetan et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, some methods model the noise of each individual example, either through expectationmaximization (Dawid and Skene, 1979; Whitehill et al., 2009; Mnih and Hinton, 2012) , or neural networks (Felt et al., 2016; Jindal et al., 2019) . Another set of methods modify the loss function to make the model more robust to noise (Patrini et al., 2017) . For example, some methods add a regularization term (Tanno et al., 2019) , while others bound the amount of loss contributed by individual training examples (Ghosh et al., 2017; Zhang and Sabuncu, 2018) . The learning procedure can also be modified such that the importance of training examples is dynamically reweighted to prevent overfitting to noise (Jiang et al., 2018) .",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Dawid and Skene, 1979;",
"ref_id": "BIBREF8"
},
{
"start": 129,
"end": 152,
"text": "Whitehill et al., 2009;",
"ref_id": "BIBREF62"
},
{
"start": 153,
"end": 175,
"text": "Mnih and Hinton, 2012)",
"ref_id": "BIBREF31"
},
{
"start": 197,
"end": 216,
"text": "(Felt et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 217,
"end": 237,
"text": "Jindal et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 327,
"end": 349,
"text": "(Patrini et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 404,
"end": 424,
"text": "(Tanno et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 509,
"end": 529,
"text": "(Ghosh et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 530,
"end": 554,
"text": "Zhang and Sabuncu, 2018)",
"ref_id": "BIBREF68"
},
{
"start": 705,
"end": 725,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pseudo-labeling represents a final set of methods that either devise new labels for noisy data (Reed et al., 2015; Tanaka et al., 2018) or generate wholly new training examples (Arazo et al., 2019; Li et al., 2020) . Other approaches from this family use two distinct networks to produce examples for each other to learn from (Han et al., 2018; Yu et al., 2019) .",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Reed et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 115,
"end": 135,
"text": "Tanaka et al., 2018)",
"ref_id": "BIBREF55"
},
{
"start": 177,
"end": 197,
"text": "(Arazo et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 198,
"end": 214,
"text": "Li et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 326,
"end": 344,
"text": "(Han et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 345,
"end": 361,
"text": "Yu et al., 2019)",
"ref_id": "BIBREF64"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Budget Constrained Data Collection Our work also falls under research studying how to maximize the benefit of labeled data given a fixed annotation budget. Khetan and Oh (2016) apply model-based EM to model annotator noise, allowing singly-labeled data to outperform multiplylabeled data when annotation quality goes above a certain threshold. Bai et al. (2021) show that similar trade-offs exist when performing domain adaptation on a constrained budget. Zhang et al. (2021) observe that difficult examples benefit from additional annotations, so optimal spending actually varies the number of labels given to each example. Our approach actively targets examples for relabeling based on their likelihood of being noisy, whereas they randomly select examples for multi-labeling without considering their characteristics.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "Khetan and Oh (2016)",
"ref_id": "BIBREF24"
},
{
"start": 344,
"end": 361,
"text": "Bai et al. (2021)",
"ref_id": "BIBREF4"
},
{
"start": 456,
"end": 475,
"text": "Zhang et al. (2021)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Human in the Loop Finally, our work is also related to data labeling with humans. Annotators can be assisted through iterative labeling where models suggest labels for each training example (Settles, 2011; Schulz et al., 2019) , or through active learning where models suggest which examples to label (Settles and Craven, 2008; Ash et al., 2020) . In both cases, forward-facing decisions are made on incoming batches of unlabeled data. In contrast, our methods look back to previously collected data to select examples for relabeling. These activities are orthogonal to each other and can both be included when training a model. (See Appendix C)",
"cite_spans": [
{
"start": 190,
"end": 205,
"text": "(Settles, 2011;",
"ref_id": "BIBREF46"
},
{
"start": 206,
"end": 226,
"text": "Schulz et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 301,
"end": 327,
"text": "(Settles and Craven, 2008;",
"ref_id": "BIBREF47"
},
{
"start": 328,
"end": 345,
"text": "Ash et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lastly, re-active learning from (Sheng et al., 2008; Lin et al., 2016) proposes to relabel examples based on their predicted impact by retraining a classifier from scratch for every iteration of annotation. Accordingly, their method is impractical when adapted to the large Transformer models studied in this paper 1 . Instead, we identify examples to relabel through much less computationally expensive means, making the process tractable for real-life deployment.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "(Sheng et al., 2008;",
"ref_id": "BIBREF48"
},
{
"start": 53,
"end": 70,
"text": "Lin et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We study how to maximize model performance given a static data annotation budget. Concretely, we are given some model M for a target task, along with a budget of B total annotations, where each annotation allows us to apply a possibly noisy labeling function f_r(x), where r is the number of redundant annotations applied to a single example. Annotating some set of unlabeled instances produces noisy examples (X, f_r(X)) = (X, \u1ef8). Our goal is to achieve the best score possible for some primary evaluation metric S on a given task by cleaning the noisy labels, \u1ef8 \u2192 Y. Afterwards, we train a model with the cleaned data and then test it on a separate test set. For all our experiments, we set B = 12,000 as the total annotation budget.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Under Study",
"sec_num": "3"
},
{
"text": "As a default setting, we start with a Control baseline which uses the entire budget to annotate 12k examples, once each (n = 12,000; r = 1). To simulate a single annotation, we randomly sample a label from the set of labels offered for each example by the dataset. To obtain more accurate labels, people often perform multiple annotations on each example and use Majority Vote to aggregate the annotations. Accordingly, as a second baseline we annotate 4k examples three times each (n = 4,000; r = 3), matching the same total budget as before. In the event of a tie, we randomly select one of the candidate labels. Finally, we also include an Oracle baseline which uses the gold label for 12k examples (n = 12,000; r = 3|5). The gold label is either given by the dataset or generated by majority vote, where the label might result from aggregating five annotations rather than just three annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Under Study",
"sec_num": "3"
},
{
"text": "We consider four advanced baselines, all of which perform a single annotation per example (n = 12,000; r = 1) as seen in Figure 1. (1) Goldberger and Ben-Reuven (2017) propose applying a noise Adaptation layer which models the error probability of label classes. This layer is initialized as an identity matrix, which biases the layer to act as if there is no confusion in the labels. This noise transition matrix is then learned as a nonlinear layer on top of the baseline model M to denoise predictions. The layer is discarded during final inference since gold labels are used at test time and are assumed to no longer be noisy.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 130,
"text": "Figure 1",
"ref_id": null
],
"eq_spans": [],
"section": "Noise Correction Baselines",
"sec_num": "3.1"
},
{
"text": "(2) The Crowdlayer also operates by modeling the error probability, but assumes the noise arises due to annotator error, so a noise transition matrix is created for each worker (Rodrigues and Pereira, 2018) . Once again, this matrix is learned with gradient descent and removed for final inference. (3) The Forward correction method from (Patrini et al., 2017 ) adopts a loss correction approach which modifies the training objective. Given \u2212 log p(\u0177 = \u1ef9|x) as the original loss, Forward modifies this to become \u2212 log \u2211_{j=1}^{c} T_{ji} p(\u0177 = j|x), where c is the number of classes being predicted, and both i and j are used to index the classes. Matrix T is represented as a neural network that is learned jointly during pre-training. (4) Lastly, the Bootstrap method proposed by (Reed et al., 2015) generates pseudo-labels by gradually interpolating the predicted label \u0177 with the given noisy label y. We apply their recommended hard bootstrap variant, which uses the one-hot prediction for interpolation, since this was shown to work better in their experiments.",
"cite_spans": [
{
"start": 177,
"end": 206,
"text": "(Rodrigues and Pereira, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 335,
"end": 356,
"text": "(Patrini et al., 2017",
"ref_id": "BIBREF32"
},
{
"start": 777,
"end": 796,
"text": "(Reed et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Correction Baselines",
"sec_num": "3.1"
},
{
"text": "Rather than maximizing the number of examples annotated given our budget, we propose reserving a portion of the budget for reannotating the labels most likely to be incorrect. Specifically, we start by annotating a large number of examples one time each using the majority of the budget (n_a = 10,000; r = 1). We then pretrain a model M_1 using this noisy data, and observe either the model's training dynamics or output predictions to target examples for relabeling. Next, we use the remaining budget to annotate those examples two more times (n_b = 1,000; r = 2), allowing us to obtain a majority vote on those examples. The final training set is formed by combining the 1k multiply-annotated examples with the remaining 9k singly-annotated examples. We wrap up by initializing a new model M_2 with the weights from M_1 and fine-tune it with the clean data until convergence. We experiment with four approaches for discovering the most probable noisy labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning through Targeted Relabeling",
"sec_num": "3.2"
},
{
"text": "Area Under the Margin AUM identifies problematic labels by tracking the margin between the likelihood assigned to the target label class and the likelihood of the next highest class as training progresses (Pleiss et al., 2020) . Intuitively, if the gap between these two likelihoods is large, then the model is confident of its argmax prediction, presumably because the training label is correct. On the other hand, if the gap between them is small, or even negative, then the model is uncertain of its prediction, presumably because the label is noisy. AUM averages the margins over all training epochs and targets the examples with the smallest margins for relabeling.",
"cite_spans": [
{
"start": 205,
"end": 226,
"text": "(Pleiss et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning through Targeted Relabeling",
"sec_num": "3.2"
},
{
"text": "Cartography Dataset Cartography is a technique for mapping the training dynamics of a dataset to diagnose its issues (Swayamdipta et al., 2020) . The intuition is largely the same as for AUM, in that Cartography also chooses consistently low-confidence (i.e., low-probability) examples for relabeling. We take the suggestion from Section 5 of their paper to detect mislabeled examples by tracking the mean model probability of the true label across epochs. Note that unlike AUM, Cartography tracks the final model outputs after the softmax, rather than the logits before the softmax. These can lead to different rankings since Cartography does not take the other probabilities in the distribution into account.",
"cite_spans": [
{
"start": 117,
"end": 143,
"text": "(Swayamdipta et al., 2020)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning through Targeted Relabeling",
"sec_num": "3.2"
},
{
"text": "Large Loss (Arpit et al., 2017) found that correctly labeled examples are easier for a model to learn, and thus incur a small loss during training, whereas incorrectly labeled examples produce a large loss. Inspired by this observation and other similar works (Jiang et al., 2018) , the Large Loss method selects examples for cleaning by ranking the top n_b examples where the model achieves the largest loss during the optimal stopping point. The ideal stopping point is the moment after the model has learned to fit the clean data, but before it has started to memorize the noisy data (Zhang et al., 2017) . We approximate this stopping point by performing early stopping during training when performance on the development set fails to improve for three epochs in a row. We then use the earlier checkpoint for identifying errors.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Arpit et al., 2017)",
"ref_id": null
},
{
"start": 260,
"end": 280,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 587,
"end": 607,
"text": "(Zhang et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning through Targeted Relabeling",
"sec_num": "3.2"
},
{
"text": "Prototype We lastly consider identifying noisy labels as those which are farthest away compared to the other training data (Lee et al., 2018) . More specifically, we use a pretrained model to map all training examples into the same embedding space. Then, we select the vectors for each label class to form clusters where the centroid of each cluster is the \"prototype\" (Snell et al., 2017) . Finally, we define outliers as those far away from the centroid for their given class, as measured by Euclidean distance, which are then selected for cleaning.",
"cite_spans": [
{
"start": 123,
"end": 141,
"text": "(Lee et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 369,
"end": 389,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning through Targeted Relabeling",
"sec_num": "3.2"
},
{
"text": "To test our proposal, we select datasets that span four natural language processing tasks. We choose these datasets because they provide multiple labels per example, allowing us to simulate single- and multiple-annotation scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tasks",
"sec_num": "4.1"
},
{
"text": "Offense The Social Bias Frames dataset collects instances of biases and implied stereotypes found in text (Sap et al., 2020) . We extract just the label of whether a statement is offensive for binary classification.",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "(Sap et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tasks",
"sec_num": "4.1"
},
{
"text": "NLI We adopt the MultiNLI dataset for natural language inference (Williams et al., 2018) . The three possible label classes for each sentence pair are entailment, contradiction, and neutral.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tasks",
"sec_num": "4.1"
},
{
"text": "Sentiment Our third task uses the first round of the DynaSent corpus for four-way sentiment analysis (Potts et al., 2021) . The possible labels are positive, negative, neutral, and mixed.",
"cite_spans": [
{
"start": 101,
"end": 121,
"text": "(Potts et al., 2021)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tasks",
"sec_num": "4.1"
},
{
"text": "QA Our final task is question answering with examples coming from the NewsQA dataset (Trischler et al., 2017) . The input includes a premise taken from a news article, along with a query related to the topic. The target label consists of two indexes representing the start and end locations within the article that extract a span of text answering the query. Unlike the other tasks, the format for QA is span selection rather than classification. Due to this distinction, certain denoising methods that assume a fixed set of candidate labels are omitted from comparison.",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tasks",
"sec_num": "4.1"
},
{
"text": "In our experiments, we tune hyperparameters during initial training with only six runs, composed of three learning rates and two dropout levels (0.1 and 0.05). Occasionally, when varying dropout had no effect, we instead considered doubling the batch size from 16 to 32. We found an appropriate range of learning rates by initially conducting sanity checks on a sub-sample of development data for each task and model combination. Learning rates were chosen from the set [1e-6, 3e-6, 1e-5, 3e-5, 1e-4]. When a technique contained method-specific variables, we defaulted to the suggestions offered in their respective papers. We do not expect any of the methods to be particularly sensitive to specific hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Configuration",
"sec_num": "4.2"
},
{
"text": "We select three models for comparison that represent strong options at their respective model sizes. We repeat the process of example identification and simulated re-annotation separately for each model. We use all models as pre-trained encoders to embed the text inputs of the different tasks we study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variations",
"sec_num": "4.3"
},
{
"text": "DeBERTa-XLarge is our large model, which contains 750 million parameters and currently is the state-of-the-art on many natural language understanding tasks (He et al., 2021) . DistilRoBERTa represents a distilled version of RoBERTa-base . It contains 82 million parameters, compared to the 125 million parameters found in RoBERTa. Learning follows the distillation process set by DistilBERT, where a student model is trained to match the soft target probabilities produced by the larger teacher model (Sanh et al., 2019) . Fine-tuning DistilRoBERTa is approximately 60-70 times faster compared to fine-tuning DeBERTa-XLarge on the same task.",
"cite_spans": [
{
"start": 156,
"end": 173,
"text": "(He et al., 2021)",
"ref_id": "BIBREF16"
},
{
"start": 501,
"end": 520,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variations",
"sec_num": "4.3"
},
{
"text": "For the final model, we avoid using Transformers altogether and instead use the FastText bag-ofwords encoder (Joulin et al., 2017) . The FastText embeddings are left unchanged during training, so the only learned parameters are in the 2-layer MLP we use for producing the model's final output. The same output prediction setup is used for all models, with a 300-dimensional hidden state. Training the FastText models run roughly 100-120 faster compared to working with DeBERTa-XLarge. Table 1 displays results across all models types and tasks, with each row representing a different technique. All rows except the Oracle were trained using the same label budget of 12,000 annotations. 2 In some cases, a method may surpass the Oracle since we conducted limited hyperparameter tuning, but as expected, the Oracle model outperforms all other methods overall. Notably, the Control setting always beats the Majority setting. In fact, Majority is consistently the lowest-performing method on all models and tasks, showing that improved label quality is never quite enough to overcome the reduction in annotation quantity. Adaptation is the best among denoising methods, achieving the strongest results in two out of four settings. Large Loss is the best among cleaning methods, with the highest scores in the remaining two tasks. Prototypical is also a strong runner-up. Large Loss is the best overall method due to its consistency since it never drops below second on all tasks. Variance among the three seeds is fairly consistent for all models and methods within the same task. Specifically, the standard deviation for offense detection and NLI are both around 0.5, with sentiment analysis and QA around 1.5 and 4.5, respectively. We do not see any strong trends across tasks, nor any outliers for a specific method. Task Table 1a contains the results for offense language detection, where we see that Large Loss and Adaptation are the only methods to overtake the Control. 
These two are also the best overall performers on natural language inference as seen in Table 1b . The cleaning methods really shine on sentiment analysis and question answering where even the worst cleaning method often tops the best denoising method. We hypothesize this happens because the denoising methods work best in simple classification tasks, which we further explore in the next section. A handful of results are not reported in Table 1d since they refer to methods that are designed exclusively for classification tasks, and cannot be directly transferred to span selection.",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "(Joulin et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 686,
"end": 687,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 485,
"end": 492,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1816,
"end": 1829,
"text": "Task Table 1a",
"ref_id": "TABREF1"
},
{
"start": 2061,
"end": 2069,
"text": "Table 1b",
"ref_id": "TABREF1"
},
{
"start": 2413,
"end": 2421,
"text": "Table 1d",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Variations",
"sec_num": "4.3"
},
{
"text": "The larger models perform better than the smaller models in terms of downstream accuracy, but somewhat surprisingly, there does not seem to be any clear patterns in relation to the method. In other words, if a particular method performs well (poorly) with one model size, it tends to also do well (poorly) when applied to the other model sizes too. One possible exception to this is the Prototype method showing strong performance with DeBERTA-XLarge. This is possibly because a stronger model produces more valuable hidden state representations for determining outliers. Since method performance is largely independent of the model size, we use Dis-tillRoBERTa as the encoder for simplicity in the upcoming analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breakdown by Model",
"sec_num": null
},
{
"text": "Ablation How can we be sure that the cleaning methods are actually exhibiting a small, but consistent gain over the baselines rather than just natural variation? Perhaps the scores are close simply because all the methods use the same amount of training data. If the cleaning methods are indeed adding value, then they should perform much better than random selection. To measure this, we replace the pre-trained DistilRoBERTa model with a uniform sampler to identify examples for cleaning. Active learning has been shown to exhibit significant decrease when transferring across model types (Lowell et al., 2019) . In contrast, we argue that our method is not active learning since it is not directly dependent on the specific abilities of the target model. To test this claim, we also conduct an additional ablation whereby we replace one model type for another. Namely, we use the DeBERTa-XLarge model to select examples for cleaning, then train on the DistilRoBERTa model.",
"cite_spans": [
{
"start": 591,
"end": 612,
"text": "(Lowell et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Breakdown by Model",
"sec_num": null
},
{
"text": "The results in Table 3 show that randomly select-ing data points to relabel indeed lowers the final performance by a noticeable amount. By comparison, cross training models leads to a negligible drop in performance. We believe this occurs because targeted relabeling produces clean data, and clean data is useful regardless of the situation.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Breakdown by Model",
"sec_num": null
},
{
"text": "To better understand how the proposed clean methods operate, we conduct additional analysis with the sentiment analysis task. How well do clean methods select items? We compare the four proposed methods by first looking at the amount of overlap in the examples selected for relabeling. To calculate this, we gather all examples chosen for relabeling by their likelihood of annotation error. For a given pair of methods, we then find the size of their intersection and divide by the size of their union, which yields the Jaccard similarity. As shown in Table 2 , AUM and Large Loss have high overlap meaning that they select similar examples for cleaning. We additionally calculate the precision of each method by counting the number of times a label targeted for relabeling did not match the oracle label, and therefore legitimately requires cleaning. Based on Table 4 , we once again see reasonable performance for the Large Loss cleaning method. Qualitative examples for sentiment analysis are displayed in Table 5 , which were chosen as the most likely examples of label errors according to their respective methods. Large Loss consistently discovers 'neutral' labels that were mis-labeled as ",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 559,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 861,
"end": 868,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1009,
"end": 1016,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Analysis",
"sec_num": "6"
},
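The overlap and precision statistics described above are simple to compute. A minimal sketch (function names and the example IDs are hypothetical, for illustration only):

```python
def jaccard_similarity(selected_a, selected_b):
    """Overlap between the sets of examples two cleaning methods flag for relabeling:
    |intersection| / |union|."""
    a, b = set(selected_a), set(selected_b)
    return len(a & b) / len(a | b)

def relabel_precision(flagged, noisy_labels, oracle_labels):
    """Fraction of flagged examples whose noisy label disagrees with the oracle label,
    i.e. examples that legitimately required cleaning."""
    wrong = sum(1 for i in flagged if noisy_labels[i] != oracle_labels[i])
    return wrong / len(flagged)

# Hypothetical example IDs flagged by two methods.
aum = [3, 7, 9, 12]
large_loss = [3, 7, 12, 15]
print(jaccard_similarity(aum, large_loss))  # 0.6
```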
{
"text": "Great service for many years on our cars, but always at an additional price.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prototype",
"sec_num": null
},
{
"text": "Salad was great but a bit small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "We had to specify the order multiple times, but eventually when the food came it was actually pretty good. NEUTRAL Table 5 : Sentiment Analysis examples each method identified as being most likely to be label errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "'mixed', while Prototype also does a good job uncovering label errors, finding 'positive' examples mislabeled as 'negative'. Overall, we see that the best performing cleaning methods do seem to pick up on meaningful patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "How many examples should be cleaned? All cleaning experiments so far have been run with n a = 10,000 examples with n b = 1,000 samples chosen for relabeling. This is equivalent to using up \u03bb = 5 of the labeling budget upfront, with the remaining annotations saved for later. This \u03bb ratio was chosen as a reasonable default, but can also be tuned to be higher or lower. Figure 2 shows the results of varying the \u03bb parameter from a range of 1 6 to 11 12 . Based on the results, choosing \u03bb = 2 3 would have actually been the best option. This translates to n a = 8,000 examples with n b = 2,000 of those examples selected for re-labeling. As a sanity check, we also try dropping the n b cleaned examples when retraining, keeping only the noisy data. As seen in Figure 2 , the performance decreases as expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 377,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 758,
"end": 766,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
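The budget split can be written down explicitly. The reported pairs (10,000/1,000 at λ = 5/6 and 8,000/2,000 at λ = 2/3, under a 12,000-annotation budget) are consistent with each relabeled example consuming two of the reserved annotations; this sketch assumes that accounting, and the function name is ours:

```python
from fractions import Fraction

def split_budget(budget, lam, annotations_per_relabel=2):
    """Split a fixed annotation budget: a fraction lam labels the initial noisy
    dataset (one annotation per example); the remainder is spent relabeling
    suspect examples, assumed to cost annotations_per_relabel each."""
    n_a = int(budget * Fraction(lam))                 # examples labeled upfront
    n_b = (budget - n_a) // annotations_per_relabel   # examples selected for relabeling
    return n_a, n_b

print(split_budget(12_000, Fraction(5, 6)))  # (10000, 1000)
print(split_budget(12_000, Fraction(2, 3)))  # (8000, 2000)
```

Using `Fraction` avoids the float rounding that would otherwise turn 12,000 × 2/3 into 7999.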
{
"text": "What if we increase the number of classes? Based on the trends in the task breakdown of section 5, denoising methods seem to perform worse than explicit relabeling methods as the task gets harder. Most denoising methods may even become intractable for complex settings, such as those which require span selection. To test this hypothesis, we extend our setup to the GoEmotions dataset, where the goal of the task is to predict the emotion associated with a given utterance (Demszky et al., 2020) . Whereas previous tasks dealt with 2-4 classes, the GoEmotions dataset requires a model to select from 27 granular emotions and a neutral option, for a total of 28 classes. Intuitively, we would expect the denoising methods to struggle since the pairwise interactions among classes has grown exponentially larger. The results in Table 4 reveal that Large Loss again outperforms all other methods in prediction accuracy. Notably, Adaptation in particular continues to exhibit lower than average scores compared to other methods. This supports our claim that relabeling methods are more robust as the number of classes grows.",
"cite_spans": [
{
"start": 473,
"end": 495,
"text": "(Demszky et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 826,
"end": 833,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "What happens if noise is synthetically created? Many of the advanced denoising methods were originally tested on synthetically generated noise, whereas the noise in our datasets originates from noisy annotations, caused by the inherent ambiguity of natural language text (Pavlick and Kwiatkowski, 2019; Chen et al., 2020) . Perhaps this partially explains how our proposed relabeling methods are able to outperform prior work. To study this further, we create a perturbed dataset starting from the gold DynaSent examples. Specifically, we randomly sample replacement labels according to a fabricated noise transition matrix, rather than sampling from the label distribution provided by annotators. (More details in Appendix D.) With noise coming from an explicit transition matrix, it might be easier for all models to pick up on this pattern.",
"cite_spans": [
{
"start": 271,
"end": 302,
"text": "(Pavlick and Kwiatkowski, 2019;",
"ref_id": "BIBREF33"
},
{
"start": 303,
"end": 321,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "The middle column of Table 4 shows that all eight cleaning methods perform on par with each other. When comparing the variance on this dataset with synthetic noise against the original DynaSent dataset with natural noise, the standard deviation drops from 0.34 down to 0.28, highlighting the uniformity in performance among the eight methods. The denoising methods work as intended on synthetic noise, but such assumptions may not hold on real data with more nuanced errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "NEUTRAL",
"sec_num": null
},
{
"text": "Noisy data is a common problem when annotating data under low resource settings. Performing redundant annotation on the same examples to mitigate noise leads to having even less data to work with, so we propose data cleaning instead through targeted relabeling. We apply our methods on multiple model sizes and NLP tasks of varying difficulty, which show that saving a portion of a labeling budget for re-annotation matches or outperforms other baselines despite requiring no extra parameters to train or hyper-parameters to tune. Intuitively, our best method can be summarized as double-checking the examples that the model gets wrong to see if it is actually an incorrect label causing problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Thus, to make the most out of the scarce labeled data available, we believe a best practice should include targeting examples for cleaning rather than spending the entire annotation budget upfront. Future work includes exploring more sophisticated techniques for identifying examples to relabel and understanding how such cleaning models perform on additional NLP tasks such as machine translation or dialogue state tracking, which have distinct output formats.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Looking at Figure 3 , the similarity scores for offensive language detection and natural language inference largely match up with the scores found in sentiment analysis. In particular, Large Loss and AUM exhibit higher overlap with each other. Additionally, Prototype shows a medium overlap and Cartography shows no overlap at all with the other methods. We reach a similar conclusion that the Large Loss method is a reasonable technique.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Quantitative Results",
"sec_num": null
},
{
"text": "More examples can be found in Table 6 on the next page. We see that Large Loss is once again quite accurate in picking up labeling errors. Prototype for NLI does a great job at finding examples labeled as 'entailment' which should be something else. The hypotheses for all the selected examples contain negative sentiment, which may be located far away from the entailment examples in the embedding space. Cartography exhibits a pattern of always choosing examples labeled as 'contradiction'.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Additional Qualitative Examples",
"sec_num": null
},
{
"text": "On the surface, targeting examples for relabeling contains may seem similar to active learning or curriculum learning. Although there are certainly some parallels between these techniques, these are fundamentally different learning paradigms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Comparison to Learning Schemes",
"sec_num": null
},
{
"text": "Active learning methods typically choose new examples to label based on the uncertainty of the model (Tong and Koller, 2001; Hanneke, 2014) or on the diversity they can add to the existing distribution (Sener and Savarese, 2018; Ash et al., 2020) . Although sample noise can also be measured through model uncertainty, denoising and active learning do not have the same goal. More specifically, the noise of a training example is related to how its label is somehow incorrect. Perhaps the start of a span was not properly selected or an example that should not be tagged was accidentally included. More simply, an example is mislabeled as class A, when in fact it belongs to class B. This situation is not possible with active learning because the examples in active learning do not have labels yet! The entire point of active learning is to choose which examples should be labeled next (Settles and Craven, 2008; Settles, 2011) . Thus, when we try to identify examples for cleaning, we are re-labeling rather than labeling for the first time.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Tong and Koller, 2001;",
"ref_id": "BIBREF58"
},
{
"start": 125,
"end": 139,
"text": "Hanneke, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 202,
"end": 228,
"text": "(Sener and Savarese, 2018;",
"ref_id": "BIBREF45"
},
{
"start": 229,
"end": 246,
"text": "Ash et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 887,
"end": 913,
"text": "(Settles and Craven, 2008;",
"ref_id": "BIBREF47"
},
{
"start": 914,
"end": 928,
"text": "Settles, 2011)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Comparison to Learning Schemes",
"sec_num": null
},
{
"text": "Curriculum learning also selects examples for training based on model uncertainty (Bengio et al., 2009) and diversity maximization (Jiang et al., 2014) . It could be interpreted that easier examples are those that contain less noise, which would connect to our proposed process. However, traditional curriculum learning chooses these examples upfront rather than based on modeling dynamics (Jiang et al., 2015) . Extensions have been made under the umbrella of self-paced curriculum learning whereby examples are chosen for a curriculum based on how they react to a model's behavior (Kumar et al., 2010) . This is indeed similar to how we can choose to relabel examples based on the model loss. This aspect of relabeling though is the key distinction -we act on these examples in an attempt to denoise the dataset. On the other hand, self-paced learning simply feeds those same examples back into the model without any modification.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 131,
"end": 151,
"text": "(Jiang et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 390,
"end": 410,
"text": "(Jiang et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 583,
"end": 603,
"text": "(Kumar et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Comparison to Learning Schemes",
"sec_num": null
},
{
"text": "The synthetic dataset is created by applying an explicit noise transition matrix with 20% noise. Since the original dataset contains four classes, we start with an empty 4x4 matrix. The labels should not be confused most of the time so we assign a likelihood of 0.8 across the diagonal of the matrix. Next, we randomly select another class for each row to receive 0.1 likelihood of confusion. This leaves 0.1 for each row to be divided between the two remaining classes, which receive 0.05 each. For each example in the oracle dataset, we use the original label to select a single row from the constructed noise transition matrix. Lastly, we are able to sample a new label according to the weights provided by this 4-D vector. In contrast, the original sampling procedure obtained its weights according to the normalized label distribution provided by the annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Synthetic Data Generation",
"sec_num": null
},
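The recipe above can be sketched directly. The seeded random choice of the confused class and the helper names are our own illustrative assumptions:

```python
import random

def build_noise_matrix(num_classes=4, keep=0.8, confuse=0.1, seed=0):
    """Row-stochastic noise transition matrix: `keep` on the diagonal, `confuse`
    to one randomly chosen other class, and the leftover mass split evenly among
    the remaining classes (0.05 each in the 4-class case described above)."""
    rng = random.Random(seed)
    matrix = []
    for true_cls in range(num_classes):
        row = [0.0] * num_classes
        row[true_cls] = keep
        others = [c for c in range(num_classes) if c != true_cls]
        confused = rng.choice(others)
        row[confused] = confuse
        leftover = (1.0 - keep - confuse) / (num_classes - 2)
        for c in others:
            if c != confused:
                row[c] = leftover
        matrix.append(row)
    return matrix

def sample_noisy_label(gold_label, matrix, rng=random):
    """Sample a (possibly corrupted) label from the row selected by the gold label."""
    weights = matrix[gold_label]
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

matrix = build_noise_matrix()
print([round(sum(row), 6) for row in matrix])  # [1.0, 1.0, 1.0, 1.0]
```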
{
"text": "To prepare the GoEmotions dataset, we filter the raw data to include only examples that have at least three annotators and a clear majority vote (used for determining the gold label). We then cross-reference this against the proposed data splits offered by the authors which have high interannotator agreement. From this joint pool of examples, we sample 12k training examples to match the setting of all our other experiments. This results in 166 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.2 GoEmotions Preprocessing",
"sec_num": null
},
{
"text": "Our proposed methods are limited to studying noise which comes from human annotators acting in good faith. Other sources of label noise include errors which occur due to spammers, distant supervision (as commonly seen in Named Entity Recognition), and/or pseudo-labels from bootstrapping. Within interactive settings, such as for dialogue systems, models may also encounter noisy user inputs such as out-of-domain requests or ambiguous queries. Our methods would not work well in those regimes either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Limitations",
"sec_num": null
},
{
"text": "Training a large language model (such as RoBERTa-Large) until convergence can easily take a day or longer. Doing so each time for 12k annotations would take 30+ years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our annotation amount is much less than total available data for a task so our results are not directly comparable to prior work. For example, DynaSent train set includes 94,459 examples and Social Bias Frames contains 43,448 examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to sincerely thank the reviewers for their attention to detail when reading through the paper. Their insightful questions and advice have noticeably improved the final manuscript. We would also like to thank ASAPP for sponsoring the costs of this project. Finally, we would like to acknowledge the helpful feedback from discussions with members of the Columbia Dialogue Lab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 1120-1130. The Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Why shouldn't he be? He doesn't actually want to be that way. : Natural language inference examples that each method identified as being most likely to be label errors. Sentences were truncated in some cases for brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Premise Hypothesis Label",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised label noise modeling and loss correction",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Arazo",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Ortego",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Albert",
"suffix": ""
},
{
"first": "Noel",
"middle": [
"E"
],
"last": "O'connor",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Mcguinness",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "97",
"issue": "",
"pages": "312--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, and Kevin McGuinness. 2019. Unsuper- vised label noise modeling and loss correction. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Pro- ceedings of Machine Learning Research, pages 312- 321. PMLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A closer look at memorization in deep networks",
"authors": [
{
"first": "Tegan",
"middle": [],
"last": "Kanwal",
"suffix": ""
},
{
"first": "Asja",
"middle": [],
"last": "Maharaj",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Fischer",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanwal, Tegan Maharaj, Asja Fischer, Aaron C. Courville, Yoshua Bengio, and Simon Lacoste- Julien. 2017. A closer look at memorization in deep networks. In Proceedings of the 34th Inter- national Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Re- search, pages 233-242. PMLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep batch active learning by diverse, uncertain gradient lower bounds",
"authors": [
{
"first": "Jordan",
"middle": [
"T"
],
"last": "Ash",
"suffix": ""
},
{
"first": "Chicheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Alekh",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations, ICLR 2020, Addis Ababa",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan T. Ash, Chicheng Zhang, Akshay Krishna- murthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gra- dient lower bounds. In 8th International Confer- ence on Learning Representations, ICLR 2020, Ad- dis Ababa, Ethiopia, April 26-30, 2020. OpenRe- view.net.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-train or annotate? domain adaptation with a constrained budget",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2021,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan Bai, Alan Ritter, and Wei Xu. 2021. Pre-train or annotate? domain adaptation with a constrained bud- get. ArXiv, abs/2109.04711.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "382",
"issue": "",
"pages": "41--48",
"other_ids": {
"DOI": [
"10.1145/1553374.1553380"
]
},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Con- ference on Machine Learning, ICML 2009, Mon- treal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pages 41-48. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/d15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- ence. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632-642. The Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Uncertain natural language inference",
"authors": [
{
"first": "Tongfei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhengping",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "8772--8779",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.774"
]
},
"num": null,
"urls": [],
"raw_text": "Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Uncer- tain natural language inference. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8772-8779. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Maximum likelihood estimation of observer errorrates using the EM algorithm",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Philip Dawid",
"suffix": ""
},
{
"first": "Allan M",
"middle": [],
"last": "Skene",
"suffix": ""
}
],
"year": 1979,
"venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics)",
"volume": "28",
"issue": "1",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer error- rates using the EM algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20-28.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "GoEmotions: A Dataset of Fine-Grained Emotions",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "Jeongwoo",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cowen",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "4040--4054",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.372"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeong- woo Ko, Alan Cowen, Gaurav Nemade, and Su- jith Ravi. 2020. GoEmotions: A Dataset of Fine- Grained Emotions. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4040-4054. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semantic annotation aggregation with conditional crowdsourcing models and word embeddings",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Felt",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"K"
],
"last": "Ringger",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"D"
],
"last": "Seppi",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "1787--1796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Felt, Eric K. Ringger, and Kevin D. Seppi. 2016. Semantic annotation aggregation with conditional crowdsourcing models and word embeddings. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 1787-1796. ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Robust loss functions under label noise for deep neural networks",
"authors": [
{
"first": "Aritra",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Himanshu",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Sastry",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9",
"volume": "",
"issue": "",
"pages": "1919--1925",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aritra Ghosh, Himanshu Kumar, and P. S. Sastry. 2017. Robust loss functions under label noise for deep neural networks. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 1919-1925. AAAI Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Training deep neural-networks using a noise adaptation layer",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Ben-Reuven",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Goldberger and Ehud Ben-Reuven. 2017. Training deep neural-networks using a noise adaptation layer. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Who said what: Modeling individual labelers improves classification",
"authors": [
{
"first": "Melody",
"middle": [
"Y"
],
"last": "Guan",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gulshan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "3109--3118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melody Y. Guan, Varun Gulshan, Andrew M. Dai, and Geoffrey E. Hinton. 2018. Who said what: Modeling individual labelers improves classification. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3109-3118. AAAI Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Quanming",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xingrui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Miao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ivor",
"middle": [
"W"
],
"last": "Tsang",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8536--8546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W. Tsang, and Masashi Sugiyama. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, pages 8536-8546.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Theory of disagreement-based active learning",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Hanneke",
"suffix": ""
}
],
"year": 2014,
"venue": "Found. Trends Mach. Learn",
"volume": "7",
"issue": "2-3",
"pages": "131--309",
"other_ids": {
"DOI": [
"10.1561/2200000037"
]
},
"num": null,
"urls": [],
"raw_text": "Steve Hanneke. 2014. Theory of disagreement-based active learning. Found. Trends Mach. Learn., 7(2-3):131-309.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "DeBERTa: Decoding-enhanced BERT with disentangled attention",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "9th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning whom to trust with MACE",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H. Hovy. 2013. Learning whom to trust with MACE. In Human Language Technologies:",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Self-paced learning with diversity",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Shoou-I",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zhen-Zhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Shiguang",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014",
"volume": "",
"issue": "",
"pages": "2078--2086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Deyu Meng, Shoou-I Yu, Zhen-Zhong Lan, Shiguang Shan, and Alexander G. Hauptmann. 2014. Self-paced learning with diversity. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2078-2086.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Self-paced curriculum learning",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shiguang",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2694--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G. Hauptmann. 2015. Self-paced curriculum learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2694-2700. AAAI Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "MentorNet: Learning datadriven curriculum for very deep neural networks on corrupted labels",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengyuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "2309--2318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2309-2318. PMLR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An effective label noise model for DNN text classification",
"authors": [
{
"first": "Ishan",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Pressel",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Lester",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Nokleby",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "3246--3256",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1328"
]
},
"num": null,
"urls": [],
"raw_text": "Ishan Jindal, Daniel Pressel, Brian Lester, and Matthew S. Nokleby. 2019. An effective label noise model for DNN text classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3246-3256. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {
"DOI": [
"10.18653/v1/e17-2068"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tom\u00e1s Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 427-431. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning from noisy singlylabeled data",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Khetan",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Animashree",
"middle": [],
"last": "Anandkumar",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Khetan, Zachary C. Lipton, and Animashree Anandkumar. 2018. Learning from noisy singly-labeled data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Achieving budget-optimality with adaptive schemes in crowdsourcing",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Khetan",
"suffix": ""
},
{
"first": "Sewoong",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4844--4852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Khetan and Sewoong Oh. 2016. Achieving budget-optimality with adaptive schemes in crowdsourcing. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4844-4852.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Self-paced learning for latent variable models",
"authors": [
{
"first": "M",
"middle": [
"Pawan"
],
"last": "Kumar",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Packer",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9",
"volume": "",
"issue": "",
"pages": "1189--1197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 1189-1197. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "CleanNet: Transfer learning for scalable image classifier training with label noise",
"authors": [
{
"first": "Kuang-Huei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "5447--5456",
"other_ids": {
"DOI": [
"10.1109/CVPR.2018.00571"
]
},
"num": null,
"urls": [],
"raw_text": "Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. 2018. CleanNet: Transfer learning for scalable image classifier training with label noise. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 5447-5456. Computer Vision Foundation / IEEE Computer Society.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "DivideMix: Learning with noisy labels as semi-supervised learning",
"authors": [
{
"first": "Junnan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"C H"
],
"last": "Hoi",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junnan Li, Richard Socher, and Steven C. H. Hoi. 2020. DivideMix: Learning with noisy labels as semi-supervised learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Re-active learning: Active learning with relabeling",
"authors": [
{
"first": "Christopher",
"middle": [
"H"
],
"last": "Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1845--1852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher H. Lin, Mausam, and Daniel S. Weld. 2016. Re-active learning: Active learning with relabeling. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 1845-1852. AAAI Press.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Practical obstacles to deploying active learning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lowell",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1003"
]
},
"num": null,
"urls": [],
"raw_text": "David Lowell, Zachary C. Lipton, and Byron C. Wallace. 2019. Practical obstacles to deploying active learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 21-30. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning to label aerial images from noisy data",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 29th International Conference on Machine Learning, ICML 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Mnih and Geoffrey E. Hinton. 2012. Learning to label aerial images from noisy data. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Making deep neural networks robust to label noise: A loss correction approach",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Patrini",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Rozza",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Nock",
"suffix": ""
},
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "2233--2241",
"other_ids": {
"DOI": [
"10.1109/CVPR.2017.240"
]
},
"num": null,
"urls": [],
"raw_text": "Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. 2017. Making deep neural networks robust to label noise: A loss correction approach. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2233-2241. IEEE Computer Society.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Inherent disagreements in human textual inferences",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Trans. Assoc. Comput. Linguistics, 7:677-694.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Identifying mislabeled data using the area under the margin ranking",
"authors": [
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ethan",
"middle": [
"R"
],
"last": "Elenberg",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, and Kilian Q. Weinberger. 2020. Identifying mislabeled data using the area under the margin ranking. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "DynaSent: A dynamic benchmark for sentiment analysis",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Atticus",
"middle": [],
"last": "Geiger",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021",
"volume": "1",
"issue": "",
"pages": "2388--2404",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.186"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A dynamic benchmark for sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2388-2404. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Supervised learning from multiple experts: whom to trust when everyone lies a bit",
"authors": [
{
"first": "Vikas",
"middle": [
"C"
],
"last": "Raykar",
"suffix": ""
},
{
"first": "Shipeng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Linda",
"middle": [
"H"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"K"
],
"last": "Jerebko",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Florin",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [
"Hermosillo"
],
"last": "Valadez",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Bogoni",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Moy",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "382",
"issue": "",
"pages": "889--896",
"other_ids": {
"DOI": [
"10.1145/1553374.1553488"
]
},
"num": null,
"urls": [],
"raw_text": "Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Anna K. Jerebko, Charles Florin, Gerardo Hermosillo Valadez, Luca Bogoni, and Linda Moy. 2009. Supervised learning from multiple experts: whom to trust when everyone lies a bit. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pages 889-896. ACM.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Training deep neural networks on noisy labels with bootstrapping",
"authors": [
{
"first": "Scott",
"middle": [
"E"
],
"last": "Reed",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Anguelov",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Rabinovich",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2015. Training deep neural networks on noisy labels with bootstrapping. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Deep learning from crowds",
"authors": [
{
"first": "Filipe",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"C"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "1611--1618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filipe Rodrigues and Francisco C. Pereira. 2018. Deep learning from crowds. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1611-1618. AAAI Press.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"S"
],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "Int. Journal of Computer Vision",
"volume": "115",
"issue": "3",
"pages": "211--252",
"other_ids": {
"DOI": [
"10.1007/s11263-015-0816-y"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. Imagenet large scale visual recognition challenge. Int. Journal of Computer Vision, 115(3):211-252.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "ATOMIC: an atlas of machine commonsense for if-then reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "3027--3035",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33013027"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. ATOMIC: an atlas of machine commonsense for if-then reasoning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3027-3035. AAAI Press.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Social bias frames: Reasoning about social and power implications of language",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "5477--5490",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.486"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Ju- rafsky, Noah A. Smith, and Yejin Choi. 2020. So- cial bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5477-5490. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Social iqa: Commonsense reasoning about social interactions",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Ronan Le Bras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4462--4472",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Social iqa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4462- 4472. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Analysis of automatic annotation suggestions for hard discourse-level tasks in expert domains",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kiesewetter",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sailer",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"R"
],
"last": "Fischer",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2761--2772",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1265"
]
},
"num": null,
"urls": [],
"raw_text": "Claudia Schulz, Christian M. Meyer, Jan Kiesewetter, Michael Sailer, Elisabeth Bauer, Martin R. Fischer, Frank Fischer, and Iryna Gurevych. 2019. Anal- ysis of automatic annotation suggestions for hard discourse-level tasks in expert domains. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2761-2772, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Active learning for convolutional neural networks: A core-set approach",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "Sener",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Savarese",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozan Sener and Silvio Savarese. 2018. Active learn- ing for convolutional neural networks: A core-set approach. In 6th International Conference on Learn- ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1467--1478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2011. Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In Proceedings of the 2011 Confer- ence on Empirical Methods in Natural Language Processing, pages 1467-1478, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "An analysis of active learning strategies for sequence labeling tasks",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1070--1079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of ac- tive learning strategies for sequence labeling tasks. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 1070-1079, Honolulu, Hawaii. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Get another label? improving data quality and data mining using multiple, noisy labelers",
"authors": [
{
"first": "S",
"middle": [],
"last": "Victor",
"suffix": ""
},
{
"first": "Foster",
"middle": [
"J"
],
"last": "Sheng",
"suffix": ""
},
{
"first": "Panagiotis",
"middle": [
"G"
],
"last": "Provost",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ipeirotis",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "614--622",
"other_ids": {
"DOI": [
"10.1145/1401890.1401965"
]
},
"num": null,
"urls": [],
"raw_text": "Victor S. Sheng, Foster J. Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? improving data quality and data mining using multiple, noisy label- ers. In Proceedings of the 14th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, Las Vegas, Nevada, USA, August 24- 27, 2008, pages 614-622. ACM.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "4077--4087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Advances in Neural Information Processing Sys- tems 30: Annual Conference on Neural Informa- tion Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4077-4087.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Learning syntactic patterns for automatic hypernym discovery",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1297--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. In Advances in Neural Information Pro- cessing Systems 17 [Neural Information Processing Systems, NIPS 2004, December 13-18, 2004, Van- couver, British Columbia, Canada], pages 1297- 1304.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "254--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25-27 October 2008, Honolulu, Hawaii, USA, A meeting of SIG- DAT, a Special Interest Group of the ACL, pages 254-263. ACL.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Training convolutional neural networks with noisy labels",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Paluri",
"suffix": ""
},
{
"first": "Lubomir",
"middle": [],
"last": "Bourdev",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2015. Training convolutional neural networks with noisy labels. In 3rd International Conference on Learning Represen- tations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Dataset cartography: Mapping and diagnosing datasets with training dynamics",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "9275--9293",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.746"
]
},
"num": null,
"urls": [],
"raw_text": "Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dy- namics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2020, Online, November 16-20, 2020, pages 9275-9293. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Joint optimization framework for learning with noisy labels",
"authors": [
{
"first": "Daiki",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Daiki",
"middle": [],
"last": "Ikami",
"suffix": ""
},
{
"first": "Toshihiko",
"middle": [],
"last": "Yamasaki",
"suffix": ""
},
{
"first": "Kiyoharu",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018",
"volume": "",
"issue": "",
"pages": "5552--5560",
"other_ids": {
"DOI": [
"10.1109/CVPR.2018.00582"
]
},
"num": null,
"urls": [],
"raw_text": "Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. 2018. Joint optimization frame- work for learning with noisy labels. In 2018 IEEE Conference on Computer Vision and Pattern Recog- nition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 5552-5560. Computer Vision Foundation / IEEE Computer Society.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Learning from noisy labels by regularized estimation of annotator confusion",
"authors": [
{
"first": "Ryutaro",
"middle": [],
"last": "Tanno",
"suffix": ""
},
{
"first": "Ardavan",
"middle": [],
"last": "Saeedi",
"suffix": ""
},
{
"first": "Swami",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"C"
],
"last": "Alexander",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Silberman",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "11244--11253",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.01150"
]
},
"num": null,
"urls": [],
"raw_text": "Ryutaro Tanno, Ardavan Saeedi, Swami Sankara- narayanan, Daniel C. Alexander, and Nathan Silber- man. 2019. Learning from noisy labels by regular- ized estimation of annotator confusion. In IEEE Conference on Computer Vision and Pattern Recog- nition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 11244-11253. Computer Vision Foundation / IEEE.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Model-agnostic methods for text classification with inherent noise",
"authors": [
{
"first": "Kshitij",
"middle": [],
"last": "Tayal",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Vipin",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020 -Industry Track",
"volume": "",
"issue": "",
"pages": "202--213",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-industry.19"
]
},
"num": null,
"urls": [],
"raw_text": "Kshitij Tayal, Rahul Ghosh, and Vipin Kumar. 2020. Model-agnostic methods for text classification with inherent noise. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, COLING 2020 -Industry Track, Online, December 12, 2020, pages 202-213. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Support vector machine active learning with applications to text classification",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2001,
"venue": "J. Mach. Learn. Res",
"volume": "2",
"issue": "",
"pages": "45--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Tong and Daphne Koller. 2001. Support vec- tor machine active learning with applications to text classification. J. Mach. Learn. Res., 2:45-66.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP at ACL 2017",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {
"DOI": [
"10.18653/v1/w17-2623"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. NewsQA: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, Rep4NLP at ACL 2017, Vancouver, Canada, August 3, 2017, pages 191-200. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Learning with noisy labels for sentence-level sentiment classification",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chaozhuo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tianrui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019",
"volume": "",
"issue": "",
"pages": "6285--6291",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1655"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Wang, Bing Liu, Chaozhuo Li, Yan Yang, and Tianrui Li. 2019. Learning with noisy labels for sentence-level sentiment classification. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 6285-6291. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "The multidimensional wisdom of crowds",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Welinder",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Branson",
"suffix": ""
},
{
"first": "Serge",
"middle": [
"J"
],
"last": "Belongie",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2424--2432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Welinder, Steve Branson, Serge J. Belongie, and Pietro Perona. 2010. The multidimensional wisdom of crowds. In Advances in Neural Information Pro- cessing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Pro- ceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 2424- 2432. Curran Associates, Inc.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Whitehill",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Ruvolo",
"suffix": ""
},
{
"first": "Tingfan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Javier",
"middle": [
"R"
],
"last": "Movellan",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2035--2043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier R. Movellan. 2009. Whose vote should count more: Optimal integration of la- bels from labelers of unknown expertise. In Ad- vances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meet- ing held 7-10 December 2009, Vancouver, British Columbia, Canada, pages 2035-2043. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "How does disagreement help generalization against label corruption?",
"authors": [
{
"first": "Xingrui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jiangchao",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Ivor",
"middle": [
"W"
],
"last": "Tsang",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "",
"issue": "",
"pages": "7164--7173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, and Masashi Sugiyama. 2019. How does disagreement help generalization against label cor- ruption? In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9- 15 June 2019, Long Beach, California, USA, vol- ume 97 of Proceedings of Machine Learning Re- search, pages 7164-7173. PMLR.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization",
"authors": [
{
"first": "Chiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiyuan Zhang, Samy Bengio, Moritz Hardt, Ben- jamin Recht, and Oriol Vinyals. 2017. Understand- ing deep learning requires rethinking generalization.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "5th International Conference on Learning Representations",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In 5th International Conference on Learning Rep- resentations, ICLR 2017, Toulon, France, April 24- 26, 2017, Conference Track Proceedings. OpenRe- view.net.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Learning with different amounts of annotation: From zero to many labels",
"authors": [
{
"first": "Shujian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengyue",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "2021",
"issue": "",
"pages": "7--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shujian Zhang, Chengyue Gong, and Eunsol Choi. 2021. Learning with different amounts of annota- tion: From zero to many labels. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Vir- tual Event / Punta Cana, Dominican Republic, 7-11",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Generalized cross entropy loss for training deep neural networks with noisy labels",
"authors": [
{
"first": "Zhilu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mert",
"middle": [
"R"
],
"last": "Sabuncu",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8792--8802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilu Zhang and Mert R. Sabuncu. 2018. General- ized cross entropy loss for training deep neural net- works with noisy labels. In Advances in Neural Information Processing Systems 31: Annual Con- ference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Mon- tr\u00e9al, Canada, pages 8792-8802.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "Varying the number of training examples changes the amount of budget remaining for cleaning. 10,000 examples is set as the default and the percent change is measured in comparison to this point.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "(a) Jaccard similarity on Social Bias Frames (b) Jaccard similarity on MNLI datasetFigure 3: Jaccard similarity overlap for all pairs of targeted relabeling methods on the offensive language detection task and the natural language inference task. 12000/2954/2946 examples for train, development and test splits respectively.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">Methods Offense NLI Sentiment QA</td></tr><tr><td>Default Random Cross</td><td>81.6 80.9 81.7</td><td>48.9 48.0 48.4</td><td>57.4 55.8 57.3</td><td>6.95 6.41 6.56</td></tr></table>",
"text": "Jaccard similarity for all pairs of targeted relabeling methods on the sentiment analysis task. Large, Cart and Proto are short for Large Loss, Cartography and Prototype, respectively. Results for other tasks available in Appendix A.",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Ablation results that vary the method of identifying errors for relabeling. Default uses the same model for error selection and training.",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "This table contains results for the three different post-hoc analyses. Left column is precision of the model in identifying mislabeled examples. Right columns are results training on extended datasets. All scores are average of three seeds on DistillRoBERTa.",
"html": null
}
}
}
}