{
"File Number": "1056",
"Title": "Characterizing Datapoints via Second-Split Forgetting",
"Limitation": "Limitations One limitation of the proposed metric is that it is brittle to the choice of the learning rate for the second-split training. If we use a very small learning rate, then overparametrized deep models are capable of learning the new dataset without forgetting examples from the first split. Alternately, if we use a very large learning rate, the model may diverge and undergo catastrophic forgetting. However, under ‘reasonable’ choices of learning rate (like that for first-split training), we find SSFT is robust. We provide a detailed anaylsis of the same in Appendix C.1.",
"Reviewer Comment": "Reviewer_3: Strengths:\nA new metric for characterizing the example hardness.\nStudied three types of hard examples and verify the effectiveness of the new metric with comprehensive experimental studies.\nWell-written paper.\nWeakness:\nThe theoretical results are derived from a simplified setting.\nQuestions:\nThis paper proposes a new and interesting metric and the empirical evaluation is comprehensive. Overall, I think this is a solid paper.\nLimitations:\nThe authors have adequately addressed the limitations and potential negative societal impact of their work\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 4 excellent\nContribution: 4 excellent\n\nReviewer_4: The paper is a strong paper, and I am quite fond of it. It has implications for the communities which makes specific use of hard examples, noisy labels and rare classes, and more broadly the community interested in learning dynamics. I do have a number of question and concerns which I've listed in the questions section.\nStrengths\nApproach is simple yet effective.\nWell written paper.\nExperiments on a large array of tasks and domains.\nStrong ablations to understand how, why and where this approach works.\nClear results.\nSections 4.5 and 4.6 are appreciated; drawing attention to utility of the method(s) and failure modes. This will be useful for practitioners using this. Limitations of the approach are clearly described in failure modes.\nWeaknesses\nThe approach requires a separate hold-out set to train on to describe the dynamics.\nRetraining is computation and time consuming.\nThe description of the synthetic dataset in 4.2 could be more readable, clear and precise.\nOriginality Novel approach to classifying examples.\nQuality: Theoretically grounded paper, with good intuitions and strong results.\nClarity: Well written paper. It provides a good section of related work, ablations and explanations on method. Figures were clear and useful.\nSignificance: Paper builds on the work of the likes of Toneva et al. classifying hard examples, and has implications for communities interested in e.g. curriculum learning.\nQuestions:\nIt seems quite natural that neurons firing for classifying a rare class would not be overwritten, given that they are rarely seen. Whereas mislabelled examples (likely more common occurring classes) would overwrite. So the training curves as they are seem intuitively correct. Any comment on this statement? Is there something more complex going on?\nSeems like it would be useful to have a number of synthetic examples to test the above? Say make classes (set a few to be mislabelled only) that are mislabelled as rare as the rare classes.\nLimitations:\nThe authors have addressed the limitations of their approach, however not in the relation to societal impact. This does not seem necessary here. Although perhaps a statement on how this approach might impact minority groups (classes).\nEthics Flag: No\nSoundness: 3 good\nPresentation: 4 excellent\nContribution: 4 excellent\n\nReviewer_5: Strengths\n1- A new method that can offer insight into datasets and can be useful in identifying and correcting noisy examples.\n2- The paper is well-executed and is a pleasure to read.\n3- The authors provide precise mathematical definitions in a simplified setting for the concepts they discuss in the paper; e.g. \"rare\", \"noisy\", and \"complex\" examples. However, these are not general definitions; they are specific to the construction they study analytically (i.e. 
separable examples with a linear decision boundary).\nWeaknesses\n1- The experiments are conducted on a few datasets, even though it should be straightforward to conduct some of the experiments on a larger set of datasets, e.g. for the identification of label noise in Table 2. In particular, the authors used CIFAR-100 in other experiments but did not report CIFAR-100 results in Table 2. Since the claims made in the paper are empirical, the experiments should cover many datasets.\n2- The authors claim that SSFT can be used to improve generalization by identifying and removing noisy examples. But, in order for this to be indeed a useful application, the comparison should be against training the model on the entire data. In SSFT, the data is split into two subsets, and I am not sure whether the authors compare the impact on generalization by training on the entire set before and after removing the noisy examples. Please see my question below.\n3- As the authors acknowledge in their work, SSFT is sensitive to the choice of the learning rate. The authors claim that for \"reasonable\" choices SSFT is robust, but there is no evidence of this in the paper. At minimum, the authors could vary the learning rate and report the Pearson correlations (since they use it later to measure stability) in a 2D heat-map with the learning rate on the x and y axes.\n4- In some places, the mathematical notation is either imprecise or unclear (to me at least). For example, in Equation 3, P(x ∼ X_g) should be P(x ∈ X_g). In Equation 5, ρ(t) is undefined. Also, when mentioning O(1) in Definition 5.1, this means that the number of rare examples is bounded by a constant as we vary another parameter. What is the other parameter? Is it the number of mixtures N or the total number of training examples (in which case O(1) can be a function of N)?\n5- The experiments are done on a shallow architecture (ResNet-9). I think this is a big weakness of the work. At minimum, ResNet-18 should be used, and (naturally) even deeper architectures would be preferable.\nMinor comments\nI think the authors should use the \"number of seen examples\" instead of the number of epochs because the dataset can be quite large (near-infinite data regime) in which a single epoch is used. This is especially important here because the model is fine-tuned, so one would expect it to converge quickly.\nIn line 164, the condition i ≠ j should be added to the statement I_i ∩ I_j = ∅.\nQuestions:\n1- When comparing the improvement in generalization after removing noisy examples, do you train the model on the entire set of data (both sets used in SSFT joined together) or one subset only? Can you please specify exactly how the comparison is done?\n2- Can you please generate a figure that shows that SSFT is indeed robust to changes in the learning rate? Please see my comment above.\n3- What is ρ(t) in Equation 5?\n4- ResNet-9 is a small architecture. I believe ResNet-18 should be used at minimum since it's a common baseline model in the literature. Is there a reason deeper architectures are not used?\n5- I find it hard to understand why \"complex examples\" are defined to be those that have a high signal-to-noise ratio. I would expect the opposite. Is this a typo? If not, can you please explain the intuition behind this definition?\nLimitations:\nPlease see my comments above for the limitations of the work.
I don't see any potential negative societal impact.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 4 excellent\nContribution: 3 good\n\nReviewer_6: I particularly like this work, because the metrics introduced are simple (anyone familiar with some deep learning will understand them quickly) and are shown to be effective through an extensive set of experiments. In my opinion, these types of simple metrics are easy to add to deep learning pipelines and can help investigate model behavior and dataset behavior, and guide the improvement of model training.\nMoreover, the paper is extremely well-written in my opinion, with very few spelling mistakes, good formalism, and good organisation. The narrative also reads nicely.\nThe only experiment/addition that I could imagine which might be useful is the following: since overparametrized deep models might not exhibit such nice curves (due to the effect mentioned in lines 228-292), it would be interesting to see if shallower architectures can be used to improve generalization for deeper models. For example, maybe repeat a similar type of experiment as in Figure 3, but where the data that is removed is chosen based on a shallower architecture, and the accuracy is reported on deeper architectures. However, this comment is not highly critical, so I don't encourage the authors to do this experiment during the rebuttal if there is no time.\nAdditional experiments could possibly be done on larger models to show the discrepancy with smaller models. However, I don't consider this a major weakness.\nMinor comments:\nThere is a space in front of the colon on line 109.\nOn line 164, should I_i ∪ I_j = ϕ also have the condition that i ≠ j?\nAnother suggestion is to increase the font inside the figures slightly, as they are a bit difficult to read.\nQuestions:\nA question I have regarding rare examples: what is the added benefit of using FSLT to find rare examples? Isn't it easy to just see how many datapoints there are per class?\nLimitations:\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 4 excellent\nContribution: 3 good",
"abstractText": "Researchers investigating example hardness have increasingly focused on the dynamics by which neural networks learn and forget examples throughout training. Popular metrics derived from these dynamics include (i) the epoch at which examples are first correctly classified; (ii) the number of times their predictions flip during training; and (iii) whether their prediction flips if they are held out. However, these metrics do not distinguish among examples that are hard for distinct reasons, such as membership in a rare subpopulation, being mislabeled, or belonging to a complex subpopulation. In this paper, we propose second-split forgetting time (SSFT), a complementary metric that tracks the epoch (if any) after which an original training example is forgotten as the network is fine-tuned on a randomly held out partition of the data. Across multiple benchmark datasets and modalities, we demonstrate that mislabeled examples are forgotten quickly, and seemingly rare examples are forgotten comparatively slowly. By contrast, metrics only considering the first split learning dynamics struggle to differentiate the two. At large learning rates, SSFT tends to be robust across architectures, optimizers, and random seeds. From a practical standpoint, the SSFT can (i) help to identify mislabeled samples, the removal of which improves generalization; and (ii) provide insights about failure modes. Through theoretical analysis addressing overparameterized linear models, we provide insights into how the observed phenomena may arise.1",
"1 Introduction": "A growing literature has investigated metrics for characterizing the difficulty of training examples, driven by such diverse motivations as (i) deriving insights for how to reconcile the ability of deep neural networks to generalize [30] with their ability to memorize noise [15, 48]; (ii) identifying potentially mislabeled examples; and (iii) identifying notably challenging or rare sub-populations of examples. Some of these efforts have turned towards learning dynamics, with researchers noting that neural networks tend to learn cleanly labeled examples before mislabeled examples [17, 18, 33], and more generally tend to learn simpler patterns sooner—for several intuitive notions of simplicity [19, 35, 43]. Broadly, works in this area tend to characterize examples as belonging either to prototypical groups or memorized exceptions [7, 16, 25]. Adapting these intuitions to real datasets, Feldman [15] propose rating the degree to which an example is memorized based on whether its predicted class flips when it is excluded from the training set. These, and other works [8, 21, 35, 43, 47] have proposed many metrics for characterizing example difficulty with Carlini et al. [7] comparing five such metrics. However, while many of these works distinguish some notion of easy versus hard samples, they seldom (i) offer tools for distinguishing among different types of hard examples; (ii) explain theoretically why these metrics might be useful for distinguishing easy versus hard samples. Moreover, existing metrics tend to give similar scores to examples that are difficult for distinct reasons, e.g, membership in rare, complex, or mislabeled sub-populations.\n1Code for reproducing our experiments can be found at https://github.com/pratyushmaini/ssft.\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nar X\niv :2\n21 0.\n15 03\n1v 1\n[c s.L\nG ]\n26 O\nct 2\n02 2\nIn this paper, we propose to additionally consider a new metric, Second-Split Forgetting Time (SSFT), calculated based on the forgetting dynamics that arise as training examples are forgotten when a neural network continues to train on a second, randomly held out data partition. SSFT is defined as the fine-tuning epoch after which a first-split training example is no longer classified correctly. We find that SSFT identifies mislabeled examples remarkably well but does little to separate out under- versus over-represented subpopulations. Conversely, metrics based on the (first-split) training dynamics are more discriminative for separating these populations but less useful for detecting mislabeled examples. We leverage the complementarity of first- and second-split metrics, showing that by jointly visualizing the two, we can produce a richer characterization of the training examples.\nIn our experiments, we operationalize several notions of hard examples, namely: (i) mislabeled examples, for which the original label has been flipped to a randomly chosen incorrect label; (ii) rare examples, which belong to underrepresented subpopulations; and (iii) complex examples, which belong to subpopulations for which the classification task is more difficult (details in Section 3.2). We perform specific ablation studies with datasets complicated by just one type of hard example (Section 4.3), and show how SSFT can help to distinguish among these categories of examples. 
We observe that during second-split training, neural networks (i) first forget mislabeled examples from the first split; (ii) only slowly begin to forget rare examples (e.g., from underrepresented subpopulations) unique to the first training set; and (iii) do not forget complex examples.\nThis separation of hard example types has multiple practical applications. First, we can use the method to identify noisy labels: on CIFAR-10 with 10% added class noise, SSFT achieves 0.94 AUC for identifying mislabeled samples, while the first-split metrics range in AUC from 0.58 to 0.90. Second, the method can also help improve generalization in noisy data settings: while the removal of hard examples according to first-split learning time degrades the performance of the classifier, the removal of hard examples according to SSFT can actually improve generalization. This is especially beneficial when, e.g., training on synthetic data (produced by a generative model) or mislabeled data. Third, we show how SSFT can identify failure modes of machine learning models. For example, in a simplified task classifying between horses and airplanes in the CIFAR-10 dataset, we find that training examples containing horses with sky backgrounds and airplanes with green backgrounds are among the earliest forgotten—indicating that the model relies on the background as a spurious feature. Last, we also note that our metric is robust across multiple seeds, and the earliest forgotten examples are robust across architectures. Across multiple optimizers, SSFT distinguishes mislabeled samples, whereas first-split metrics appear more sensitive to the choice of optimizer.\nFinally, we investigate second-split dynamics theoretically, analyzing overparametrized linear models [46]. We introduce notions of mislabeled, rare, and complex examples appropriate to this toy model. Our analysis shows that mislabeled examples from the first split are forgotten quickly during second-split training whereas rare examples are not. However, as we train for a long time, rare examples from the first split are eventually forgotten as the model converges to the minimum-norm solution on the second split, while predictions on complex examples remain accurate with high probability.",
"2 Related Work": "Example Hardness. Several recent works quantify example hardness with various training-time metrics. Many of these metrics are based on first-split learning dynamics [8, 25, 27, 35, 43]. Other works have resorted to properties of deep networks such as compression ability [21] and prediction depth [5]. Carlini et al. [7] study metrics centered around model training such as confidence, ensemble agreement, adversarial robustness, holdout retraining, and accuracy under privacy-preserving training. Closest in spirit to the SSFT studied in our paper are efforts in [7, 47]. Crucially, Carlini et al. [7] study the KL divergence of the prediction vector after fine-tuning on a held-out set at a low learning rate, and do not draw any direct inference of the separation offered by their metric. Focusing on (firstsplit) forgetting dynamics, Toneva et al. [47] defined a metric based on the number of forgetting events during training and identified sets of unforgettable examples that are never misclassified once learned. In our work, we find complementary benefits of analysis based on first- and second-split dynamics.\nMemorization of Data Points. In order to capture the memorization ability of deep networks, their ability to memorize noise (or randomly labeled samples) has been studied in recent work [3, 48]. As opposed to the memorization of rare examples, the memorization of noisy samples hurts generalization and makes the classifier boundary more complex [15]. On the contrary, a recent line of works has argued how memorization of (atypical) data points is important for achieving optimal generalization performance when data is sampled from long-tailed distributions [6, 11, 15].\nSimplicity Bias. Another line of work argues that neural networks have a bias toward learning simple features [43], and often do not learn complex features even when the complex feature is more predictive of the true label than the simple features. This suggests that models end up memorizing (through noise) the few samples in the dataset that contain the complex feature alone, and utilize the simple feature for correctly predicting the other training examples [1, 32].\nLabel Noise. Large-scale machine learning datasets are typically labeled with the help of human labelers [12] to facilitate supervised learning. It has been shown that a significant fraction of these labels are erroneous in common machine learning datasets [39]. Learning under noisy labels is a long-studied problem [2, 26, 31, 37]. Various recent methods have also attempted to identify label noise [10, 23, 38, 40]. While the focus of our work is not to propose a new method in this long line of work, we show that the view of forgetting time naturally distills out examples with noisy labels. Future work may benefit by augmenting our metric with SOTA methods for label noise identification.",
"3 Method": "The primary goal of our work is to characterize the hardness of different datapoints in a given dataset. Suppose we have a dataset SA = {xi,yi}n such that (xi,yi) ⇠ D. For the purpose of characterization, we augment each datapoint (xi,yi) 2 SA with parameters (fslti, ssfti) where fslti quantifies the first-split learning time (FSLT), and ssfti quantifies the second-split forgetting time (SSFT) of the sample. To obtain these parameters, we next describe our proposed procedure. Procedure We train a model f on S to minimize the empirical risk: L(S; f) = P\ni `(f(xi),yi). We use fA to denote a model f (initialized with random weights) trained on SA until convergence (100% accuracy on SA). We then train a model initialized with fA on a held-out split SB ⇠ Dn until convergence. We denote this model with fA!B . To obtain parameters (fslti, ssfti), we track perexample predictions (ŷti) at the end of every epoch (tth) of training. Unless specified otherwise, we train the model with cross-entropy loss using Stochastic Gradient Descent (SGD).\nDefinition 1 (First-Split Learning Time). For {xi,yi} 2 SA, learning time is defined as the earliest epoch during the training of a classifier f on SA after which it is always classified correctly, i.e.,\nfslti = argmin t⇤\n(ŷti,(A) = yi 8t t ⇤) 8{xi,yi} 2 SA. (1)\nDefinition 2 (Second-Split Forgetting Time). Let ŷti,(A!B) to denote the prediction of sample {xi,yi} 2 SA after training f(A!B) for t epochs on SB . Then, for {xi,yi} 2 SA forgetting time is defined as the earliest epoch after which it is never classified correctly, i.e.,\nssfti = argmin t⇤\n(ŷti,(A!B) 6= yi 8t t ⇤) 8{xi,yi} 2 SA. (2)",
"3.1 Baseline Methods": "We provide a brief description of metrics for example hardness considered in recent comparisons [25].\nNumber of Forgetting Events: (nf ). An example (xi,yi) 2 S undergoes a forgetting event when the accuracy on the example decreases between two consecutive updates. Toneva et al. [47] analyzed the total number of such events nf during the training of a neural network to identify hard examples.\nCumulative Learning Accuracy: (accl). Jiang et al. [25] suggest that rather than using the learning time (Definition 1), using the number of epochs during training when a machine learning model correctly classifies a given sample is a more stable metric for predicting example hardness.\nCumulative Learning Confidence: (confl). Similar to accl, confl measures the cumulative softmax confidence of the model towards the correct class over the course of training.",
"3.2 Example Characterization": "We characterize example hardness via three sources of learning difficulty: (i) Mislabeled Examples: We refer to mislabeled examples as those datapoints whose label has been flipped to an incorrect label uniformly at random. (ii) Rare Examples: We assume that rare examples belong to subpopulations of the original distribution that have a low probability of occurrence. In particular, there exist O(1) examples from such sub-populations in a given dataset. In Section 4.3 we describe how we operationalize this notion in the case of the CIFAR-100 dataset. (iii) Complex Examples: These constitute samples that are drawn from sub-groups in the dataset that require either (1) a hypothesis class of high complexity; or (2) higher sample complexity to be learnt relative to examples from rest of the dataset. We leave the definition of complex samples mathematically imprecise, but with the same intuitive sense as in prior work [3, 43]. For instance, in a dataset composed of the union of MNIST and CIFAR-10 images, we would consider the subpopulation of CIFAR-10 images to be more complex.",
"4.1 Experimental Setup": "Datasets We show results on a variety of image classification datasets—MNIST [13], CIFAR10 [29], and Imagenette [22]. For experiments in the language domain, we use the SST-2 dataset [45]. For each of the datasets, we split the training set into two equal partitions (SA,SB). For experiments\nwith mislabeled examples, we simulate mislabeled examples by randomly selecting a subset of 10% examples from both the partitions and changing their label to an incorrect class.\nTraining Details Unless otherwise specified, we train a ResNet-9 model [4] using SGD optimizer with weight decay 5e-4 and momentum 0.9. We use the cyclic learning rate schedule [44] with a peak learning rate of 0.1 at the 10th epoch. We train for a maximum of 100 epochs or until we have 5 epochs of 100% training accuracy. We first train on SA, and then using the pre-initialized weights from stage 1, train on SB with the same learning parameters. All experiments can be performed on a single RTX2080 Ti. Complete hyperparameter details are available in Appendix B.1.",
"4.2 Learning-Forgetting Spectrum for various datasets": "Synthetic Dataset We consider data (x,y) sampled from a mixture of multiple distributions Dg , s.t. x 2 Rd. Dg denotes the gth group and has a sampling frequency of ⇡g . Each group Dg ⌘ (Xg, {yg}), i.e., the true label for all the samples drawn from a given group is the same, and the examples in each group are non-overlapping. Each group is parametrized by a set of k ⌧ d unique indices Ig ⇢ [d] such that Ii \\ Ij = for i 6= j. The discriminative characteristic of each group is the vector ug, such that, [ug]i = 1 if i 2 Ig else 0 8i 2 [d]. Then for any sample (x,y) 2 S:\nP (x 2 Xg) = ⇡g; x|Xg ⇠ N (0, 2Id) + µg.\nFor our simulation, we consider a 10 class-classification problem, with µg = 5 for typical groups, and µg = 4 for complex groups (higher signal to noise ratio). For any sample drawn from a rare group, we have O(1) samples from that group in the entire dataset (SA [ SB). Mislabeled samples are only generated from the majority typical groups. In Figure 2a, we show the rate of learning and forgetting of examples from each of these categories. We note that in the second-split training, the mislabeled examples are quickly forgotten, and the complex examples are never forgotten. The rare examples are forgotten slowly. In Section 5 we will theoretically justify the observations in the synthetic dataset and show that the rare examples are expected to be forgotten as we train for an infinite time.\nImage Domain In Figure 2b, we show representative examples in the four quadrants of the learningforgetting spectrum. More specifically, we find that the examples forgotten fastest and learned last are mislabeled. And the ones learned early and never forgotten once learned are characteristic simple examples of the MNIST dataset. Examples in the first and third quadrant are seemingly atypical and ambiguous respectively. Similar visualizations for other image datasets can be found in Appendix B.2.\nOther Modalities The forgetting and learning dynamics occur broadly across modalities apart from images. We repeat the same problem setup on the SST-2 [45] dataset for sentiment classification. We fine-tune a pre-trained BERT-base model [14] successively on two disjoint splits of the dataset. In Table 1, we provide a list of the earliest forgotten samples when we train a BERT model on the second split of SST-2 dataset. The results suggest that SSFT is able to identify mislabeled samples.",
"4.3 Ablation Experiments": "We design specific experimental setups to capture the three notions of hardness as defined in Section 3.\nMislabeled Examples We sample 10% datapoints from both the first and second split of the CIFAR10 dataset, and randomly change their label to an incorrect label. Figure 3a shows the learningforgetting spectrum for the dataset. In the adjoining density histograms, note that a large fraction of the mislabeled and correctly labeled examples are learned at the same time. However, during secondsplit training, the mislabeled examples are forgotten quickly whereas a large fraction of the clean examples are never forgotten, allowing SSFT to succeed in distinguishing mislabeled samples.\nComplex Examples We generate a joint dataset that contains the union of both MNIST and CIFAR10 examples. This is motivated by work in simplicity bias [43] that argue that neural networks learn simpler features first. We also add 10% labeled noise to each of the datasets in the union to understand the learning and forgetting time relationship of a sample that is complex or mislabeled together. In Figure 3b, we show the FSLT and SSFT for MNIST and CIFAR-10 samples. We note that a high fraction of the CIFAR-10 (complex) samples learn at the same speed as the mislabeled samples. However, when looking at the SSFT, we are able to draw a strong separation between the mislabeled samples and complex samples. This indicates that the complexity of a sample has low correlation with its tendency to be forgotten once learnt, but a high correlation with being learned slowly.\nRare Examples The CIFAR-100 [29] dataset is a 100-class classification task. The dataset contains 20 superclasses, each containing 5 subclasses. We create a 20-class classification dataset with long tails simulated through the 5 sub-classes within each superclass. More specifically, the number of examples in each subgroup for a given superclass is given by {500, 250, 125, 64, 32} respectively (exponentially decaying with a factor of 2). This is done to simulate the hypothesis of dataset subgroups following a Zipf distribution [49] as argued for by Feldman [15]. This dataset is further divided into two equal splits to analyze the learning-forgetting dynamics. In order to remove any other effects of example hardness (either within a subgroup, or among subgroups), we randomize both the chosen subset of examples and the ordering of the majority and minority groups between each superclass, by training the model on 20 such random splits and aggregating learning and forgetting statistics over these runs. In Figure 3c, we show a scatter plot for the FSLT and SSFT, colored by the frequency of the group a particular example belongs to. We observe that FSLT strongly correlates with the size of the subgroup, whereas the SSFT has a very low correlation with the rareness of a sample.\nWe provide further ablations to show that FSLT is able to identify hard and rare examples, but SSFT shows nearly no discriminative power at finding the two in Appendix C.",
"4.4 Dataset Cleansing": "Identifying Label Noise We present AUC scores for detection of label noise via various popular methods in example difficulty literature, across various datasets in Table 2. We note that (i) cumulative predictions over the course of training help stabilize both the learning time and forgetting time metrics;\n(ii) for simple datasets such as MNIST with few ambiguous images, all of the baseline methods have very high AUC (greater than 0.99) in finding noisy inputs. However, in datasets such as CIFAR-10 and Imagenette, we find that second-split forgetting metrics do better than first-split training metrics. Finally, we also compare the use of both forgetting and learning time to find noisy samples, and we find a small improvement in the results of just using the forgetting time. While we do not make explicit comparisons with other state of art methods dedicated to finding label noise, our results suggest that augmenting second split forgetting time information may help improve their results. As also observed in recent work [25], we find that the number of forgetting events (nf ) [47] is an unreliable indicator of mislabeled samples. We hypothesize that this is because of the fact that mislabeled examples may often be (first) learnt very late, hence their count of total forgetting events is also low.\nCleaning synthetically generated datasets Generative models are capable of mimicking the distribution of a given dataset. We generate synthetic datasets of CIFAR10-like samples using (i) DDPM (denoising diffusion model [20]); and (ii) DCGAN (Deep Convolutional GAN [41]). In both cases, we assign pseudo-labels using the BiT model [28] as in prior work [36]. We collect a sample of 50,000 training examples and record the generalization performance on CIFAR-10 as we remove ‘hard’ samples, as evaluated by various metrics. In Figure 4, we can see that removing the most easily forgotten examples can benefit by up to 10% generalization accuracy on the clean test set of CIFAR-10. In case of the synthetic data generated using DDPM, the gains in generalization performance are under 2%. We hypothesize that this is because the samples generated by DDPM are more representative of the typical distribution of CIFAR-10 than those generated by DCGAN.\nNote: The ability to train on a second split allows SSFT the unique opportunity to train on a clean split of CIFAR-10 in order to assess the alignment of the synthetic samples with the oracle samples. As a result, the SSFT is much more effective in filtering out ambiguous first-split synthetic examples.",
"4.5 Evaluating Example Utility": "Recent works [16, 47] have argued for removing a large fraction of the less memorized examples, and keeping the memorized ones. We will analyze the change in model generalization upon removing varying sizes of examples from the training set, as ranked by lowest SSFT and highest FSLT (Figure 4). In the presence of noisy examples, removing samples based on the SSFT helps improve generalization, whereas FSLT does not do much better than random. We draw the following inferences:\nFSLT finds important samples As we remove more samples from the dataset, the accuracy of the model trained after samples are removed based on the highest FSLT is significantly lower than random guessing. This suggests that the utility of these samples is higher than random samples. Put in line with the hypothesis of memorization of rare example as proposed in [15], we see that empirically, the examples that are slow to learn are important for the model’s test set generalization.\nSSFT removes pathological samples On the contrary, removing examples based on the SSFT helps improve model generalization (especially when there is label noise). Even in the setting when there is no label noise, in contrast to FSLT, we find that removing examples that were easily forgotten has a lower negative impact on the model’s generalization as opposed to removing random samples. This suggests that the examples that are forgotten in the early epochs of second-split training hurt a model’s generalization, and may not be characteristic samples of their particular class.\nPractitioner’s view From the AUC numbers in Table 2, it may appear that removing examples via learning-based metrics such as learning time and cumulative learning accuracy also provides a high rate of removal of noisy samples. However, when we observe the example utility graphs in Figure 4, we draw the inference that the examples that are learned late, are often important examples (such as rare memorized examples). However, even when SSFT fails to capture the correct noisy examples, it still removes unimportant samples and does not hurt generalization. Similar graphs for other metrics described in Table 2 can be found in Appendix B.\n4.6 Characterizing Potential Failure Modes\nRecent works have attempted to train classifiers on datasets that contain spurious features [24, 42] (example Waterbirds, CelebA [34] dataset). However, a fundamental challenge is to first identify the spurious correlation that the classifier may be relying on. Only then can recent methods be trained to remove the reliance on spurious patterns. We train a ResNet-9 model to classify CIFAR-10 images of horses and airplanes. In Figure 5, we observe that the model forgets planes with green backgrounds and horses with blue backgrounds. This suggests that the model relied on the background as a spurious feature. By analyzing the forgotten examples we can further investigate the examples that the classifier fails to generalize to.\nStability of SSFT We note that SSFT is stable across multiple seeds (Pearson correlation of 0.81), and across architectures (Pearson correlation of 0.63). While the overall correlation for samples ranked by SSFT may be low across architectures, the top-ranked examples have a high correlation (0.85), suggesting the most forgotten examples are consistent across architectures. In contrast, FSLT has a Pearson correlation of 0.52 across seeds. Most interestingly, the learning time metric is brittle to the choice of hyperparameters. 
As shown by Jiang et al. [25], when using the Adam optimizer, examples of different hardness are learned together. In our experiments, we observe the same phenomenon during learning; however, SSFT is robust to the choice of optimizer. Detailed results are in Appendix C.1.\nLimitations One limitation of the proposed metric is that it is brittle to the choice of the learning rate for second-split training. If we use a very small learning rate, then overparametrized deep models are capable of learning the new dataset without forgetting examples from the first split. Alternatively, if we use a very large learning rate, the model may diverge and undergo catastrophic forgetting. However, under ‘reasonable’ choices of learning rate (like that used for first-split training), we find SSFT is robust. We provide a detailed analysis in Appendix C.1.",
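To make the Section 4.5 comparison concrete, a sketch of the removal protocol under hypothetical user-supplied `train_fn`/`eval_fn` callables: SSFT removes the earliest-forgotten examples first, FSLT the latest-learned first, and test accuracy is tracked as the removed fraction grows.

```python
import numpy as np

def removal_orders(ssft: np.ndarray, fslt: np.ndarray):
    """Rankings used for removal: ascending SSFT (quickly forgotten first) and
    descending FSLT (slowest to learn first)."""
    return np.argsort(ssft), np.argsort(-fslt)

def utility_curve(train_fn, eval_fn, X, y, order, fractions=(0.1, 0.2, 0.3)):
    """Test accuracy after dropping the top fraction of `order`; compare against
    the same curve computed for a random permutation as the baseline."""
    accs = []
    for frac in fractions:
        keep = order[int(frac * len(order)):]
        accs.append(eval_fn(train_fn(X[keep], y[keep])))
    return accs
```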
"5 Theoretical Results": "Through our theoretical analysis, we will characterize the forgetting dynamics of mislabeled, rare and complex examples in a simplified version of the framework used for our synthetic experiments in Figure 2a. Recall, our setup contains two dataset splits SA,SB , where we train on the first split until achieving perfect accuracy on all training points, and then with these weights train on SB for infinite time. In particular, we will prove that both mislabeled and rare examples are forgotten upon training for infinite time, with mislabeled examples being forgotten much faster. Further, we will show that complex examples from the first split do not get forgotten if not continually trained on. We assume in our analysis that SB has no mislabeled or rare examples, and SA contains one example of each type.\nWe consider a dataset S = {xi,yi}n such that (xi,yi) 2 X ⇥ Y , and xi = µg + zi where zi ⇠ N 0, 2Id , and ||µg|| 2 2 = kµ\n2 (as in Section 4.2). Let w 2 Rd represent the weight vector of an overparametrized linear model. We analyze the learning and forgetting dynamics by minimizing the empirical risk: L(S;w) = P i `(w\n>yixi), where ` is the exponential loss. Following Chatterji and Long [9], we make the following assumptions about the problem setup:\n(A.1) The failure probability satisfies 0 1/C, (A.2) The number of samples satisfies n C log (1/ ), (A.3) The input dimension d Cmax{n2 log(n/ ), n(k·µ2/ 2)}, and k·µ2/ 2 C log (n/ ), where k·µ2/ 2 represents the signal to noise ratio and C is a large constant. Now we formalize the notions of rare, mislabeled and complex examples for our theoretical analysis. Definition 1 (Rare Examples, R [15]). Consider a dataset S sampled from a mixture of distributions {D1, . . . ,DN} with frequency {⇡1, . . . ,⇡N} respectively. Let R ✓ S be the set of rare examples. Then, for all (xi,yi) 2 R, if (xi,yi) ⇠ Dj , then there are O(1) samples from Dj in S .\nDefinition 2 (Mislabeled Examples, M). Consider a k class classification problem with Y = {1, 2, . . . , k}. Let M ⇢ S be the set of mislabeled examples. Then for any (x,y) ⇠ D, a corresponding mislabeled example is given by (x, ỹ) 2 M such that ỹ 2 Y \\ {y}.2\nDefinition 3 (Complex Examples, C). Let C ⇢ S be the set of examples sampled from complex distributions. Let (xi,yi) 2 C such that (xi,yi) ⇠ Dg (complex group), then µg = µt , > 1 where\nµt is the coordinate-wise mean for samples drawn from any simple distribution Dt (Section 4.2).\nOptimization We perform gradient descent with fixed learning rate ⌘,\nw(t+ 1) = w(t) ⌘rL(w(t)) = w(t) ⌘ X\ni\n` 0(w>yixi) · yixi. (3)\nSolution dynamics For sufficiently small learning rate ⌘, and (bounded) starting point w(0), Soudry et al. [46] showed that:\nw(t) = ŵ log t+ ⇢(t), (4)\nwhere ⇢(t) is a bounded residual term, and ŵ is the solution to the hard margin SVM:\nŵ = argmin w2Rd\n||w||22 s.t. w >yixi 1, (5)",
"5.1 First-split Learning": "For stage 1, we consider that we train the model for a maximum of T epochs (until we achieve 100% accuracy on the first training dataset SA). This means that the learned weight vectors are close to, but have not converged to the max margin solution. The solution at the end of t epochs is given by wA(t). At sufficiently large T , we have:\nwA(T ) = ŵA log T + ⇢A(T )\nwA(T ) >yixi 1 8(xi,yi) 2 SA\n(6)\n2For binary classification, Y = { 1,+1}. The labels are reversed for mislabeled examples.",
"5.2 Second-split Forgetting": "We initialize the weights for second stage of training with wA(T ) from first training stage, and then train on SB . We provide the formal theorem statement and complete proofs in Appendix A, but provide informal theorem statements and an intuitive proof sketch below: Theorem 1 (Asymptotic Forgetting (informal)). For sufficiently small learning rate, given datasets SA,SB ⇠ D n . After training for T 0 ! 1 epochs, the following hold with high probability:\n1. Mislabeled and Rare examples from SA are forgotten.\n2. Complex examples from SA are not forgotten.\nProof Sketch. We use the result from Soudry et al. [46] that for any bounded initialization, when trained on a separable data, the model converges to the same min-norm solution. As a result, we can ignore the impact of SA at infinite time training. Then, we use generalization bounds from Chatterji and Long [9] to argue about the accuracy on mislabeled and complex examples. For the case of rare examples, we show that the probability of correct model prediction can be approximated by a Gaussian CDF with mean 0 and O(1/ p n) variance.\nTheorem 2 (Intermediate-Time Forgetting (informal)). For sufficiently small learning rate, given two datasets SA,SB ⇠ D n . For a model initialized with weights, wB(0) = wA(T ) and trained for T 0 = f(T) epochs, the following hold with high probability:\n1. Mislabeled examples from SA are no longer incorrectly predicted.\n2. Rare examples from SA are not forgotten.\nProof Sketch. SB contains examples from the same majority distributions as SA. The mislabeled example also belongs to one of these distributions, but has the opposite label. However, SB does not have samples from rare groups found in SA. Using representer theorem, we decompose the model updates into a weighted sum of each training data point in SB . Then, we analyze the change in prediction on rare and mislabeled examples, which is a dot product of the weight update with xm or xr. Per our assumptions, the the mean of each group µg is orthogonal to the other. As a result, the rare example finds negligible coupling with any example in SB , and the variance of its prediction keeps increasing due to the noise term contributed in the model weights from each example in SB . On the contrary, the mislabeled examples have a strong coupling with all the examples in its group. Due to its incorrect label, the mean of its predictions moves towards the correct label, with variance increasing at a similar rate. The final step is to jointly analyze the rate of change of prediction of both the examples, and find an optimal time T 0 when the prediction on the mislabeled example is flipped and the rare example still retains its prediction with high probability.",
"6 Conclusion": "While many prior works investigate training time dynamics to characterize the hardness of examples, we enrich this literature with a complementary lens focused on the second-split forgetting time. We demonstrate the potential of SSFT to distinguish among rare, mislabeled, and complex examples; and also show the differences in the example properties captured by first-split and second-split metrics.\nOur work opens new lines of inquiry in future work that can utilize the separation of hard examples. First, we expect state of art methods in label noise identification to benefit by augmenting our approach. Further, we believe our ablations showing that complex, noisy, and mislabeled samples may all be learned slowly inspire future work that can unite different takes on the memorizationgeneralization research—early learning, simplicity bias, and singleton memorization.",
"Acknowledgements": "We would like to thank Aakash Lahoti and Jeremy Cohen for their insightful comments on this work. SG acknowledges Amazon Graduate Fellowship and JP Morgan PhD Fellowship for their support. ZL acknowledges Amazon AI, Salesforce Research, Facebook, UPMC, Abridge, the PwC Center, the Block Center, the Center for Machine Learning and Health, and the CMU Software Engineering Institute (SEI) via Department of Defense contract FA8702-15-D-0002, for their generous support.",
"Reviewer Summary": "Reviewer_3: This paper studies the training time dynamics and the hardness of examples. It proposes a new metric to complement existing metrics called second-split forgetting time (SSFT). The paper studies three types of hard examples, including mislabeled examples, rare examples, and complex examples. The paper empirically shows that SSFT and FSLT together can be effective in identifying noisy labels, improving generalization in noisy data settings, identifying failure modes, and being robust to random seeds. In addition, the paper studies SSFT in theory based on a toy example.\n\nReviewer_4: The paper proposes a new approach to analyse the learning and forgetting of examples in training deep neural networks, working on range of domains from vision to language. This approach shows promise in identifying hard examples, and differentiating rare classes mislabelled examples in particular.\n\nReviewer_5: The authors propose a new method for quantifying how \"hard\" training examples are. Previous works considered methods, such as based on the number of times an example is seen before it is classified correctly or the number of times its predicted label is flipped during training. In this work, the authors propose second-split forgetting time (SSFT), which is the time needed to forget a training example if the model is fine-tuned on a different subset of data. The authors argue that SSFT can distinguish between examples that are hard because the labels are noisy vs. examples that are hard because they come from a rare subpopulation. They also discuss other applications of SSFT. Finally, the authors argue that SSFT is robust to the choice of architecture, optimizer, and so on.\n\nReviewer_6: This paper proposes a new (combination) of metrics to characterize datapoints in dataset, namely the First-Split Learning Time (FSLT) and the Second Split Forgetting Time (SSFT). These measure respectively for all the datapoints in the dataset used during training how fast they are learned, and how quickly they are forgotten, when retrained on a held out portion of the training set. The paper evaluates/investigates the relationship between FSLT and SSFT with respect to mislabeled samples, rare samples, and complex samples. This is done on multiple modalities (various (altered/syntetic) image datasets and a sentiment classification dataset. These metrics can effectively distinguish between these different examples, based on inspecting the effect on one/both metrics.\nMoreover, the paper also introduces some theoretical results to support their experiments"
}