| { |
| "File Number": "1103", |
| "Title": "Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data", |
| "Limitation": "Limitations and future opportunities. 1⃝ While Data-IQ characterizes examples, the current formulation does not allow us to understand which attributes are responsible for the characterization per example. This would be an interesting extension around dataset explainability, allowing practitioners to better probe their data. 2⃝ In high-stakes settings such as healthcare, to mitigate possible adverse effects (e.g. difficulty of Easy vs Hard), Data-IQ should be used with a “human-in-the-loop”, allowing experts to complement and validate findings with domain knowledge.", |
| "Reviewer Comment": "Reviewer_3: Originality: the idea of using prediction outcomes to quantify the uncertainty of data samples is not new, e.g. in some active learning approaches the next queried data point would be the sample with predicted probability close to 0.5 as it is the most uncertain. The novelty of Data-IQ lies in that it quantifies the aleatoric uncertainty of data samples by evaluating the variance of the prediction outcome which intends to capture the inherent uncertainty of the data samples rather than the uncertainty with regard to the model parameters. Therefore, Data-IQ is capable of providing guidance to improve data quality in new directions with regard to the dataset itself, e.g. new feature acquisition.\nQuality: the evaluation of the variance of the prediction outcome is straightforward, by noting that the classification outcome is a Bernoulli random variable, the variance is p(x, θe)(1 − p(x, θe)), in which p(x, θe) is the predicted probability using the model parameters θe at training epoch e. Then the aleatoric uncertainty is quantified as the average variance over all training epochs and categorization of samples is based on thresholds chosen by heuristics. The analysis of the categorization on real-world datasets is quite comprehensive, e.g. how to use Data-IQ to assess the quality of a new feature and how the Ambiguous samples will impact model generalization etc., but with regard to methodological perspectives, the technical novelty / contribution is not significant.\nClarity: the paper is well written and easy to follow. All the figures and tables are very clear.\nSignificance: I think the research topic in this paper is of high practical value as the assessment of data quality (uncertainty) is very important for training high-quality machine learning models and obtaining reliable predictions on unseen data. 
However, as stated in Quality, the methodological / theoretical novelty of the method is not very high.\nQuestions:\nThe aleatoric uncertainty is quantified as the average variance over all training epochs. I am wondering what the distribution of the variance of a single data sample over the training epochs looks like. If the distribution of one training example is skewed, e.g. the variance is high at the beginning but it becomes much lower as the model parameters are refined during training, and the variance of another training example remains at a medium level during training, these two training examples might end up with similar average uncertainty but the training dynamics are very different. In this case, would the average variance still be a good metric to evaluate the uncertainty of an individual data sample?\nThe evaluation is mainly conducted on neural networks, which demonstrates that Data-IQ is robust to different parametrizations of the same model type. Would it be possible to compare the results across different model types? Without comparison across model types, e.g. neural networks vs. 
gradient boosting, the claim that Data-IQ is robust to model variation is less convincing as the aleatoric uncertainty might still be pertinent to the model class (though not to different parameters under the same model type).\nLimitations:\nThe authors have addressed the limitations and I think another potential limitation is how robust Data-IQ will be across different model types as stated in Q2 in Questions.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 4 excellent\nContribution: 3 good\n\nReviewer_4: Overall I like this paper, even though there are a few weaknesses.\nStrengths\nThe paper provides several reasons to model model uncertainty and conducts several ablation experiments to justify each use case.\nThe multiple ablation studies, including the change in estimated uncertainty with addition of features (Figure 6) as well as domain shift (Figure 7) are welcome.\nThe comparison of output of the model along with a Group DRO objective to improve subgroup level performance adds to the completeness of an already exhaustive set of experiments.\nWeaknesses\nThe way the paper proposes to estimate the variance around the model outcomes is by assuming the model parameters at the end of each epoch to be sampled from a distribution. While interesting, such a choice is not characteristic of an empirical distribution in the strictest sense. In this case the model parameters are going to be highly correlated, which would break the assumption of sampling from an IID distribution.\nQuestions:\nHave the authors considered other potential sources of incorporating randomness for the aleatoric uncertainty they are trying to capture over the distribution of v, for example perhaps the use of a Bayesian prior on v or dropout masks?\nLimitations:\nAs indicated above the distribution of v is intimately tied to the model choice and the optimization approach. 
Even when the model class is the same, the hyperparameter choices for the optimizer, like step size and momentum, might lead to vastly different distributions in the weights.\nSince the authors propose to use the estimated groups as 'protected groups', it would be welcome to see more extensive discussion of how these choices affect recovery of the estimated groups.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_5: There are several interesting aspects of this paper:\nThe proposed method of identifying the sub-groups using the training performance of the model and the data uncertainty aspect, while simple, is intuitive, and more importantly the presented analyses support the importance of this formulation.\nThe identified 4 paradigms for a useful sub-group discovery are interesting, especially from an MLOps perspective. The paper is masterfully presented, describing and motivating each of these paradigms and analyzing how the proposed method addresses each of these requirements. Especially, the plug and play aspect of this method is very appealing. The results presented in the appendix comparing the classical GBT models are also interesting (perhaps could be included in the main paper).\nThe insights about the importance of the Ambiguous groups are perhaps the most interesting aspect of the paper. Pertinently, the novel usage of this identified sub-group for aspects such as identifying the most important features to comparing the value of a dataset is interesting.\nWhile the paper itself is quite interesting, there are a few aspects to improve upon:\nThe figures are very low resolution, which hinders readability. While acknowledging the space limitations, some of the sections/paragraphs could be moved to the supplementary section. For example, Figure 4 and the corresponding paragraph comparing against Data Maps is quite intuitive and may not need to be part of the main paper. 
The y-axis is closely related to the metric for val and the shape is quite expected. Similarly, Fig. 8 can be argued to be intuitive and a direct consequence of the selected measure.\nTable 1 and the corresponding paragraph may need further context. The discussion ignores the context around the fidelity of the synthetic data to actual ground truth. E.g. guarding against trivially Easy data.\nFinally, while the paper is quite well written, the actual method to calculate the Easy, Ambiguous, and Hard groups is not presented properly - it should be presented at least as an algorithm in the supplementary section.\nQuestions:\nSome questions for the authors:\nWhile the authors rely on the training dataset performance, it's not evident that their results and/or formulation is dependent on it. For example, the results in the supplementary section show how the val stabilizes after a few epochs. One can also draw a corollary from the steady-state dynamics of state-space models and so on. Equation 1 also approximates the aleatoric uncertainty as an expectation. One can argue that for converged models, the steady state value may be enough. In contrast, would a burn-in period have an impact on this estimate?\nIt seems that the entire formulation is true while comparing models of similar performance. How does this formulation play into Eqns. (1) and (2)?\nSection 4.3: How many ambiguous datapoints were in each dataset? 
The absolute number is important while analyzing the importance of the effect of removing such datapoints on model performance.\nLimitations:\nN/A\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 3 good\nContribution: 4 excellent\n\nReviewer_6: Strengths:\nThe paper deals with a significant problem: better understanding how difficult a data point is for classifiers to classify accurately.\nThe proposed approach that decomposes the accuracy of a given example into a summation of epistemic uncertainty and aleatoric uncertainty is clean and informative of the difficulty of classifying a given example.\nThe presentation of the paper is clear. Experiments are illustrative.\nWeakness:\nGiven the generality of the proposed method, the paper may consider evaluating the proposed method beyond tabular data.\nQuestions:\nIn equation (2), is the estimate based on the empirical process a good estimate for the two uncertainty quantities given that the samples in the empirical process might be correlated with each other rather than IID?\nLimitations:\nThe paper discusses the limitations of the proposed method. Finding the attributes that are responsible for the hardness of the data points seems to be particularly interesting and relevant for the interpretability of machine learning algorithms.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 4 excellent\nContribution: 3 good", |
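Reviewer_3's question about skewed training dynamics can be made concrete with a small sketch. The trajectories below are invented for illustration, and `avg_bernoulli_variance` is an illustrative name, not the paper's code: two examples whose per-epoch Bernoulli variances average to nearly the same score despite very different dynamics.

```python
from statistics import mean

def avg_bernoulli_variance(probs):
    # Average over epochs of p(1 - p): Reviewer_3's reading of the aleatoric metric
    return mean(p * (1 - p) for p in probs)

# Example i: uncertain early, confident late (skewed dynamics)
skewed = [0.5, 0.5] + [0.9] * 8
# Example ii: moderately confident throughout (flat dynamics)
flat = [0.86] * 10

v_skewed = avg_bernoulli_variance(skewed)   # (2*0.25 + 8*0.09) / 10 = 0.122
v_flat = avg_bernoulli_variance(flat)       # 0.86 * 0.14 ≈ 0.1204
# Nearly identical average scores despite very different training dynamics
```

The two examples land within 0.002 of each other on the averaged metric, which is exactly the ambiguity the reviewer is probing.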
| "abstractText": "High model performance, on average, can hide that models may systematically underperform on subgroups of the data. We consider the tabular setting, which surfaces the unique issue of outcome heterogeneity: this is prevalent in areas such as healthcare, where patients with similar features can have different outcomes, thus making reliable predictions challenging. To tackle this, we propose Data-IQ, a framework to systematically stratify examples into subgroups with respect to their outcomes. We do this by analyzing the behavior of individual examples during training, based on their predictive confidence and, importantly, the aleatoric (data) uncertainty. Capturing the aleatoric uncertainty permits a principled characterization and then subsequent stratification of data examples into three distinct subgroups (Easy, Ambiguous, Hard). We experimentally demonstrate the benefits of Data-IQ on four real-world medical datasets. We show that Data-IQ’s characterization of examples is most robust to variation across similarly performant (yet different) models, compared to baselines. Since Data-IQ can be used with any ML model (including neural networks, gradient boosting etc.), this property ensures consistency of data characterization, while allowing flexible model selection. Taking this a step further, we demonstrate that the subgroups enable us to construct new approaches to both feature acquisition and dataset selection. Furthermore, we highlight how the subgroups can inform reliable model usage, noting the significant impact of the Ambiguous subgroup on model generalization.", |
| "1 Introduction": "Most machine learning models are optimized using empirical risk minimization (ERM), to maximize average performance during training [1]. However, in real-world settings, while models may perform well on average, they might underperform on specific subgroups of data [2–4]. Most of the current literature has focused on this problem in computer vision, where the underperforming subgroups are typically associated with data examples that have spurious correlations [1, 5] or mislabelling [6].\nIn this paper, we focus on tabular data, the most ubiquitous format in medicine and finance, where data is based on relational databases [7, 8]. Specific to the tabular setting, we formalize an understudied source of underperformance, namely heterogeneity of outcomes. This phenomenon is vital in healthcare, where patients with similar features can have different outcomes [9–11]. For example, [12] showed that prognostic models for risk prediction perform well on average, but underperform on specific cancer types due to heterogeneity of risk (outcome).\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nPrior works have audited subgroups belonging to sensitive attributes (e.g. demographics, race or gender), as it is well-known that ML models generally underperform on these subgroups [13, 14]. However, this approach is limiting, as it needs the sensitive attributes to be specified, and it also does not capture the case where complex feature interactions may lead to underperformance.\nWe take a different approach to automatically stratify data into subgroups, usable with any ML model trained in stages (epochs/iterations); e.g., neural networks, gradient boosting etc. Specifically, we study the behavior of individual examples during training, called training dynamics. This allows us to formalize that examples can lie on the spectrum from easy to hard to predict. More concretely, let’s consider the task of patient mortality prediction. 
Based on their features, sicker patients more often have a mortality event. Thus, they are easy to learn for any model and will be predicted correctly with high confidence (Easy). However, a subgroup might have a heterogeneous outcome: survival despite their poor prognosis. This heterogeneity could result from randomness, making it practically impossible for a model to learn. These examples will be predicted incorrectly yet with high confidence (or equivalently have low confidence for the correct class) (Hard). In tabular data, there are also examples with inherent ambiguity where the predicted probability for the correct class remains low. They appear where the current features are insufficient to distinguish the example correctly, regardless of the model used [15, 16] (Ambiguous). These subgroups naturally arise in real data; see Fig.1 (ii).\nIdentifying these subgroups is practically valuable, as improving accuracy and robustness often depends on the data’s characteristics and quality [17–20]. As mentioned in [20, 21], the “data” work is often undervalued as merely operational, yet failing to account for it can have immense practical harm [12, 20]. Consequently, our goal is to build a systematic framework with the following desired properties (P1-P4), motivated by the considerations of practitioners at various stages of the ML pipeline. In satisfying P1-P4, we seek to address the “dire need for an ML-aware data quality that is not only principled, but also practical for a larger collection (. . .) of ML models” [19]:\n(P1) Robust data characterization: the characterization of data examples should be robust, such that it is consistent across similar performing models, that have different architectures/parametrizations. (P2) Principled data collection: the characterization should be informative and actionable, providing practitioners insights that enable both quantitative feature collection and selection between datasets. 
(P3) Reliable model deployment: the characterization should enable reliable model usage, both by unmasking unreliable subgroups and by using the subgroups to tailor the data for better performance. (P4) Plug & play: the characterization should be applicable to a variety of ML models widely used on tabular data, including neural networks, gradient boosting (and variants) etc.\nTo fulfill P1-P4, we propose Data-IQ, a systematic framework that characterizes examples based on the inherent qualities (IQ) of the data; at both training and deployment time. As outlined in Fig.1, Data-IQ leverages confidence and, in the “data-centric AI” spirit, focuses on the data: aleatoric uncertainty (i.e. uncertainty inherent to the data). This permits Data-IQ to provide ML-aware data quality that is principled and practical for a variety of ML models, making the following contributions:\nContributions: 1⃝ Data-IQ models the aleatoric (data) uncertainty, which permits subgroup identification that is most robust to variation across different yet similar performing models/parameterizations, compared to other baselines, i.e. P1. 2⃝ Data-IQ aids with principled data collection P2 in two ways: Firstly, it permits us to quantify the value of an acquired feature by measuring how the feature reduces the aleatoric uncertainty of the example. This information enables a more principled approach to feature acquisition. Secondly, it permits us to compare datasets based on the proportion of ambiguous examples. We demonstrate that the proportions link to how well a model trained with the dataset generalizes. 3⃝ Experimentally, the subgroups identified by Data-IQ can inform reliable model deployment, i.e. P3. We highlight cases where assessment on average might mask unreliable performance, including data sculpting, model robustness, and uncertainty estimation methods. 4⃝ Data-IQ by construction is “plug-and-play” i.e. 
P4 with any ML model that can be checkpointed, granting practitioners flexibility to apply Data-IQ to their model of choice.", |
| "2 Related work": "This paper primarily engages with the literature on data characterization and contributes to the nascent area of data-centric AI [22, 23]. An extended discussion of related work is found in Appendix A.1.\nData characterization. The literature to characterize data samples has used a myriad of different metrics. However, their goals have typically been different, such as spurious correlation or mislabelling, compared to Data-IQ, whose goal is to characterize subgroups with respect to the outcome predictions. Furthermore, none of these methods completely addresses all the desired properties (P1-P4). The closest to our work on data quality is Data Maps [24]. A key contrast to Data-IQ is that Data Maps use confidence and prediction variability to flag instances. In Sec. 3, we show that this prediction variability corresponds to the model uncertainty (i.e. epistemic uncertainty). Alternatively, Data-IQ takes a different and more principled approach, capturing the inherent data uncertainty (known as aleatoric uncertainty) [25]. Epistemic uncertainty is reducible by collecting more data. In comparison, aleatoric uncertainty is irreducible even with more samples. This is due to the fact that it captures properties inherent to the data [25–27]; only better features can reduce the aleatoric uncertainty [27]. Later in Fig. 4, we show on real data that capturing the aleatoric uncertainty allows Data-IQ to be more robust to variation across different models, compared to Data Maps (P1). This allows practitioners to characterize their data in such a way that the insights are more consistent. We further show theoretically in Sec. 
3.3, why the characterization by Data-IQ indeed provides a more principled definition for Ambiguous examples, compared to Data Maps.\nBesides Data Maps, other related methods address specific computer vision problems: identifying mislabelled images using area under the margin (AUM) [6], gradient norm to identify “important examples” to aid pruning during training [28], or underperformance due to spurious image correlations [1]. The tabular setting considered in this paper requires new methods, due to the specific problem of heterogeneous outcomes for examples with similar features (i.e. “feature collision”). The ambiguity in the tabular, “feature collision” sense, is different or non-existent in modalities such as images.\nData-Centric AI. The assessment of data quality is a critical but often overlooked problem in ML [20]. While the focus in ML is typically on optimizing models, the task of ensuring high-quality data (or even improving one’s data) can be equally valuable for improving performance [17, 20]. Even when it is considered, the process of assessing datasets is ad hoc or artisanal [20, 22, 29, 30]. The recent growth of the data-centric AI space aims to build systematic tools for “data collection, labeling, and quality monitoring processes for datasets to be used in machine learning” [23, 30]. Data-IQ contributes to this nascent body of work, specifically around ML-aware data quality monitoring [19].", |
| "3 Formulation": "This section gives a detailed formulation of Data-IQ and motivates our proposed example stratification that uses aleatoric uncertainty and confidence. We then describe how Data-IQ stratifies examples into subgroups at both training and testing time. Finally, we show Data-IQ’s formulation permits usage with any ML model trained in stages, e.g. neural networks, GBDTs etc, unlike other approaches.", |
| "3.1 Preliminaries": "We consider the typical supervised learning setting, where the aim is to assign an input x ∈ X ⊆ R^dX to a class y ∈ Y ⊂ N. We have a dataset D with N ∈ N∗ examples, i.e. D = {(xn, yn) | n ∈ [N]} drawn IID from an unknown distribution. Our goal is then to learn a model fθ : X → Y, parameterized by θ ∈ Θ. Typically, the parameters θ are learned to minimize empirical risk, by minimizing the average training loss, i.e. ERM(θ) = (1/N) ∑_{i=1}^{N} ℓ(xi, yi; θ), with a loss function ℓ : X × Y × Θ → R+.\nThis brings us to the essence of the problem: “not all examples are created equally”. E.g., patients with similar features might have heterogeneous outcomes, reflected in their labels y being different. These correspond to subgroups within Dtrain on which a predictive model might systematically underperform. We formalize this concept of hidden heterogeneous subgroups by assigning to each example xn a hidden subgroup label gn ∈ G, where G = {Easy, Ambiguous, Hard}. Before giving a precise description of how those group labels are assigned, it is useful to detail the context. Several works have established that the training dynamics of a model contain signal about the quality of the data itself [31–33]. For instance, it takes more epochs/iterations for a model to assign the correct label to noisier/more difficult training examples. With Data-IQ, we build on those observations and assign a label gn to each example xn by studying its training dynamics, which are then used to estimate the aleatoric uncertainty and predictive confidence of each example. The following sections detail how this is done and how this contrasts with existing approaches.", |
| "3.2 Uncertainty decomposition during training": "Recall that practitioners desire flexibility in the choice of the model. Hence, we focus on any ML model that is trained in stages and can be checkpointed during training, fθ : X → Y parameterized by θ and on a given example from the training set (x, y) ∈ Dtrain. Assume that the model fθ corresponds to a conditional categorical distribution, assigning a probability to each class given the input x: fθ(x) = P(Y | X = x, ϑ = θ). During iterative training, the model parameters θ vary, where over E ∈ N∗ epochs/iterations, these parameters take E different values at each checkpoint, i.e. θ1, θ2, ..., θE. Since our analysis relies on the model’s training dynamics, we want to take those different parameters into account. For the sake of notation, we introduce a random variable ϑ that has an empirical distribution over this set of parameters captured through the training process ϑ ∼ Pemp({θe | e ∈ [E]}). The variability of the model’s parameters at training time is then reflected by the variance Vϑ[·]. The uncertainty we model is based on the random variable Y | X = x that represents the possible labels given the input x. Since the ground-truth label y is available for training examples, we would like to distinguish between 2 cases: 1⃝ the predicted label corresponds to the ground-truth label Y = y and 2⃝ the predicted label is different from the ground-truth label Y ≠ y. To this end, we introduce a binary random variable Ỹ that is set to one when the predicted label equals the ground-truth label (Ỹ = 1 if Y = y) and that is zero otherwise (Ỹ = 0 if Y ≠ y). As discussed earlier, we are interested in the uncertainty on the predictive random variable Ỹ | X = x. This uncertainty is modeled by the variance v(x) = V_Ỹ|X[Ỹ | X = x]. 
We will now show that this quantity can be evaluated with the model predictions.\nWe start by noting that the definition of Ỹ implies that Ỹ | X = x, ϑ = θ is a Bernoulli random variable with parameter¹ P(x, θ) = P(Y = y | X = x, ϑ = θ) = [fθ(x)]y. From this observation, we can decompose v(x) with the law of total variance and make each term explicit:\nv(x) = Vϑ[E_Ỹ|X,ϑ[Ỹ | X = x, ϑ]] + Eϑ[V_Ỹ|X,ϑ[Ỹ | X = x, ϑ]] = Vϑ[P(x, ϑ)] + Eϑ[P(x, ϑ)(1 − P(x, ϑ))], (1)\nwhere the first term (the variance of the Bernoulli mean P(x, ϑ)) is the epistemic uncertainty vep(x), and the second term (the expectation of the Bernoulli variance P(x, ϑ)(1 − P(x, ϑ))) is the aleatoric uncertainty val(x).\nIn Eq. (1), we have split the overall uncertainty into two components: epistemic and aleatoric uncertainty. This type of decomposition is similar to those in the context of Bayesian neural networks [34, 35].\n¹ In this case, [fθ(x)]y denotes the component y of the probability vector fθ(x).\nTo understand the distinction between uncertainties, it is useful to closely examine the variances in the first equality of Eq. (1). For epistemic uncertainty vep, the variance is evaluated on the model parameters ϑ. Hence, epistemic uncertainty originates from the fact that a model’s predictions oscillate when we change its parameters. For the aleatoric uncertainty val, the variance is evaluated on the predicted label Ỹ | X, ϑ. Hence, the variability originates from the inability to predict the correct label with high confidence. While existing works use epistemic uncertainty to stratify examples, we argue that aleatoric uncertainty is a more principled choice to capture the inherent data uncertainty.", |
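The decomposition in Eq. (1)/(2) can be sketched numerically on an invented checkpoint trajectory; function and variable names below are illustrative, not the paper's code. The law of total variance also gives a built-in sanity check: for a Bernoulli Ỹ, the two terms must sum to P̄(x)(1 − P̄(x)).

```python
from statistics import mean, pvariance

def uncertainty_decomposition(probs):
    """Sketch of Eq. (2): probs[e] = P(x, theta_e), the model's probability
    for the ground-truth class at checkpoint e. Returns (v_ep, v_al)."""
    v_ep = pvariance(probs)                    # epistemic: variance over checkpoints
    v_al = mean(p * (1 - p) for p in probs)    # aleatoric: mean Bernoulli variance
    return v_ep, v_al

probs = [0.3, 0.5, 0.7, 0.8, 0.9]              # an invented training trajectory
v_ep, v_al = uncertainty_decomposition(probs)
p_bar = mean(probs)
# Law of total variance (Eq. 1): v_ep + v_al equals P̄(1 - P̄) for the Bernoulli Ỹ
assert abs((v_ep + v_al) - p_bar * (1 - p_bar)) < 1e-12
```

For this trajectory, v_ep ≈ 0.0464 and v_al ≈ 0.184, summing to 0.64 × 0.36 = 0.2304 as the identity requires.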
| "3.3 Stratification based on data uncertainty": "We now explain how the above notion of uncertainty permits us to assign a group label g ∈ G to each training example x. First, we use the empirical distribution ϑ ∼ Pemp({θe | e ∈ [E]}) to explicitly write the two types of uncertainties in Eq. (1), where P̄(x) = (1/E) ∑_{e=1}^{E} P(x, θe):\nvep(x) = (1/E) ∑_{e=1}^{E} [P(x, θe) − P̄(x)]²,  val(x) = (1/E) ∑_{e=1}^{E} P(x, θe)(1 − P(x, θe)). (2)\nStratification at training time. Before giving a precise definition of the group labels, let us give an intuitive definition for each group. 1⃝ Easy: examples that have low data uncertainty and that the model can correctly predict with high confidence, 2⃝ Ambiguous: examples that have high data uncertainty, hence the model is unable to predict with confidence and 3⃝ Hard: examples that have low data uncertainty but that the model is unable to predict (i.e. predicted incorrectly yet with high confidence, or equivalently having low confidence for the correct class). We note that we need the model’s prediction for the ground-truth class to delineate Easy and Hard examples. In practice, we use the model’s average confidence for the ground-truth class P̄(x) defined previously for this purpose. We make use of this concept to detail how labels are assigned to training examples (x, y) ∈ Dtrain:\ng(x, Dtrain) = Easy if P̄(x) ≥ Cup ∧ val(x) < P50[val(Dtrain)]; Hard if P̄(x) ≤ Clow ∧ val(x) < P50[val(Dtrain)]; Ambiguous otherwise, (3)\nwhere Cup and Clow are upper and lower confidence thresholds resp. and Pn is the n-th percentile. We provide a practical method to set Cup and Clow, applicable to any dataset, in Appendix A.\nIn contrast to Data-IQ, which uses the aleatoric uncertainty val(x), Data Maps [24] identifies ambiguous training examples (x, y) ∈ Dtrain as those with high epistemic uncertainty vep(x). We consider a typical scenario to see how this characterization might cause problems – illustrated in Fig. 2. 
Consider an example x in which the model cannot classify confidently during the entire training: P(x, θe) = 0.5 ∀ e ∈ [E]. In this case, the epistemic uncertainty vep(x) vanishes, as the prediction is consistently unconfident (i.e. low variability of the model predictions). This implies that Data Maps would consider this example as non-ambiguous, despite the ambiguous model prediction for this example. This problem can be traced back to the definition of epistemic uncertainty, which measures the sensitivity of a model prediction with respect to the model’s parameters.\nA more principled definition for ambiguous examples should capture examples for which the model cannot predict the appropriate label with high confidence (i.e. data uncertainty). This is precisely what the aleatoric uncertainty val(x) captures (Data-IQ). Furthermore, it is easy to verify that the previous example P(x, θe) = 0.5 ∀ e ∈ [E] maximizes the aleatoric uncertainty (see Fig. 2). Since high aleatoric uncertainty captures ambiguous examples for various values of the model’s parameters, we believe that it better reflects the inherent quality of the data. In that sense, we expect this quantity to be more stable and robust to variation for different ML model parameters/architecture changes (P1). We experimentally validate the consistency in Sec. 4.\nStratification at inference time. Most previous methods are only applicable at training time. To address this limitation and improve the practical utility of our method, we also stratify examples into subgroups at deployment time. However, if we try to apply the above stratification for incoming data at deployment time, we face a problem: P̄(x) requires the ground-truth class y. For this reason, we follow an alternative approach based on representation learning that does not require access to ground-truth labels. The idea is the following: we construct a low-dimensional UMAP embedding [36] h : X → H of the training set’s examples xtrain ∈ Dtrain. 
In doing this, we note two things (see Appendix C): 1⃝ Ambiguous examples have distinctive features and are clustered in embedding space. Thus, it is possible to distinguish the Ambiguous examples using the embedding. 2⃝ It is not possible to reliably distinguish Easy examples from Hard examples based on the embedding, because Hard examples are a minority with outcome randomness whose features are similar to those of the Easy examples. Combining these observations, we note it is possible to identify Ambiguous test examples. This label is assigned by computing the related embedding h(xtest) and comparing this embedding to the nearest neighbor embedding from the training set, i.e. d[h(xtest), h(xtrain)] ∀ xtrain ∈ Dtrain. For models like neural networks with an implicit representation space, the same analysis can be done using the model’s representation space.", |
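The training-time rule in Eq. (3) can be sketched in a few lines. The thresholds `c_up`/`c_low` below are placeholder values (the paper sets them per-dataset, Appendix A), the trajectories are invented, and the function names are illustrative:

```python
from statistics import mean, median

def aleatoric(probs):
    # val of one example: mean over epochs of P(1 - P)
    return mean(p * (1 - p) for p in probs)

def stratify(trajectories, c_up=0.75, c_low=0.25):
    """Sketch of the Eq. (3) rule. trajectories[n] holds P(x_n, theta_e)
    over the checkpoints; P50 is the median val over the training set."""
    confs = [mean(t) for t in trajectories]       # P̄(x) per example
    v_als = [aleatoric(t) for t in trajectories]
    p50 = median(v_als)
    labels = []
    for conf, v_al in zip(confs, v_als):
        if conf >= c_up and v_al < p50:
            labels.append("Easy")
        elif conf <= c_low and v_al < p50:
            labels.append("Hard")
        else:
            labels.append("Ambiguous")
    return labels

trajs = [
    [0.95] * 5,                  # confidently correct throughout
    [0.05] * 5,                  # confidently wrong throughout
    [0.5] * 5,                   # maximal aleatoric uncertainty
    [0.5, 0.6, 0.4, 0.5, 0.5],   # oscillating, unconfident
]
labels = stratify(trajs)         # ["Easy", "Hard", "Ambiguous", "Ambiguous"]
```

Note the constant-0.5 trajectory lands in Ambiguous under this rule, matching the Sec. 3.3 argument that it maximizes val while its vep vanishes.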
| "3.4 Using Data-IQ with a variety of models, beyond Neural Networks (P4)": "The baseline methods discussed are primarily applicable only to neural networks. However, practically in tabular settings (e.g. healthcare/finance etc), practitioners often use other highly performant iterative learning algorithms such as Gradient Boost Decision Trees (GBDTs) or variants [7, 37]. Data-IQ’s formulation by construction is naturally adaptable to any ML model trained in stages, that can be checkpointed. This satisfies P4, which allows practitioners the flexibility to use Data-IQ with their application-specific model of choice. Appendix A provides guidelines, space and time considerations, as well as, discussing the specifics of how Data-IQ is easily adapted, for example to GBDTs.", |
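The staged-model adaptation described above only needs per-stage class probabilities. A minimal sketch, assuming a scikit-learn-style `staged_predict_proba` iterator contract; the helper name and the fake stages are illustrative, not the paper's implementation:

```python
def per_example_trajectories(staged_probs_iter, y_true):
    """Collect P(x_n, theta_e) for the ground-truth class at every boosting
    stage / checkpoint. staged_probs_iter yields, per stage, one row of class
    probabilities per example (the shape scikit-learn's staged_predict_proba
    produces); only that iterator shape is assumed here."""
    trajs = None
    for stage_probs in staged_probs_iter:
        if trajs is None:
            trajs = [[] for _ in stage_probs]
        for n, row in enumerate(stage_probs):
            trajs[n].append(row[y_true[n]])
    return trajs

# Stand-in for model.staged_predict_proba(X): 3 stages, 2 examples, 2 classes
fake_stages = [
    [[0.6, 0.4], [0.5, 0.5]],
    [[0.7, 0.3], [0.5, 0.5]],
    [[0.9, 0.1], [0.4, 0.6]],
]
y = [0, 1]
trajs = per_example_trajectories(iter(fake_stages), y)
# trajs[0] == [0.6, 0.7, 0.9]; trajs[1] == [0.5, 0.5, 0.6]
```

The resulting per-example trajectories can then feed the same val/vep computation used for checkpointed neural networks, which is what makes the method plug-and-play.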
| "4 Experiments": "This section presents a detailed empirical evaluation demonstrating that Data-IQ 23 satisfies (P1) Robust data characterization, (P2) Principled data collection and (P3) Reliable Model Deployment, introduced in Sec.1. Recall that (P4) Plug and play is satisfied by construction of Data-IQ.\nDatasets. We conduct experiments on four real-world medical datasets, with diverse characteristics (different sizes, binary/multiclass, varying degrees of task difficulty etc) and highlight real-world applicability with heterogeneous patient outcomes: (1) Covid-19 dataset of Brazilian patients [38], (2) Prostate cancer datasets from both the US [39] and UK [40], (3) Support dataset of seriously ill hospitalized adults [41], (4) Fetal state dataset of cardiotocography [42]. We describe the datasets in greater detail in Appendix B, along with further experimental details. We observe similar performance across different datasets, but given the space limitations, we typically show pertinent results for a single dataset, and include results for the other datasets in Appendix C.", |
| "4.1 (P1) Robust data characterization": "Robustness to variation. As per P1, we desire that Data-IQ identifies subgroups in a manner robust to variation across different models. This would allow a practitioner to obtain consistent insights about their data even when using different model architectures/parameterizations. When comparing the different methods from Sec. 2,\nwe note that each method has its own specific metric used to characterize examples (see Appendix B). To assess robustness to variation, we compare the consistency of the different characterization metrics, evaluated on models with different architectures/parameterizations. All models are trained to convergence, with early stopping on a validation set.\n2 https://github.com/seedatnabeel/Data-IQ 3 https://github.com/vanderschaarlab/Data-IQ\nQuantitatively, we compute the Spearman rank correlation between all model combinations, see Fig. 3. We observe that Data-IQ is the most consistent and robust to variation across different models, having the highest score on all datasets, satisfying P1. Further, the baseline methods themselves are also not consistent in performance ordering across datasets, which is undesirable. Ultimately, the robustness means practitioners can feel confident in the consistency of data insights, derived using Data-IQ.\nTo further compare Data-IQ and Data Maps[24], we examine 3 distinct models that achieve similar performance on the Covid-19 [38] tabular dataset, and we produce a characterization of the training set using each model in Fig. 4. We note that Data Maps groups can be recovered from (3) by replacing the aleatoric uncertainty val from Data-IQ with its epistemic counterpart vep. The y-axis is the same for both methods and corresponds to P̄(x). The x-axis corresponds to val(x) for Data-IQ and to vep(x) for Data Maps. Each model is assigned a color in Fig. 4. We note three things 1⃝ Data-IQ’s characterization of the data is significantly more stable across models. 
2⃝ Linked to the points in Sec. 3, Data Maps’ high- and low-confidence examples in fact have high epistemic uncertainty vep, which can lead to incorrect conclusions when attempting to use Data Maps to characterize data. 3⃝ Data-IQ always distributes the data around a bell shape, which standardizes its interpretation. We provide a theoretical analysis explaining this bell-shape observation in Appendix A.\nData-IQ: Neural Networks vs other model classes. Data-IQ can be used with any ML model trained in stages, linked to P4: Plug and Play. Methods such as XGBoost, LightGBM and CatBoost are widely used by practitioners on tabular data, often more so than neural networks [7]. Ideally, based on P1, we desire that the characterization of examples be consistent for similarly performing models, irrespective of whether the model is a neural network or an XGBoost model.\nTo assess the robustness of both Data-IQ and Data Maps, we train neural network, XGBoost, LightGBM and CatBoost models to achieve the same performance and then perform the characterization for all models. We can clearly see in Fig. 5 (Support) that Data-IQ has a similar characterization across all four models. In contrast, for Data Maps, the characterizations are significantly different for the different model classes. The implication of this result is that, by capturing the uncertainty inherent to the data (aleatoric uncertainty), Data-IQ produces a more consistent and stable characterization of the data itself. In particular, this highlights that Data-IQ characterizes the data in a manner that is not as sensitive to the choice of model as Data Maps. For more, see Appendix C.\nData insights from subgroups. Given the distinct differences between the subgroups, we seek to understand what factors make these subgroups different and how they can provide insight into the dataset. Such insights are especially useful in clinical settings. 
Results for the prostate cancer dataset are illustrated in Fig. 6 (with other datasets in Appendix C). To visualize the different groups of patients within each subgroup, we cluster each subgroup (Easy, Ambiguous and Hard) using a Gaussian Mixture Model (GMM), similar to [5], selecting the optimal number of clusters based on the Silhouette score. We assess cluster quality vs alternatives in Appendix C.\nIn general, across datasets, the subgroups are: (1) Easy: severe patients with a death outcome, and less severe patients with a survival outcome. (2) Ambiguous: patients with similar features, but different outcomes. This could suggest that the features we have at hand are insufficient to separate the differences in outcomes. (3) Hard: severe patients with a survival outcome, and less severe patients with a death outcome, i.e. the opposite of the expected outcomes, due to randomness in the outcomes.", |
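The per-subgroup clustering step described above can be sketched as follows, using scikit-learn's `GaussianMixture` and Silhouette score. The synthetic blobs stand in for the examples of one Data-IQ subgroup; the range of candidate cluster counts is an assumption.

```python
# Sketch: cluster one subgroup with a Gaussian Mixture Model, picking the
# number of clusters by Silhouette score (as done for Fig. 6).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the feature vectors of a single subgroup.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 6):  # candidate cluster counts (illustrative range)
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels
```

In the paper this is run separately on the Easy, Ambiguous and Hard subgroups, and the resulting clusters are inspected for clinically meaningful patient profiles.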
| "4.2 (P2) Principled data collection": "Principled feature acquisition. As per Sec.4.1, the Ambiguous subgroup has examples with similar features, yet different outcomes. Recall that this case of ambiguity in the tabular setting is very different from ambiguity in other modalities, such as images. The ambiguity is due to insufficient features to adequately separate the examples. We link this to the concept that the Ambiguous subgroup has a high aleatoric uncertainty that is irreducible, even if we collect more data examples. Rather, aleatoric uncertainty can only be reduced by acquiring better features [27]. We leverage this idea and show that Data-IQ’s example characterization provides a principled approach to assessing the benefit of acquiring a specific feature.\nThis is different from feature selection, where all features are present and we select the most “important feature”. Additionally, this is different from active learning which quantifies the value of acquiring examples, not features.\nWith the above in mind, a valuable feature should decrease the example ambiguity (i.e. aleatoric uncertainty). Hence, a decrease in the proportion of Ambiguous examples can serve as a proxy for the feature’s potential value to the dataset. Understanding the value of features is useful in settings such as healthcare, where feature acquisition comes at a cost. To showcase the potential, we construct a semi-synthetic experiment, where we rank sort the features based on correlation with the target.\nWe then train different models, where we sequentially “acquire” features of increasing value (based on correlation). Fig. 7. shows results for the Support dataset. For Data-IQ, Fig 7 (a) shows that as we acquire “valuable” features, the proportion of the Ambiguous subgroup drops, whilst the Easy subgroup increases, with significant changes for the important features. 
This shows that Data-IQ’s subgroup characterization can be used to quantify a feature’s value by its ability to decrease ambiguity. In contrast, Data Maps, Fig. 7 (c), shows minimal response to feature acquisition, suggesting it may not be sensitive enough to capture a feature’s value.\nFurther, for Data-IQ we see that the examples that remain Ambiguous after features are collected maintain a consistent aleatoric uncertainty. This is desired, as it demonstrates that for those examples which remain Ambiguous, the collected features are indeed not informative enough to reduce their inherent (aleatoric) uncertainty, i.e. those remaining still need better features (see Fig. 7 (b)). For Data Maps, in contrast, the added features in fact increase the variability for all subgroups, making it harder to stratify (see Fig. 7 (d)). This links to the fact that Data Maps subgroups cannot capture the value of the acquired features. We further show experimentally in Appendix C that for Ambiguous examples, it is not simply a case of increasing the size of the dataset (i.e. more examples). In fact, this can increase the proportion of Ambiguous examples due to the increased probability of feature collisions as the dataset size grows. Ultimately, this motivates the usefulness of principled feature acquisition (which Data-IQ can guide) as a way to decrease dataset ambiguity.\nPrincipled dataset comparison. Extending beyond the feature level, an understudied scenario involves systematically selecting between entire datasets in two cases: (1) purchasing data from data markets [43–45] and (2) organizations where the data is siloed, with lengthy access processes [46, 47]. In both cases, synthetic versions of the real dataset have begun to be used [46]. For now, we ignore privacy concerns and focus on data fidelity and quality, which compares the real and synthetic datasets using statistical measures. However, as per [48], the conclusions can vary across different metrics. 
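Returning to the feature-acquisition experiment: a minimal, self-contained sketch of rank-sorting features by absolute correlation with the target, "acquiring" them one by one, and tracking the proportion of Ambiguous examples. The dataset, the scikit-learn GBDT, and the aleatoric-uncertainty threshold are all illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the semi-synthetic feature-acquisition experiment: features are
# rank-sorted by |correlation| with the target, acquired sequentially, and the
# fraction of high-aleatoric-uncertainty (Ambiguous) examples is tracked.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)

# Rank-sort features by absolute correlation with the target (most valuable first).
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
order = np.argsort(corrs)[::-1]

def ambiguous_fraction(X_sub, y, al_thresh=0.15):
    """Fraction of examples whose mean per-stage p(1-p) exceeds a (heuristic) threshold."""
    model = GradientBoostingClassifier(n_estimators=30, random_state=0).fit(X_sub, y)
    probs = np.stack([p[np.arange(len(y)), y]
                      for p in model.staged_predict_proba(X_sub)])
    aleatoric = (probs * (1 - probs)).mean(axis=0)
    return float((aleatoric >= al_thresh).mean())

# "Acquire" one additional feature at a time, retraining each time.
fractions = [ambiguous_fraction(X[:, order[:k]], y) for k in range(1, 9)]
```

Under the paper's hypothesis, `fractions` should tend to shrink as genuinely informative features are added, while uninformative features leave it roughly unchanged.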
In practice, competing “synthetic” datasets can be generated by different ML models or vendors. Thus, while they model the same underlying distribution, depending on the process used, one version might be superior. We now ask: could Data-IQ permit us to systematically select between synthetic datasets? We consider the setting where the real data is not accessible. Hence, we cannot use existing evaluation metrics, yet we still wish to compare the synthetic datasets (e.g. comparing vendors).\nTable 1: Comparison of accuracy performance rank and (dataset quality). The synthetic dataset with better quality (↑ Easy) produces the best real-data test performance.\nDataset | (V1) CTGAN | (V2) Gaussian Copula\nProstate | Rank 1 (63% Easy) | Rank 2 (30% Easy)\nCovid | Rank 1 (70% Easy) | Rank 2 (63% Easy)\nSupport | Rank 1 (59% Easy) | Rank 2 (38% Easy)\nFetal | Rank 2 (40% Easy) | Rank 1 (51% Easy)\nWe simulate this scenario by generating synthetic data using 2 different models, representing 2 synthetic data vendors: (V1) CTGAN [49] and (V2) Gaussian Copula [50]. We then characterize the dataset subgroup proportions using Data-IQ. We hypothesize that datasets with greater proportions of Easy examples generalize better. As is common, we validate fidelity by training with synthetic data and testing with real data [51, 52], where the best-fidelity data produces the best model performance on real data (test set). Table 1 shows that the datasets with the highest quality as measured by Data-IQ indeed produce the best performance on real test data (i.e. Rank 1). Further, it shows that the same “vendor” does not always produce the best dataset, highlighting the value of comparative assessment. Ultimately, these two aspects demonstrate that when the real data is unavailable, Data-IQ is a useful tool in the hands of practitioners wishing to assess data quality, especially when selecting between different datasets.\n4.3 (P3) Reliable Model Deployment\nLess is more: data sculpting based on subgroups. 
What role do the data subgroups, specifically the Ambiguous examples, play in ensuring model generalization? Using multi-country prostate cancer data, we train a baseline model on US data (SEER) and assess generalization when deployed on patients in the UK (CUTRACT), and vice versa. In Fig. 8, we see that test-time generalization performance monotonically increases as we decrease the proportion of Ambiguous training data (see Appendix C.13 for absolute numbers). Ultimately, this illustrates the value of sculpting the training dataset, by removing ambiguous examples, as a way to improve the reliability of a deployed model.\nGroup-DRO: not a silver bullet when used in tabular settings. Once underperforming subgroups are identified, it is generally assumed that methods such as Group Distributionally Robust Optimization (Group-DRO) [53] can be applied to improve model performance and robustness. We compare Group-DRO with groups identified by Data-IQ, and as baselines: George [5] and Just-Train-Twice (JTT) [1]. As per the previous experiments, the largest underperforming group (in proportion) is the Ambiguous examples. Similar to the literature, we evaluate the performance change from a baseline model after Group-DRO is applied.\nThe results in Table 2 show that Group-DRO using Data-IQ’s groups both improves overall performance and improves performance on the Ambiguous group, whereas for the other baselines the performance actually degrades.\nNevertheless, while using Data-IQ can boost performance, it is evident that simply applying Group-DRO is not a silver bullet to equalize subgroup performance, given the sometimes small improvements. The rationale is that our tabular setting differs from the spurious-correlation setting in computer vision, where Group-DRO typically shines. 
Ultimately, we believe, based on the feature-acquisition results, that in tabular settings practitioners would be well served by acquiring better features to improve performance and reduce ambiguity.\nSubgroup-informed usage of uncertainty estimation. Uncertainty estimation methods are essential in safety-critical areas such as healthcare [54], yet they are typically assessed on average. We ask the question: since subgroups have different performance properties, are uncertainty estimates equally reliable for each subgroup? As done in the literature [55, 56], if an uncertainty estimate is reliable and informative of predictive performance, it can be used to defer “uncertain examples”. This is done by rank-sorting examples based on uncertainty and thresholding proportions of tolerated uncertainty [55–58]. Ideally, as the threshold proportion of examples increases (i.e. inclusion of more uncertain examples), we should see a monotonic decrease in accuracy. We assess this by training a Bayesian Neural Network (BNN) [59] to obtain uncertainty estimates. We then compute the performance across different threshold proportions τ ∈ {0.1, 0.2, . . . , 1} for the Ambiguous subgroup specifically, and, as is commonly done, across the entire dataset (i.e. on average).\nFig. 9 shows a specific example, wherein the average over all examples exhibits the expected monotonically decreasing relationship. However, the Ambiguous examples categorized by Data-IQ, contrary to expectations, show an increase in accuracy as more uncertain examples are included. This suggests that the uncertainty estimates in this case are not as informative of predictive performance for the Ambiguous examples. This shows the potential for practitioners to use Data-IQ at deployment time to understand which examples require auditing before deferring, as the “average” monotonically decreasing behavior, based on the uncertainty estimates, may not always hold. 
Ultimately, the result further highlights how subgroup characterization via Data-IQ could assist practitioners in unmasking unreliable performance, not evident on average.", |
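The deferral analysis above can be sketched with simulated uncertainties: rank-sort examples by uncertainty, then for each tolerated proportion τ keep the τ least-uncertain fraction and compute accuracy. The simulated relationship between uncertainty and correctness is an illustrative assumption, standing in for a trained BNN.

```python
# Sketch of the uncertainty-based deferral curve: with informative uncertainty
# estimates, accuracy should decrease monotonically as more uncertain examples
# are included (the "average" behavior in Fig. 9).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
uncertainty = rng.uniform(0.0, 1.0, size=n)
# Simulated predictions that are wrong more often when uncertainty is high.
correct = rng.uniform(size=n) > uncertainty * 0.8

order = np.argsort(uncertainty)  # least uncertain first
accuracy_at_tau = {}
for tau in [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]:
    kept = order[: max(1, int(tau * n))]  # keep the tau least-uncertain fraction
    accuracy_at_tau[tau] = float(correct[kept].mean())
```

Running the same computation restricted to a Data-IQ subgroup (e.g. Ambiguous) is what reveals whether this monotonic pattern actually holds there.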
| "5 Discussion": "In this paper, we introduce Data-IQ, a systematic framework that can be used with any ML model with checkpoints, to characterize examples into subgroups with respect to the outcome. Through several experiments, we demonstrate that the usage of aleatoric uncertainty, which captures properties more inherent to the data, is indeed more principled, being more robust to variation across models and/or parameterizations. Data-IQ’s consistency is unmatched by any compared baseline. Data-IQ should not automate and replace the intuition of a data scientist. Rather, as we have demonstrated, Data-IQ should serve as a systematic “data-centric ML” tool that assists and empowers data scientists with the “data” work at training time, whilst also guiding reliable model usage at deployment time.\nData-IQ beyond tabular settings. The main paper has primarily assessed the utility of Data-IQ in the tabular setting. That said, in Appendices C.7 and C.8, we evaluate the utility of Data-IQ on text data (NLP) and images (computer vision) respectively.\nLimitations and future opportunities. 1⃝ While Data-IQ characterizes examples; the current formulation does not allow us to understand which attributes are responsible for the characterization per example. This would be an interesting extension around dataset explainability, allowing practitioners to better probe their data. 2⃝ In high-stakes settings such as healthcare, to mitigate possible adverse effects (e.g. difficulty of Easy vs Hard), Data-IQ should be used with a “human-in-the-loop”, allowing experts to complement and validate findings with domain knowledge.", |
| "Acknowledgments": "The authors are grateful to Zhaozhi Qian, Yuchao Qin, Evgeny Saveliev and the anonymous NeurIPS reviewers for their useful comments & feedback. Nabeel Seedat is supported by the Cystic Fibrosis Trust, Jonathan Crabbe by Aviva, Ioana Bica by the Alan Turing Institute, EPSRC grant EP/N510129/1 and Mihaela van der Schaar by the Office of Naval Research (ONR), NSF 1722516.", |
| "Reviewer Summary": "Reviewer_3: This paper proposed Data-IQ, a framework compatible with any machine learning (classification) models whose training is conducted in stages (iterations/epochs), to assess the quality of data samples in tabular format by categorizing them into Easy, Ambiguous, Hard groups based on the aleatoric uncertainty of the data samples. The paper demonstrated the utility of Data-IQ in guiding feature acquisition, comparing training datasets and improving model generalization through experiments and analysis on multiple real-world datasets.\n\nReviewer_4: The authors make a strong case that population level metrics for model performance may not be representative of model performance on subgroups. The authors proposed DataIQ a framework to phenotype and subgroup patients based on model uncertainity on an individual level. The authors apply their proposed technique on multiple real world healthcare datasets and demonstrate ability of the model to 1) identify robust data 2) Data collection 3) Model deployment.\n\nReviewer_5: The authors present an approach that given a model can identify 3 general sub-groups in the dataset that differentiates the model performance across the dataset. In particular they analyzed tabular data to find such sub-groups that are useful for meaningful data-driven AI and satisfies requirements from an MLOps perspective. They conducted experiments and demonstrated various aspects of the proposed method. Overall, their proposed method aims at informing reliable model usage.\n\nReviewer_6: The paper proposes a framework to identify examples in the datasets that a classifier tends to distinguish correctly, wrongly, and randomly. The paper decomposes the accuracy of a given example into a summation of epistemic uncertainty and aleatoric uncertainty. 
The paper then uses aleatoric uncertainty to classify examples into the three aforementioned categories, as opposed to existing work that uses epistemic uncertainty as the criterion." |
| } |