papers / 20240123 /2301.02424v2.json
{
"title": "Conformal Loss-Controlling Prediction",
"abstract": "Conformal prediction is a learning framework controlling prediction coverage of prediction sets, which can be built on any learning algorithm for point prediction. This work proposes a learning framework named conformal loss-controlling prediction, which extends conformal prediction to the situation where the value of a loss function needs to be controlled. Different from existing works about risk-controlling prediction sets and conformal risk control with the purpose of controlling the expected values of loss functions, the proposed approach in this paper focuses on the loss for any test object, which is an extension of conformal prediction from miscoverage loss to some general loss. The controlling guarantee is proved under the assumption of exchangeability of data in finite-sample cases and the framework is tested empirically for classification with a class-varying loss and statistical postprocessing of numerical weather forecasting applications, which are introduced as point-wise classification and point-wise regression problems. All theoretical analysis and experimental results confirm the effectiveness of our loss-controlling approach.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Prediction sets convey uncertainty or confidence information for users, which is more preferred than prediction points, especially for sensitive applications such as medicine, finance and weather forecasting [1 ###reference_1###] [2 ###reference_2###] [3 ###reference_3###]. One example is constructing prediction intervals with confidence for regression problems, where the statistical guarantee is expected such that the true labels are covered in probability [4 ###reference_4###]. Nowadays, many researches have been proposed to build set predictors. Bayesian methods [5 ###reference_5###] and Gaussian process [6 ###reference_6###] are straightforward ways of producing prediction sets based on posterior distributions. However, their prediction sets can be misleading if the prior assumptions are not correct, which is often the case since the prior is usually unknown in applications [7 ###reference_7###] [8 ###reference_8###]. Other statistical methods such as bootstrap-based methods [9 ###reference_9###] and quantile regression [10 ###reference_10###] are also able to output prediction sets for test labels, but their coverage guarantees can only be obtained in the asymptotic setting, and the prediction sets may fail to cover the labels frequently in finite-sample cases. Different from these works, conformal prediction (CP), a promising non-parametric learning framework aiming to provide reliable prediction sets, can provide the finite-sample coverage guarantee only under the assumption of exchangeability of data samples [11 ###reference_11###]. This property of validity has been proved both theoretically and empirically in many works and applied to many areas [12 ###reference_12###] [13 ###reference_13###]. 
In addition, much research extends CP to more general settings, such as conformal prediction for multi-label learning [14] [15], functional data [16] [17], few-shot learning [18], distribution shift [19] [20] and time series [21] [22].\nHowever, the research on set predictors mentioned above mainly makes promises about the coverage of prediction sets, i.e., it only controls the miscoverage loss of set predictors, which cannot be applied to the many other applications that require controlling general losses. For example, consider classifying MRI images into several diagnostic categories [23], where different categories lead to different consequences. In this setting, the loss of the true label not being included in the prediction set should depend on the class itself, which is the problem of classification with a class-varying loss. Another example is tumor segmentation [24]. Instead of making prediction sets that merely cover the tumor pixels, one may care more about controlling other losses such as the false negative rate. Other practical settings include controlling the projective distance for protein structure prediction, controlling a hierarchical distance for hierarchical classification and controlling the F1-score for open-domain question answering [23] [24]. In these applications, prediction sets with only a coverage guarantee are not useful, as they are not constructed with these general losses in mind.\nTo tackle this issue, two works extending the finite-sample coverage guarantee of CP have been proposed recently. One is the work on conformal prediction sets with limited false positives (CPS-LFP) [25]. 
It employs DeepSets [26] to estimate the expected value or the cumulative distribution function of the number of false positives, and then uses calibration data to control the number of false positives of prediction sets. Conformal risk control (CRC) [24] extends CP to prediction tasks of controlling the expected value of a general loss, based on finding the optimal parameter for nested prediction sets. The spirit is to employ calibration data to obtain an upper bound on the expected value of the loss function at hand and thus control the expected value for the test object; this main idea originates from the pioneering work on risk-controlling prediction sets (RCPS) [23]. CRC and RCPS aim to control the expected value, rather than the value, of a general loss for set predictors. By contrast, CPS-LFP can control the value of the loss related to false positives, but it is not general enough.\nIn some applications, controlling the value of a general loss can be preferable to controlling its expected value, since one may only care about the loss value for a specific test object, just like the coverage guarantee made by CP and the -FP validity achieved by CPS-LFP. Therefore, this paper extends CP to the situation where the value of a general loss needs to be controlled, which, to the best of our knowledge, has not been considered in the literature. Our approach is similar to CRC, with the main difference being that we focus on finding the optimal parameter for nested prediction sets to control the value, rather than the expectation, of the loss. 
Therefore, we also build on the inductive conformal prediction [27] or split conformal prediction [28] process, like CRC.\nRecall that inductive conformal prediction makes the coverage guarantee as follows,\nwhere is the significance level preset by users, is the set predictor made by CP based on calibration data , is the test feature-response pair, and the randomness is from both and .\nBy comparison, conformal loss-controlling prediction (CLCP), the learning framework proposed in this paper, provides the controlling guarantee as follows,\nwhere is a loss function satisfying some monotonic conditions as in [24], is the preset level of loss, and is a set predictor usually constructed from an underlying algorithm and a parameter . The optimal parameter is obtained based on the two preset levels and the calibration data. The controlling guarantee requires both levels to be chosen by users, which is similar to the setting in [23], i.e., CLCP guarantees that the prediction loss is not greater than the preset loss level with high probability when the significance level is small. If the loss is defined based on false positives for multi-label classification, the controlling guarantee above can be seen as the ()-FP validity defined in Definition 4.2 of [25].\nWe prove the controlling guarantee in the distribution-free, finite-sample setting under the assumption of exchangeability of data samples. The main idea is to find the parameter that makes a conservative quantile of the loss values on calibration data not greater than the preset loss level, inspired by CRC, which instead constrains the mean of the loss values. Since the set predictors and loss functions used in CLCP have the same properties as those used in CRC, CLCP can also be applied to many applications concerning controlling general losses. 
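The contrast between the two selection rules can be sketched numerically. Everything below is a synthetic illustration under our reading of the two rules; the loss matrix, the grid and the conservative corrections are assumptions, not code from either paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration losses: losses[i, j] is the loss of calibration
# sample i when the nested set predictor uses parameter lambda_grid[j].
# Rows are non-increasing in lambda, as the nesting property requires,
# and the loss is bounded by B = 1.
n_cal = 500
lambda_grid = np.linspace(0.0, 1.0, 101)
losses = rng.uniform(0, 1, size=(n_cal, 1)) * (1.0 - lambda_grid)

alpha, delta = 0.1, 0.1

# CRC-style rule: smallest lambda whose conservative MEAN bound is <= alpha.
mean_bound = (losses.sum(axis=0) + 1.0) / (n_cal + 1)    # add B for the test point
lam_crc = lambda_grid[np.argmax(mean_bound <= alpha)]

# CLCP-style rule: smallest lambda whose conservative (1 - delta) QUANTILE,
# i.e., the ceil((n+1)(1-delta))-th smallest calibration loss, is <= alpha.
k = int(np.ceil((n_cal + 1) * (1 - delta)))
order_stat = np.sort(losses, axis=0)[k - 1]
lam_clcp = lambda_grid[np.argmax(order_stat <= alpha)]

print(lam_crc, lam_clcp)
```

On such monotone losses the quantile-based rule typically selects a larger, more conservative parameter than the mean-based rule, since it must tame the tail of the loss distribution rather than its average.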
These applications include not only classification and image segmentation, but also fields such as graph signal processing [29] [30], for example, protein structure prediction.\nThe proposed CLCP is a novel learning framework compared to existing research. Different from approaches aiming to control the value of the miscoverage loss, CLCP is a more general approach for controlling the value of a general loss. Besides, CLCP can be widely used in many situations, whereas CPS-LFP is specifically designed for controlling the loss related to false positives. CLCP also differs from CRC and RCPS, as their purpose is to control the expected value instead. Therefore, in the experimental section, we concentrate on designing experiments to verify the theoretical conclusion for different applications, as the idea of controlling general losses for set predictors is original. To be specific, we test our proposed CLCP on classification with a class-varying loss introduced in [23], and on postprocessing of numerical weather forecasts, which we treat as point-wise classification and point-wise regression problems. The experimental results empirically confirm the theoretical guarantee we prove in this paper.\nIn summary, the main contributions of this paper are:\nA learning framework named conformal loss-controlling prediction (CLCP) is proposed for controlling the prediction loss for the test object. The approach is simple to implement and can be built on any machine learning algorithm for point prediction.\nThe controlling guarantee is proved mathematically for finite-sample cases with the exchangeability assumption, without any further assumption on the data distribution.\nThe controlling guarantee is empirically verified on classification with a class-varying loss and on weather forecasting problems, which confirms the effectiveness of CLCP.\nThe rest of this paper is organized as follows. 
Section II reviews inductive conformal prediction and conformal risk control. Section III introduces conformal loss-controlling prediction and its theoretical guarantee. Section IV presents experiments testing the proposed method, and conclusions are drawn in Section V."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Inductive Conformal Prediction and Conformal Risk Control",
"text": "This section reviews inductive conformal prediction and conformal risk control. Throughout this paper, denotes data drawn exchangeably from on , where is the calibration dataset and is the test object-response pair. We use lower-case letter to represent the realization of .\nThe set-valued function and loss function considered in this paper are the same as those in [24 ###reference_24###] and [23 ###reference_23###], which we formally introduce as follows.\nLet be a set-valued function with a parameter , where represents some space of sets and is the set of real numbers. Taking single-label classification for example, can be the power set of . For binary image segmentation, can be equal to as the space of all possible results of image segmentation, where the sets here stand for all of the pixels of positive class for the image.\nWe also introduce the nesting property for prediction sets and losses as in [23 ###reference_23###] as follows. For each realization of input object , we assume that satisfies the following nesting property:\nFurthermore, with and being two subsets of , we assume that is a loss function respecting the following nesting property for each realization of response :\nwhere is the upper bound of the loss function."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "II-A Inductive Conformal Prediction",
"text": "Inductive conformal prediction (ICP) is a computationally efficient version of the original conformal prediction approach. It starts with any measurable function named nonconformity measure and obtains nonconformity scores as\nfor .\nThen, with the exchangeable assumption and a preset , one can conclude that\nwhere is the quantile of [19 ###reference_19###]. Therefore, the prediction set made by ICP is\nwhich satisfies\nThe nonconformity measure is often defined based on a point prediction model learned from some other training samples, each of which is also drawn from .\nHere is an example of constructing prediction sets with CP. For a classification problem with classes, one can first train a classifier with the th output being the estimation of the probability of the th class, and calculate the nonconformity scores as\nwhere is the th output of , if stands for the th class. Therefore, the corresponding prediction set for an input object is\nwhich indicates that if the estimated probability of th class is not less than ."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "II-B Conformal Risk Control",
"text": "Different from conformal prediction, CRC starts with a set-valued function with the nesting property, whose approach is inspired by nested conformal prediction [31 ###reference_31###] and was first proposed in the researches about risk-controlling prediction sets.\nAssume one has a way of constructing a set-valued function with the nesting property of formula (1). Given a loss function with the nesting property of formula (2), the purpose of CRC is to find such that\ni.e., the expected loss or the risk is not greater than .\nTo do so, CRC first calculates as\nwith the fact that is a monotone decreasing function of based on the nesting properties. Then, CRC searches for using the following equation,\nwhere is an estimation of the risk on calibration data and is introduced to make the estimation not overconfident.\nThese two steps of CRC are too simple that one may surprise about its theoretical conclusion that with the assumption of exchangeability of data samples, the prediction set\n obtained by CRC satisfies formula (3), which has been also proved empirically in [24 ###reference_24###]. CRC extends CP from controlling the expected value of miscoverage loss to some general loss, which can be applied to the cases where is beyond real numbers or vectors, such as images, fields and even graphs.\nAfter tackling the theoretical issue, the problem for CRC is how to construct . Here, we also give an example of a classification problem with classes. In fact, with the same notations of the example in Section II-A, CRC can construct the prediction set as\nTherefore, as long as satisfies formula (2), such as is the indicator of miscoverage, CRC guarantees to control the risk as formula (3)."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Conformal Loss-Controlling Prediction And Its Theoretical Analysis",
"text": "This section introduces the approach of CLCP and its theoretical analysis. CLCP also has two steps like CRC, and the main difference between them is that CLCP focuses on whether the estimation of the quantile of the losses is not greater than while CRC concentrates on whether the mean of the losses not greater than . The controlling of the quantile of the losses makes CLCP able to control the value of a general loss by employing the probability inequation derived from the exchangeability assumption, which is also employed by ICP if the loss is seen as the nonconformity score.\nSuppose one has a way of constructing a set-valued function with the nesting property of formula (1), which can be the same as that used in CRC. Here, we assume that the parameter is selected from a discrete set , such as from to with a step size , which avoids us from the assumption of right continuous for the loss function in theoretical analysis, and is also reasonable since we actually search for with some step size in practice [24 ###reference_24###] [23 ###reference_23###]. Besides, the latest paper about risk-controlling prediction also makes this discrete assumption for general cases [32 ###reference_32###]. After determining and , CLCP first calculates on calibration data as formula (4). Then, for any preset and , CLCP searches for such that\nwith being the quantile of . 
The approach of CLCP is summarised in Algorithm 1, which is easy to implement.\nNext, we introduce the definition of -loss-controlling set predictors and then prove our theoretical conclusion about CLCP.\nGiven a loss function and a random sample , a random set-valued function whose realization is in the space of functions is a -loss-controlling set predictor if it satisfies that\nwhere the randomness is both from and .\nAfter all these preparations, we can prove in Theorem 1 that constructed by CLCP is a -loss-controlling set predictor.\nSuppose are data drawn exchangeably from on , is a set-valued function satisfying formula (1) with the parameter taking values from a discrete set , is a loss function satisfying formula (2) and is defined as in formula (4). For any preset , if also satisfies the following condition,\nthen for any , we have\nwhere is defined as in formula (5).\nLet be the quantile of , and\ndefine as\nSimilarly, let be the quantile of , and\nwe have\nAs and formula (6) holds, and are well defined.\nSince is the upper bound of , by definition, we have\nwhich leads to\nas and satisfy the nesting properties of formulas (1) and (2).\nSince is dependent on the whole dataset , are exchangeable variables, which leads to\nas is just the corresponding quantile (see the proof of Lemma 1 in [19]).\nCombining the definition of , formulas (8) and (9), we have\nwhich completes the proof.\n\u220e\nAt the end of this section, we show that CP can be seen as a special case of CLCP from the following viewpoint. Suppose is constructed by a nonconformity score , which is defined as\nand is the miscoverage loss such that\nwhere is the indicator function. In this case, can only be or as the loss can only be these two numbers. Besides, only is meaningful, which means that one wants to control the miscoverage. For CLCP, let be an arithmetic sequence whose common difference, minimum and maximum are , and respectively, and set . 
By definition, can be written as\nwhere is the nonconformity score of the th calibration sample for CP.\nFor comparison, referring to [24], the optimal parameter for CP is\nTherefore, if for each , we have\nwhich implies that the prediction sets of CP and CLCP are nearly the same if the step size is small enough. In summary, if the set predictor and loss have these special forms and the search grid includes the upper and lower bounds of the nonconformity scores with a step size small enough to be ignored, CP can be seen as a special case of CLCP."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Experiments",
"text": "This section conducts the experiments to empirically test the approach of CLCP. First, we build CLCP for the classification problem with a class-varying loss introduced in [23 ###reference_23###]. Then, we focus on two types of weather forecasting applications, which can be seen as point-wise classification and point-wise regression problems respectively. All experiments were coded in Python [33 ###reference_33###]. The statistical learning methods used in Section IV-A were implemented using Scikit-learn [34 ###reference_34###] and the deep learning methods used in Section IV-B and Section IV-C were implemented with Pytorch [35 ###reference_35###]."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "IV-A CLCP for classification with a class-varying loss",
"text": "We collected binary or multiclass classification datasets from UCI repositories [36 ###reference_36###] whose information is summarized in Table I. The problem is to make the prediction sets of labels controlling the following loss\nwhere is the loss for being not in the prediction set . The loss for each label is generated uniformly on like [23 ###reference_23###]. Support vector machine (SVM) [37 ###reference_37###], neural network (NN) [38 ###reference_38###] and random forests (RF) [39 ###reference_39###] were employed as the underlying algorithms separately to construct prediction sets based on CLCP. The prediction set is constructed as\nwhere is the estimated probability of the observation being th class by the corresponding underlying algorithm. For each dataset, we used of the data for testing and and of the remaining data for training and calibration respectively. Based on the training data, we selected the meta-parameters with three-fold cross-validation and used the optimal meta-parameters to train the classifiers. The regularization parameter of SVM was selected from , and the learning rate and the epochs of NN were selected from and . The number of trees of RF were selected from and the partition criterion was either gini or entropy. After training, we used the trained classifiers and the calibration data to search for with Algorithm 1 and construct the final set predictors. All of the features were normalized to by min\u2013max normalization and for each dataset, the experiments were conducted times and the average results were recorded.\nThe bar plots in Fig. 1 and Fig. 2 show the experimental results for public datasets with and . The results in Fig. 1 concern about the frequency of the prediction losses being greater than on test set, which is the estimated probability of\nand should be near or lower than empirically due to formula (7).\nThe bar plots of Fig. 
1 demonstrate that the frequency of the prediction losses being greater than is near or below , which verifies the conclusion of Theorem 1.\nThe bar plots of Fig. 2 show the average sizes of prediction sets for different , describing the informational efficiency of the prediction sets. Changing can effectively change the average size of prediction sets and changing may slightly change average size (such as the results for wine-quality-red). Although many prediction sets are meaningful with average sizes being near , the prediction sets for the dataset contrac may be not useful, since no matter how to change and , the average sizes of the prediction sets are all near or above , whereas the number of classes of contrac is . Thus, how to construct efficient prediction sets in the learning framework of CLCP is worth exploring for further researches.\nCombining Fig. 1 and Fig. 2, we observe that different classifiers can perform differently for different datasets, which indicates that the underlying algorithm affects the performance and the model selection approach is necessary for CLCP.\n###figure_1### ###figure_2###"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "IV-B CLCP for high-impact weather forecasting",
"text": "###figure_3### ###figure_4### ###figure_5### The remaining experiments apply CLCP to weather forecasting problems. Here we concentrate on postprocessing of the forecasts made by numerical weather prediction (NWP) models [40 ###reference_40###] [41 ###reference_41###]. NWP models use equations of atmospheric dynamics and estimations of current weather conditions to do weather forecasting, which is the mainstream weather forecasting technique nowadays especially for forecasting beyond hours. Many errors affect the performance of NWP models, such as the estimation errors of initial conditions and the approximation errors of NWP models, leading to the research topic about postprocessing the forecasts of NWP models. Most postprocessing methods are built on some learning process, which takes the forecasts of NWP models as inputs and the observations of weather elements or events as outputs.\nIn this paper, we use CLCP to postprocess the ensemble forecasts with the control forecast and perturbed forecasts issued by the NWP model from European Centre for Medium-Range Weather Forecasts (ECMWF) [42 ###reference_42###], which are obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset [43 ###reference_43###]. We focus on -m maximum temperature and minimum temperature between the forecast lead times of nd hour and th hour with the forecasts initialized at UTC. The forecast fields are grided with the resolution of and the corresponding label fields with the same resolution are extracted from the ERA5 reanalysis data [44 ###reference_44###].\nThe area ranges from E to E in longitude and from N to N in latitude, covering the main parts of North China, East China and Central China, whose grid size is . 
The ECMWF forecast data and ERA5 reanalysis data are collected from to ( years).\nWe first consider high-impact weather forecasting, which is to forecast whether high-impact weather exists at each grid and can be seen as a point-wise classification problem, or an image segmentation problem in computer vision terms. The high-impact weather we consider is whether the -m maximum temperature is above a given threshold or the -m minimum temperature is below another threshold at each grid. These two cases are treated as high temperature weather and low temperature weather in China, leading meteorological observatories to issue high temperature warnings or low temperature warnings respectively.\nThe prediction sets and the loss function used for high-impact weather forecasting are the same as those for image segmentation in [24].\nTaking the ensemble forecast fields of the NWP model as input, the corresponding label is the set of grids having high-impact weather, which can be seen as a segmentation problem for high-impact weather. Therefore, we first train a segmentation neural network whose output is the estimated probability of each grid having high-impact weather. Then the set-valued function can be constructed as\nand the loss function is\nwhich measures the ratio of high-impact grids for which the prediction set fails to issue a warning. We use CLCP with the prediction set and the loss function above to do high temperature and low temperature forecasting respectively."
},
{
"section_id": "4.2.1",
"parent_section_id": "4.2",
"section_name": "IV-B1 Dataset for high temperature forecasting",
"text": "The reanalysis fields of -m maximum temperature were collected from ERA5 and the label fields were calculated based on whether the -m maximum temperature is above . To make the loss function take finite values, we only collected the data whose label fields have at least one high temperature grid to do this empirical study, which resulted in samples in total, i.e., ensemble forecasts from the NWP model of ECMWF and corresponding label fields calculated from ERA5. We name this dataset as HighTemp."
},
{
"section_id": "4.2.2",
"parent_section_id": "4.2",
"section_name": "IV-B2 Dataset for low temperature forecasting",
"text": "The dataset for testing CLCP for low temperature weather forecasting was constructed in a similar way. The reanalysis fields of -m minimum temperature were collected from ERA5 and the label fields were calculated based on whether the -m minimum temperature is below . We only collected the data whose label fields have at least one low temperature grid to do this empirical study, which resulted in samples in total. We name this dataset as LowTemp.\nFor each dataset, the same process was used to conduct the experiment as Section IV-A , i.e., all forecasts from the NWP model were normalized to by min\u2013max normalization, and we used of the data for testing and and of the remaining data for training and calibration respectively. We employed two fully convolutional neural networks [45 ###reference_45###] for binary image segmentation as our underlying algorithms. One was U-Net [46 ###reference_46###] with the same structure as that in [47 ###reference_47###], whose numbers of hidden feature maps were all set to . The other was the naive deep neural network (nDNN) with the same encoder-decoder structure as the U-Net without skip-connections, i.e., the U-Net removing skip-connections. We use these two neural networks to show that the design of the underlying algorithm is necessary for better performance, as U-Net fuses multi-scale features and nDNN does not. The data for training U-Net and nDNN were further partitioned into the validation part () for model selection and proper training part () for updating the parameters. Adam optimization [48 ###reference_48###] was used for training. The learning rate was set to and the number of epochs was set to . After training epochs, the model with lowest binary cross entropy on validation data was used for formula (10) to construct prediction sets, where is searched from to with step size . 
The experiments using CLCP with the loss function in formula (11) were repeated, and the prediction results on the test set are shown in Fig. 3, Fig. 4 and Fig. 5.\nFig. 3 shows the bar plots of the frequencies of the prediction losses being greater than the preset loss level, with the four columns standing for the different combinations of the two preset levels. It can be seen that for the two datasets HighTemp and LowTemp, all bars are near or below the preset significance level, which verifies formula (7) empirically. Fig. 4 further shows the distributions of the losses for the different settings using boxen plots, which contain more information than box plots by drawing narrow boxes for the tails. It can be seen that larger preset levels lead to larger losses, which is reasonable since they relax the constraint on prediction losses. We measure the informational efficiency of a prediction set using its normalized size, defined as its size divided by the total number of grids. The distributions of normalized sizes in Fig. 5 show that U-Net is more informationally efficient than nDNN, which indicates that the design of the underlying algorithm is important for CLCP. Different preset levels lead to different normalized sizes, implying a trade-off among the preset loss level, the confidence level and the informational efficiency of the prediction sets. By choosing the two levels properly, the prediction sets of CLCP can have reasonable sizes. We can also see that forecasting low temperature is somewhat easier than forecasting high temperature, as for the same settings the normalized sizes for low temperature are distributed lower than those for high temperature, indicating the need for better-designed underlying algorithms to improve performance for forecasting high temperature."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "IV-C CLCP for maximum temperature and minimum temperature forecasting",
"text": "###figure_6### ###figure_7### ###figure_8### This section focuses on using CLCP to forecast the -m maximum temperature or minimum temperature value for each grid, which is a point-wise regression problem or image-to-image regression problem. To construct the prediction sets, we follow the procedure proposed in [49 ###reference_49###] and train the neural network with output channels jointly predicting the point-wise , and quantiles of the fields using quantile regression [10 ###reference_10###] [49 ###reference_49###], which are denoted by , and . Then the prediction set is equal to\nwhere\nand is a point-wise operator making and at least . This prediction set is a prediction band for the output field, whose prediction interval at grid is\nwith the point-wise width being an increasing function of . This construction was proposed in [49 ###reference_49###] for image-to-image regression and we use the same loss function in [49 ###reference_49###] measuring miscoverage rate of a prediction band for a field , which can be formalized as\nwhere is the prediction interval at grid for prediction band .\nAll of the data collected from to were used, leading to samples for each forecasting application and the datasets are named as MaxTemp and MinTemp respectively.\nThe experimental design is the same as that in Section IV-B, except that we also normalized the label for each grid to by min\u2013max normalization, used quantile loss for model selection and we searched for with two steps. First we found two values and from such that and . Then we searched for from values starting with and ending with using a common step size. The experimental results are recorded in Fig. 6, Fig 7 and Fig 8.\nAlthough the set predictors and the loss function used in this section are different from those in Section IV-B, the experimental results and conclusions are similar. From Fig. 
6, we can see that the frequencies of the prediction losses being greater than \u03b1 are controlled by \u03b4, which also verifies formula (7) empirically. Larger \u03b1 and \u03b4 lead to larger losses, as shown in Fig. 7.\nHere we use the following average interval length\nto measure the informational efficiency of the prediction sets. Fig. 8 also depicts the trade-off among the preset loss level \u03b1, the confidence level and the informational efficiency of the prediction sets, and indicates that better design of the underlying algorithms leads to better performance."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "This paper extends conformal prediction to the situation where the value of a loss function needs to be controlled, which is inspired by risk-controlling prediction sets and conformal risk control approaches. The loss-controlling guarantee is proved in theory with the assumption of exchangeability and is empirically verified for different kinds of applications including classification with a class-varying loss and weather forecasting. Different from conformal prediction, conformal loss-controlling prediction approach proposed in this paper has two preset parameters and , which guarantees that the prediction loss is not greater than with confidence . Both parameters impose restrictions on prediction sets and should be set based on specific applications. Despite loss-controlling guarantee, informational efficiency of the prediction sets built by conformal loss-controlling prediction is highly related to underlying algorithms, which has been shown in empirical studies. Since this is a rather new topic, the underlying algorithms and the way of constructing set predictors are inherited from conformal risk control. This leaves the important question on how to build informationally efficient set predictors in an optimal way, which is one of our further researches in the future."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Datasets from UCI Repositories</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:156.1pt;height:264.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-33.5pt,56.7pt) scale(0.7,0.7) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.2\">Examples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.3\">Dimensionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.4\">Classes</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.2.1.1\">bc-wisc-diag</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.2\">569</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.3\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.3.2.1\">car</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\">1728</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\">4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"S4.T1.1.1.4.3.1\">chess-kr-kp</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.2\">3196</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.3\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.5.4.1\">contrac</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.2\">1473</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.3\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.6.5.1\">credit-a</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.2\">690</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.3\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.7.6.1\">credit-g</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.2\">1000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.3\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.8.7.1\">ctg-10classes</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.2\">2126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.4\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.9.8.1\">ctg-3classes</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.2\">2126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.1.9.8.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.10.9.1\">haberman</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.2\">306</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.3\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.11.10.1\">optical</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.2\">5620</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.3\">62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.4\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.12.11.1\">phishing-web</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.2\">11055</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.3\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.13.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.13.12.1\">st-image</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.2\">2310</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.3\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.4\">7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.14.13.1\">st-landsat</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.2\">6435</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.3\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.4\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.15.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"S4.T1.1.1.15.14.1\">tic-tac-toe</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.2\">958</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.3\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.16.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.16.15.1\">wall-following</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.2\">5456</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.3\">24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.4\">4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.17.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.17.16.1\">waveform</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.2\">5000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.18.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.18.17.1\">waveform-noise</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.2\">5000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.3\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.19.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.19.18.1\">wilt</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.2\">4839</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.3\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.20.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.20.19.1\">wine-quality-red</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.2\">1599</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.1.20.19.3\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.4\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.21.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T1.1.1.21.20.1\">wine-quality-white</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.2\">4898</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.3\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.4\">7</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "TABLE I: Datasets from UCI Repositories"
}
},
"image_paths": {
"1": {
"figure_path": "2301.02424v2_figure_1.png",
"caption": "Figure 1: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for classification with a class-varying loss. The first row corresponds to \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1 and the second row corresponds to \u03b1=0.2\ud835\udefc0.2\\alpha=0.2italic_\u03b1 = 0.2. Different columns represent different classifiers. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/classification_validity.jpg"
},
"2": {
"figure_path": "2301.02424v2_figure_2.png",
"caption": "Figure 2: Bar plots of the average sizes of prediction sets vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for classification with a class-varying loss. The first row corresponds to \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1 and the second row corresponds to \u03b1=0.2\ud835\udefc0.2\\alpha=0.2italic_\u03b1 = 0.2. Different columns represent different classifiers. The plots demonstrate the information in prediction sets. In general, large \u03b4\ud835\udeff\\deltaitalic_\u03b4 leads to small average size and different classifiers have different informational efficiency.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/classification_efficiency.jpg"
},
"3": {
"figure_path": "2301.02424v2_figure_3.png",
"caption": "Figure 3: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_err_rate.jpg"
},
"4": {
"figure_path": "2301.02424v2_figure_4.png",
"caption": "Figure 4: Boxen plots of the prediction losses vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The loss distributions are controlled by \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 properly to obtain the empirical validity in Fig. 3.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_loss.jpg"
},
"5": {
"figure_path": "2301.02424v2_figure_5.png",
"caption": "Figure 5: Boxen plots for the distributions of normalized sizes of prediction sets vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. U-Net performs better than nDNN, which indicates the importance of careful design of the underlying algorithm.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_efficiency.jpg"
},
"6": {
"figure_path": "2301.02424v2_figure_6.png",
"caption": "Figure 6: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_err_rate.jpg"
},
"7": {
"figure_path": "2301.02424v2_figure_7.png",
"caption": "Figure 7: Boxen plots of the prediction losses vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The loss distributions are controlled by \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 properly to obtain the empirical validity in Fig. 6.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_loss.jpg"
},
"8": {
"figure_path": "2301.02424v2_figure_8.png",
"caption": "Figure 8: Boxen plots for the distributions of average interval length vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. U-Net performs better than nDNN, which indicates the importance of careful design of the underlying algorithm.",
"url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_efficiency.jpg"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2301.02424v2"
}