Title: How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control

URL Source: https://arxiv.org/html/2302.03791

Jacopo Teneggi (1,2,3), Matthew Tivnan (4), J. Webster Stayman (4), Jeremias Sulam (1,4,3)

arXiv:2302.03791v3 [stat.ML] 27 Dec 2023. License: CC BY 4.0

Abstract

Score-based generative modeling, informally referred to as diffusion models, continues to grow in popularity across several important domains and tasks. While these models provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of their sampling procedures for responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, which we term $K$-RCPS, that allows us to (i) provide entrywise calibrated intervals for future samples of any diffusion model, and (ii) control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Unlike existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems, on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state-of-the-art performance.

1 Introduction

Generative modeling is one of the longest standing tasks of classical and modern machine learning [12]. Recently, the foundational works by Song and Ermon [50], Song et al. [52], Pang et al. [43] on sampling via score-matching [27] and by Ho et al. [23] on denoising diffusion models [49] paved the way for a new class of score-based generative models, which solve a reverse-time stochastic differential equation (SDE) [53, 2]. These models have proven remarkably effective on both unconditional (i.e., starting from random noise) and conditional (e.g., inpainting, denoising, super-resolution, or class-conditional) sample generation across a variety of fields [61, 17]. For example, score-based generative models have been applied to inverse problems in general computer vision and medical imaging [29, 32, 31, 59, 55, 54], 3D shape generation [62, 60, 42], and even in protein design [25, 16, 58, 28].

These strong empirical results highlight the potential of score-based generative models. However, they currently lack precise statistical guarantees on the distribution of the generated samples, which hinders their safe deployment in high-stakes scenarios [26]. For example, consider a radiologist who is shown a computed tomography (CT) scan of the abdomen of a patient reconstructed via a score-based generative model. How confident should they be in the fine-grained details of the presented image? Should they trust that the model has not hallucinated some of the features (e.g., calcifications, blood vessels, or nodules) involved in the diagnostic process? Put differently, how different will future samples be from the presented image, and how far can we expect them to be from the ground truth image?

In this work, we focus on image-to-image regression problems, where we are interested in recovering a high-quality ground truth image given a low-quality observation. While our approach is general, we focus on the problem of image denoising as a running example. We address the questions posed above on the reliability of score-based generative models (and, more generally, of any sampling procedure) through the lens of conformal prediction [44, 57, 38, 48, 3] and conformal risk control [10, 4, 5], which provide any black-box predictor with distribution-free, finite-sample uncertainty guarantees. In particular, the contribution of this paper is three-fold:

1. Given a fixed score network, a low-quality observation, and any sampling procedure, we show how to construct valid entrywise calibrated intervals that provide coverage of future samples, i.e. future samples (on the same observation) will fall within the intervals with high probability;

2. We introduce a novel high-dimensional conformal risk control procedure that minimizes the mean interval length directly, while guaranteeing that the number of pixels in the ground truth image that fall outside of these intervals is below a user-specified level on future, unseen low-quality observations;

3. We showcase our approach for denoising of natural face images as well as for computed tomography of the abdomen, achieving state-of-the-art results in mean interval length.

Going back to our example, providing such uncertainty intervals with provable statistical guarantees would improve the radiologist's trust in the sense that these intervals precisely characterize the type of tissue that could be reconstructed by the model. Lastly, even though our contributions are presented in the context of score-based generative modeling for regression problems, given their recent popularity [33, 61, 18], our results are broadly applicable to any sampling procedure, and we will comment on potential direct extensions where appropriate.

1.1 Related Work: Image-to-Image Risk Control

Previous works have explored conformal risk control procedures for image-to-image regression tasks. In particular, Angelopoulos et al. [6] show how to construct set predictors from heuristic notions of uncertainty (e.g., quantile regression [34, 45]) for any image regressor, and how to calibrate the resulting intervals according to the original RCPS procedure of Bates et al. [10]. Kutiel et al. [35] move beyond set predictors and propose a mask-based conformal risk control procedure that allows for notions of distance between the ground truth and predicted images other than interval-based ones. Finally, and most closely related to this paper, Horwitz and Hoshen [26] sketch ideas of conformal risk control for diffusion models, with the intention of integrating quantile regression to produce heuristic sampling bounds without the need to sample several times. Horwitz and Hoshen [26] also use the original RCPS procedure to guarantee risk control. Although similar in spirit, the contribution of this paper focuses on a high-dimensional generalization of the original RCPS procedure that formally minimizes the mean interval length. Our proposed procedure is agnostic to the notion of uncertainty chosen to construct the necessary set predictors.

2 Background

First, we briefly introduce the necessary notation and general background information. Herein, we will refer to images as vectors in $\mathbb{R}^d$, such that $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y} \subset \mathbb{R}^d$ indicate the space of high-quality ground truth images and of low-quality observations, respectively. We assume both $\mathcal{X}$ and $\mathcal{Y}$ to be bounded. For a general image-to-image regression problem, given a pair $(x, y)$ drawn from an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, the task is to retrieve $x \in \mathcal{X}$ given $y \in \mathcal{Y}$. This is usually carried out by means of a predictor $f: \mathcal{Y} \to \mathcal{X}$ that minimizes some notion of distance (e.g., MSE loss) between the ground truth images and reconstructed estimates on a set $\{(x_i, y_i)\}_{i=1}^n \sim \mathcal{D}^n$ of $n$ pairs of high- and low-quality images. For example, in the classical denoising problem, one has $y = x + v_0$, where $v_0 \sim \mathcal{N}(0, \sigma_0^2 \mathbb{I})$ is random Gaussian noise with variance $\sigma_0^2$, and one wishes to learn a denoiser $f$ such that $f(y) \approx x$.

2.1 Score-based Conditional Sampling

Most image-to-image regression problems are ill-posed: there exist several ground truth images that could have generated the same low-quality observation. This is easy to see for the classical denoising problem described above. Instead of a point predictor $f$, which could approximate a maximum-a-posteriori (MAP) estimate, one is often interested in devising a sampling procedure $F: \mathcal{Y} \to \mathcal{X}$ for the posterior $p(x|y)$, which precisely describes the distribution of possible ground truth images that generated the observation $y$. In real-world scenarios, however, the full joint distribution of $(x, y)$ is unknown, and one must resort to approximating $p(x|y)$ from finite data. It is known that for a general Itô process $\mathrm{d}x = h(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w$ that perturbs an input $x$ into random noise [30], it suffices to know the Stein score $\nabla_x \log p_t(x)$ [2, 39] to sample from $p(x)$ via the reverse-time process

$$\mathrm{d}x = \left[ h(x, t) - g(t)^2 \nabla_x \log p_t(x) \right] \mathrm{d}t + g(t)\,\mathrm{d}\bar{w}, \tag{1}$$

where $h(x, t)$ and $g(t)$ are a drift and diffusion term, respectively, and $\mathrm{d}w$ and $\mathrm{d}\bar{w}$ are forward- and reverse-time standard Brownian motion. Furthermore, if the likelihood $p(y|x)$ is known, which is usually the case for image-to-image regression problems, it is possible to condition the sampling procedure on an observation $y$. Specifically, by Bayes' rule, it follows that $\nabla_x \log p_t(x|y) = \nabla_x \log p_t(y|x) + \nabla_x \log p_t(x)$, which can be plugged into the reverse-time SDE in Equation 1 to sample from $p(x|y)$.

Recent advances in generative modeling by Song and Ermon [50], Song et al. [53] showed that one can efficiently train a time-conditional score network $s(\tilde{x}, t)$ to approximate the score $\nabla_x \log p_t(\tilde{x})$ via denoising score-matching [27]. In this way, given a forward-time SDE that models the observation process, a score network $s(\tilde{x}, t) \approx \nabla_x \log p_t(\tilde{x})$, and the likelihood term $p(y|\tilde{x})$, one can sample from $p(x|y)$ by solving the conditional reverse-time SDE with any discretization (e.g., Euler–Maruyama) or predictor-corrector scheme [53]. While these models perform remarkably well in practice, limited guarantees exist on the distributions that they sample from [37]. Instead, we will provide guarantees for diffusion models by leveraging ideas of conformal prediction and conformal risk control, which we now introduce.
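To make the reverse-time SDE concrete, the following toy sketch integrates Equation 1 with Euler–Maruyama in one dimension, under assumptions not in the original text: $h = 0$, constant $g$, a Gaussian data distribution whose score is known in closed form (standing in for a learned network $s(\tilde{x}, t)$), and made-up constants throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

m, s = 2.0, 0.1      # toy data distribution p(x) = N(m, s^2)
g = 1.0              # constant diffusion coefficient; drift h(x, t) = 0

def score(x, t):
    # For dx = g dw, the perturbed marginal is p_t(x) = N(m, s^2 + g^2 t),
    # so the Stein score has the closed form below (no network needed here).
    return -(x - m) / (s**2 + g**2 * t)

T, n_steps, n_samples = 1.0, 1000, 2000
dt = T / n_steps

# Start from the wide terminal distribution p_T = N(m, s^2 + g^2 T)
x = m + np.sqrt(s**2 + g**2 * T) * rng.normal(size=n_samples)
t = T
for _ in range(n_steps):
    # Reverse-time Euler-Maruyama step of Eq. 1 (with h = 0), run backwards:
    # x <- x + g^2 * score(x, t) * dt + g * sqrt(dt) * noise
    x = x + g**2 * score(x, t) * dt + g * np.sqrt(dt) * rng.normal(size=n_samples)
    t -= dt

# The samples should approximately recover the data distribution N(m, s^2)
```

In practice the closed-form score is replaced by $s(\tilde{x}, t)$, and for conditional sampling the likelihood score $\nabla_x \log p_t(y|x)$ is added inside the bracket, as in Section 2.1.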

2.2 Conformal Prediction

Conformal prediction has a rich history in mathematical statistics [57, 44, 56, 8, 9, 22]. It comprises various methodologies to construct finite-sample, statistically valid uncertainty guarantees for general predictors without making any assumptions on the distribution of the response (i.e., they are distribution-free). In particular, these methods construct valid prediction sets that provide coverage, which we now define.

Definition 2.1 (Coverage [48]). Let $z_1, \dots, z_m, z_{m+1}$ be $m+1$ exchangeable random variables drawn from the same unknown distribution $\mathcal{Q}$ over $\mathcal{Z}$. For a desired miscoverage level $\alpha \in [0, 1]$, a set $\mathcal{C} \subseteq 2^{\mathcal{Z}}$ that only depends on $z_1, \dots, z_m$ provides coverage if

$$\mathbb{P}[z_{m+1} \in \mathcal{C}] \geq 1 - \alpha. \tag{2}$$

We remark that the notion of coverage defined above was introduced in the context of classification problems, where one is interested in guaranteeing that the true, unseen label of a future sample will be in the prediction set $\mathcal{C}$ with high probability. It is immediate to see how conformal prediction conveys a very precise notion of uncertainty: the larger $\mathcal{C}$ has to be in order to guarantee coverage, the more uncertain the underlying predictor. We refer the interested reader to [48, 3] for classical examples of conformal prediction.
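Definition 2.1 can be illustrated with a one-dimensional simulation. The sketch below (an illustration, not the paper's procedure; the Gaussian is arbitrary, since the guarantee is distribution-free) builds $\mathcal{C}$ from the conformal quantile of $m$ calibration draws and checks empirical coverage of a fresh draw:

```python
import numpy as np

rng = np.random.default_rng(0)
m, alpha = 999, 0.1

def coverage_trial(rng):
    # m + 1 exchangeable draws from the same (here, standard normal) Q
    z = rng.standard_normal(m + 1)
    cal, z_new = z[:m], z[m]
    # Conformal quantile: the ceil((m + 1)(1 - alpha))-th smallest |z_i|
    k = int(np.ceil((m + 1) * (1 - alpha)))
    qhat = np.sort(np.abs(cal))[k - 1]
    # The prediction set is C = [-qhat, qhat]
    return abs(z_new) <= qhat

hits = np.mean([coverage_trial(rng) for _ in range(2000)])
# hits should concentrate near (and not below) 1 - alpha = 0.9
```

The $\lceil (m+1)(1-\alpha) \rceil$ correction (rather than the plain $1-\alpha$ quantile) is what makes the finite-sample guarantee hold exactly.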

In many scenarios (e.g., regression), the natural notion of uncertainty may be different from miscoverage as described above (e.g., an $\ell_2$ norm). We now move on to presenting conformal risk control, which extends coverage to any notion of risk.

2.3 Conformal Risk Control

Let $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$ be a general set-valued predictor from $\mathcal{Y}$ into $\mathcal{X}' \subseteq 2^{\mathcal{X}}$. Consider a nonnegative loss $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$ measuring the discrepancy between a ground truth $x$ and the predicted intervals $\mathcal{I}(y)$. We might be interested in guaranteeing that this loss will be below a certain tolerance $\epsilon \geq 0$ with high probability on future, unseen samples $y$ for which we do not know the ground truth $x$. Conformal risk control [10, 4, 5] extends ideas of conformal prediction in order to select a specific predictor $\mathcal{I}$ that controls the risk $\mathbb{E}[\ell(x, \mathcal{I}(y))]$ in the following sense.

Definition 2.2 (Risk-Controlling Prediction Sets). Let $\mathcal{S}_{\text{cal}} = \{(x_i, y_i)\}_{i=1}^n \sim \mathcal{D}^n$ be a calibration set of $n$ i.i.d. samples from an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$. For a desired risk level $\epsilon \geq 0$ and a failure probability $\delta \in [0, 1]$, a random set-valued predictor $\mathcal{I}: \mathcal{Y} \to \mathcal{X}' \subseteq 2^{\mathcal{X}}$ is an $(\epsilon, \delta)$-RCPS w.r.t. a loss function $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$ if

$$\mathbb{P}_{\mathcal{S}_{\text{cal}}} \left[ \mathbb{E}_{(x, y) \sim \mathcal{D}} [\ell(x, \mathcal{I}(y))] \leq \epsilon \right] \geq 1 - \delta. \tag{3}$$

Bates et al. [10] introduced the first conformal risk control procedure for monotonically nonincreasing loss functions, i.e. those that satisfy, for a fixed $x$,

$$\mathcal{I}(y) \subset \mathcal{I}'(y) \implies \ell(x, \mathcal{I}'(y)) \leq \ell(x, \mathcal{I}(y)). \tag{4}$$

In this way, increasing the size of the sets cannot increase the value of the loss. Furthermore, assume that for a fixed input $y$ the family of set predictors $\{\mathcal{I}_\lambda(y)\}_{\lambda \in \Lambda}$, indexed by $\lambda \in \Lambda$, $\Lambda \subset \bar{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$, satisfies the following nesting property [22]:

$$\lambda_1 < \lambda_2 \implies \mathcal{I}_{\lambda_1}(y) \subset \mathcal{I}_{\lambda_2}(y). \tag{5}$$

Denote ๐‘… โข ( ๐œ† )

๐”ผ โข [ โ„“ โข ( ๐‘ฅ , โ„ ๐œ† โข ( ๐‘ฆ ) ) ] the risk of โ„ ๐œ† โข ( ๐‘ฆ ) and ๐‘… ^ โข ( ๐œ† ) its empirical estimate over a calibration set ๐’ฎ cal

{ ( ๐‘ฅ ๐‘– , ๐‘ฆ ๐‘– ) } ๐‘–

1 ๐‘› . Finally, let ๐‘… ^ + โข ( ๐œ† ) be a pointwise upper confidence bound (UCB) that covers the risk, that is

โ„™ โข [ ๐‘… โข ( ๐œ† ) โ‰ค ๐‘… ^ + โข ( ๐œ† ) ] โ‰ฅ 1 โˆ’ ๐›ฟ

(6)

for each, fixed value of ๐œ† โ€”such that can be derived by means of concentration inequalities (e.g., Hoeffdingโ€™s inequality [24], Bentkusโ€™ inequality [11], or respective hybridization [10]).7 With these elements, Bates et al. [10] show that choosing

๐œ† ^

inf { ๐œ† โˆˆ ฮ› : ๐‘… ^ + โข ( ๐œ† โ€ฒ ) < ๐œ– , โˆ€ ๐œ† โ€ฒ โ‰ฅ ๐œ† }

(7)

guarantees that โ„ ๐œ† ^ โข ( ๐‘ฆ ) is an ( ๐œ– , ๐›ฟ ) -RCPS according to Definition 2.2. In other words, choosing ๐œ† ^ as the smallest ๐œ† such that the UCB is below the desired level ๐œ– for all values of ๐œ† โ‰ฅ ๐œ† ^ controls the risk al level ๐œ– with probability at least 1 โˆ’ ๐›ฟ . For the sake of completeness, we include the original conformal risk control procedure in Algorithm 2 in Appendix B.
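The selection rule in Equation 7 can be sketched on a one-dimensional toy problem. The example below uses the simple Hoeffding UCB $\hat{R}^+(\lambda) = \hat{R}(\lambda) + \sqrt{\log(1/\delta)/(2n)}$ for a loss bounded in $[0,1]$; Bates et al. also use tighter bounds (e.g., Hoeffding–Bentkus), and all data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, delta = 2000, 0.1, 0.1

# Toy calibration "images": scalars x_i, with nested intervals [-1 - λ, 1 + λ]
x_cal = rng.normal(0.0, 1.0, size=n)

def risk_hat(lam):
    # Empirical miscoverage: fraction of x_i outside [-1 - λ, 1 + λ]
    return np.mean(np.abs(x_cal) > 1.0 + lam)

def ucb(lam):
    # Hoeffding upper confidence bound for a [0, 1]-bounded loss
    return risk_hat(lam) + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Equation 7 on a grid: since the loss is monotone nonincreasing in λ,
# scanning from the largest λ down to the first violation finds λ̂.
grid = np.linspace(0.0, 2.0, 201)
lam_hat = grid[-1]
for lam in grid[::-1]:
    if ucb(lam) >= eps:
        break
    lam_hat = lam
```

The returned `lam_hat` is the smallest grid value whose UCB (and that of every larger $\lambda$) stays below $\epsilon$.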

Equipped with these general concepts, we now move on to presenting the contributions of this work.

3 How to Trust Your Diffusion Model

We now return to the main focus of this paper: solving image-to-image regression problems with diffusion models. Rather than a point predictor $f: \mathcal{Y} \to \mathcal{X}$, we assume access to a stochastic sampling procedure $F: \mathcal{Y} \to \mathcal{X}$ such that $F(y)$ is a random variable with unknown distribution $\mathcal{Q}_y$, which hopefully approximates the posterior distribution of $x$ given $y$, i.e. $\mathcal{Q}_y \approx p(x|y)$. However, we make no assumptions on the quality of this approximation for our results to hold. As described in Section 2.1, $F$ can be obtained by means of a time-conditional score network $s(\tilde{x}, t)$ and a reverse-time SDE. While our results are applicable to any sampling procedure, we present them in the context of diffusion models because of their remarkable empirical results and increasing use in critical applications [61, 17].

One can identify three separate sources of randomness in a general stochastic image-to-image regression problem: (i) the unknown prior $p(x)$ over the space of ground truth images, as $x \sim p(x)$; (ii) the randomness in the observation process of $y$ (which can be modeled by a forward-time SDE over $x$); and finally (iii) the stochasticity in the sampling procedure $F(y)$. We will first provide conformal prediction guarantees for a fixed observation $y$, and then move on to conformal risk control for the ground truth image $x$.

3.1 Calibrated Quantiles for Future Samples

Given the same low-quality (e.g., noisy) observation $y$, where will future unseen samples from $F(y) \sim \mathcal{Q}_y$ fall? How concentrated will they be? Denote by $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$ a (random) set-valued predictor from $\mathcal{Y} \subset \mathbb{R}^d$ into a space of sets $\mathcal{X}' \subseteq 2^{\mathcal{X}}$ over $\mathcal{X} \subset \mathbb{R}^d$ (e.g., $\mathcal{X} = [0, 1]^d$, $\mathcal{X}' \subseteq 2^{[0,1]^d}$). We extend the notion of coverage in Definition 2.1 to entrywise coverage, which we now make precise.

Definition 3.1 (Entrywise coverage). Let $z_1, \dots, z_m, z_{m+1}$ be $m+1$ exchangeable random vectors drawn from the same unknown distribution $\mathcal{Q}$ over $\mathcal{X} \subset \mathbb{R}^d$. For a desired miscoverage level $\alpha \in [0, 1]$, a set $\mathcal{I} \subseteq 2^{\mathcal{X}}$ that only depends on $z_1, \dots, z_m$ provides entrywise coverage if

$$\mathbb{P}[(z_{m+1})_j \in \mathcal{I}_j] \geq 1 - \alpha \tag{8}$$

for each $j \in [d] := \{1, \dots, d\}$.

We stress that the definition above is different from notions of vector quantiles [13, 15] in the sense that coverage is not guaranteed over the entire new random vector $z_{m+1}$ but rather along each dimension independently. Ideas of vector quantile regression (VQR) are complementary to the contribution of the current work and the subject of ongoing research [21, 14, 47].

For a fixed observation $y$, we use conformal prediction to construct a set predictor that provides entrywise coverage.

Lemma 3.2 (Calibrated quantiles guarantee entrywise coverage). Let $F: \mathcal{Y} \to \mathcal{X}$ be a stochastic sampling procedure from $\mathcal{Y} \subset \mathbb{R}^d$ into $\mathcal{X} \subset \mathbb{R}^d$. Given $y \in \mathcal{Y}$, let $F_1, \dots, F_m, F_{m+1}$ be $m+1$ i.i.d. samples from $F(y)$. For a desired miscoverage level $\alpha \in [0, 1]$ and for each $j \in [d]$, let $\hat{l}_{j,\alpha}, \hat{u}_{j,\alpha}$ be the $\lfloor (m+1)\alpha/2 \rfloor / m$ and $\lceil (m+1)(1 - \alpha/2) \rceil / m$ entrywise calibrated empirical quantiles of $F_1, \dots, F_m$. Then,

$$\mathcal{I}_\alpha(y)_j = [\hat{l}_{j,\alpha}, \hat{u}_{j,\alpha}] \tag{9}$$

provides entrywise coverage.

The simple proof of this result is included in Section A.1. We remark that, analogously to previous works [6, 26], the intervals in $\mathcal{I}_\alpha(y)$ are feature-dependent, and they capture regions of the image where the sampling process $F(y)$ may have larger uncertainty. The intervals in $\mathcal{I}_\alpha(y)$ are statistically valid for any number of samples $m$ and any distribution $\mathcal{Q}_y$, i.e. they are not a heuristic notion of uncertainty. If the sampling procedure $F$ is a diffusion model, constructing $\mathcal{I}_\alpha(y)$ is agnostic to the discretization scheme used to solve the reverse-time SDE [53], and it does not require retraining the underlying score network, which can be a time-consuming and delicate process, especially when the size of the images is considerable. On the other hand, constructing the intervals $\mathcal{I}_\alpha(y)$ requires sampling a large enough number of times from $F(y)$, which may seem cumbersome [26]. This is by construction and intention: diffusion models are indeed very useful in providing good (and varied) samples from the approximate posterior. In this way, practitioners typically sample several realizations to get an empirical study of this distribution. In these settings, constructing the intervals $\mathcal{I}_\alpha(y)$ does not involve any additional computational cost. Furthermore, note that sampling is completely parallelizable, and so no extra complexity is incurred if a larger number of computing nodes is available.
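The entrywise calibrated quantiles of Lemma 3.2 can be sketched as follows. A Gaussian sampler stands in for a diffusion model $F(y)$, and the dimensions and levels below are made up; the quantile ranks follow the $\lfloor (m+1)\alpha/2 \rfloor$ and $\lceil (m+1)(1-\alpha/2) \rceil$ order statistics of the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, alpha = 127, 16, 0.1

# Stand-in for m i.i.d. samples F_1, ..., F_m from F(y), shape (m, d)
samples = rng.normal(0.0, 1.0, size=(m, d))

lo_rank = int(np.floor((m + 1) * alpha / 2))       # rank of the lower quantile
hi_rank = int(np.ceil((m + 1) * (1 - alpha / 2)))  # rank of the upper quantile

sorted_samples = np.sort(samples, axis=0)
l_hat = sorted_samples[lo_rank - 1]   # shape (d,): entrywise lower endpoints
u_hat = sorted_samples[hi_rank - 1]   # shape (d,): entrywise upper endpoints

# A fresh sample F_{m+1} lands in [l_hat_j, u_hat_j] with probability
# at least 1 - alpha, separately for each entry j
fresh = rng.normal(0.0, 1.0, size=d)
inside = np.mean((fresh >= l_hat) & (fresh <= u_hat))
```

Note that the guarantee is per entry $j$, not joint over all $d$ entries, matching Definition 3.1.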

3.2 A Provable Approach to Optimal Risk Control

In this section, we revisit the main ideas around conformal risk control introduced in Section 2.3 and generalize them into our proposed approach, $K$-RCPS. Naturally, one would like a good conformal risk control procedure to yield the shortest possible interval lengths. Assume pixel intensities are normalized to $[0, 1]$ and consider the loss function

$$\ell_{01}(x, \mathcal{I}(y)) = \frac{1}{d} \sum_{j \in [d]} \mathbb{1}[x_j \notin \mathcal{I}(y)_j], \tag{10}$$

which counts the (average) number of ground truth pixels that fall outside of their respective intervals in $\mathcal{I}(y)$. The constant set-valued predictor $\mathcal{U}(y) = [0, 1]^d$ would trivially control the risk, i.e. $R_{01}(\lambda) = \mathbb{E}[\ell_{01}(x, \mathcal{U}(y))] = 0$. Alas, such a predictor would be completely uninformative. Instead, let $\{\mathcal{I}_\lambda(y)\}_{\lambda \in \Lambda}$, $\Lambda \subset \bar{\mathbb{R}}$, be a family of predictors that satisfies the nesting property in Equation 5. In particular, we propose the following additive parametrization in $\lambda$:

$$\mathcal{I}_\lambda(y)_j = [\hat{l}_j - \lambda, \hat{u}_j + \lambda] \tag{11}$$

for some lower and upper endpoints $\hat{l}_j < \hat{u}_j$ that may depend on $y$. For this particular family of nested predictors, it follows that the mean interval length is

$$\bar{I}(\lambda) = \frac{1}{d} \sum_{j \in [d]} (\hat{u}_j - \hat{l}_j) + 2\lambda, \tag{12}$$

a linear function of $\lambda$. Moreover, we can instantiate $\hat{l}_j$ and $\hat{u}_j$ to be the calibrated quantiles with entrywise coverage, i.e. $\mathcal{I}_\lambda^\alpha(y)_j = [\hat{l}_{j,\alpha} - \lambda, \hat{u}_{j,\alpha} + \lambda]$.

For such a class of predictors, and since the $\ell_{01}$ loss is monotonically nonincreasing, the original RCPS procedure (see Equation 7) is equivalent to the following constrained optimization problem:

$$\hat{\lambda} = \arg\min_{\lambda \in \Lambda} \bar{I}(\lambda) \quad \text{s.t.} \quad \hat{R}_{01}^+(\lambda') < \epsilon, \ \forall \lambda' \geq \lambda, \tag{P$_1$}$$

which naturally minimizes $\lambda$. However, optimizing the mean interval length over a single scalar parameter $\lambda$ is suboptimal in general, as shown in Figure 1. With abuse of notation (we do not generally refer to vectors with boldface), let $\{\mathcal{I}_{\boldsymbol{\lambda}}(y)\}_{\boldsymbol{\lambda} \in \Lambda^d}$ be a family of predictors indexed by a $d$-dimensional vector $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_d)$ that satisfies the nesting property in Equation 5 in an entrywise fashion. A natural extension of Equation 11 is then

$$\mathcal{I}_{\boldsymbol{\lambda}}(y)_j = [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j], \tag{13}$$

from which one can define an equivalent function $\bar{I}(\boldsymbol{\lambda})$. In particular, using the calibrated intervals as before, define

$$\mathcal{I}_{\boldsymbol{\lambda}}^\alpha(y)_j = [\hat{l}_{j,\alpha} - \lambda_j, \hat{u}_{j,\alpha} + \lambda_j]. \tag{14}$$

Note now that $\ell_{01}(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ is entrywise monotonically nonincreasing. For fixed vectors $\tilde{\boldsymbol{\lambda}} \in \mathbb{R}^d$ and $\boldsymbol{\eta} \in \mathbb{R}^d$ in the positive orthant (i.e., $\boldsymbol{\eta} \geq 0$ entrywise), denote by $\tilde{\boldsymbol{\Lambda}} = \{\tilde{\boldsymbol{\lambda}} + \omega \boldsymbol{\eta},\ \omega \in \Lambda\}$ the set of points on a line with offset $\tilde{\boldsymbol{\lambda}}$ and direction $\boldsymbol{\eta}$, parametrized by $\omega$. Then, the $d$-dimensional extension of (P$_1$) becomes

$$\hat{\boldsymbol{\lambda}} = \arg\min_{\boldsymbol{\lambda} \in \tilde{\boldsymbol{\Lambda}}} \sum_{j \in [d]} \lambda_j \quad \text{s.t.} \quad \hat{R}_{01}^+(\boldsymbol{\lambda} + \beta \boldsymbol{\eta}) < \epsilon, \ \forall \beta \geq 0. \tag{P$_d$}$$

We include an explicit analytical expression for $\hat{R}_{01}^+(\boldsymbol{\lambda} + \boldsymbol{\eta})$ in Appendix C. Intuitively, $\hat{\boldsymbol{\lambda}}$ minimizes the sum of its entries such that the UCB is smaller than $\epsilon$ for all points to its right along the direction $\boldsymbol{\eta}$ parametrized by $\beta$. We now show a general high-dimensional risk control result that holds for any entrywise monotonically nonincreasing loss function $\ell$ (and not just $\ell_{01}$ as presented in (P$_d$)), with risk $R(\boldsymbol{\lambda})$, empirical estimate $\hat{R}(\boldsymbol{\lambda})$, and respective UCB $\hat{R}^+(\boldsymbol{\lambda})$.

(a) ๐œ‡

[ โˆ’ 1 , 1 ] ๐‘‡ . (b) ๐œ‡

[ โˆ’ 2 , 0.75 ] ๐‘‡ . Figure 1:Pictorial representation of the suboptimality of the choice of a single scalar parameter ๐œ† w.r.t. the mean interval length. ๐’ฎ cal โˆผ ๐’ฉ โข ( ๐œ‡ , ๐•€ 2 ) ๐‘› , ๐‘›

128 , and ( โ„ ๐€ ) ๐‘—

[ โˆ’ 1 โˆ’ ๐œ† ๐‘— , 1 + ๐œ† ๐‘— ] , ๐€

( ๐œ† 1 , ๐œ† 2 ) . For ๐œ–

๐›ฟ

0.1 , ๐‘… ^ 01 + โข ( ๐€ ) is obtained via Hoeffding-Bentkus hybridization. Green areas indicate regions where ๐‘… ^ 01 + โข ( ๐€ ) โ‰ค ๐œ– , and conversely for red regions. 0(a) Shows that when features are concentrated symmetrically around the intervals, minimizing ๐œ† 1

๐œ† 2

๐œ† (blue star) minimizes the mean interval length, while 0(b) shows that in the general case, the optimal ๐€ (orange star) may have ๐œ† 1 โ‰  ๐œ† 2 . ฮ” highlights the gain in mean interval length obtained by choosing the orange star instead of the blue one. Theorem 3.3 (Optimal mean interval length risk control).

Let $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$, $\mathcal{X}' \subseteq 2^{\mathcal{X}}$, $\mathcal{X} \subset \mathbb{R}^d$ be an entrywise monotonically nonincreasing loss function, and let $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) = [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$ be a family of set-valued predictors $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$, $\mathcal{Y} \subset \mathbb{R}^d$, indexed by $\boldsymbol{\lambda} \in \Lambda^d$, $\Lambda \subset \bar{\mathbb{R}}$, for some lower and upper bounds $\hat{l}_j < \hat{u}_j$ that may depend on $y$. Given $\tilde{\boldsymbol{\Lambda}} = \{\tilde{\boldsymbol{\lambda}} + \omega \boldsymbol{\eta},\ \omega \in \Lambda\}$ for fixed $\tilde{\boldsymbol{\lambda}} \in \mathbb{R}^d$ and $\boldsymbol{\eta} \in \mathbb{R}^d$, $\boldsymbol{\eta} \geq 0$, if

$$\hat{\boldsymbol{\lambda}} = \arg\min_{\boldsymbol{\lambda} \in \tilde{\boldsymbol{\Lambda}}} \sum_{j \in [d]} \lambda_j \quad \text{s.t.} \quad \hat{R}^+(\boldsymbol{\lambda} + \beta \boldsymbol{\eta}) < \epsilon, \ \forall \beta \geq 0, \tag{15}$$

then $\mathcal{I}_{\hat{\boldsymbol{\lambda}}}(y)$ is an $(\epsilon, \delta)$-RCPS and $\hat{\boldsymbol{\lambda}}$ minimizes the mean interval length over $\tilde{\boldsymbol{\Lambda}}$.

The proof is included in Section A.2. Since $\ell_{01}$ is entrywise monotonically nonincreasing, it follows that the solution to (P$_d$) controls risk. The attentive reader will have noticed (as shown in Figure 1) that the constraint set $\hat{R}_{01}^+(\boldsymbol{\lambda}) \leq \epsilon$ need not be convex. Furthermore, and as shown in Figure 1(b), $\ell_{01}$ is not convex in $\boldsymbol{\lambda}$. Hence, it is not possible to optimally solve (P$_d$) directly. Instead, we relax it to a convex optimization problem by means of the convex upper bound

$$\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y)) = \frac{1}{d} \sum_{j \in [d]} \left[ \frac{2(1+q)}{I(\boldsymbol{\lambda})_j} |x_j - c_j| - q \right]_+, \tag{16}$$

where $q = \gamma/(1-\gamma)$, $\gamma \in [0, 1)$, $I(\boldsymbol{\lambda})_j = \hat{u}_j - \hat{l}_j + 2\lambda_j$, $c_j = (\hat{u}_j + \hat{l}_j)/2$, and $[\cdot]_+ = \max(0, \cdot)$. As shown in Figure 2(a), the hyperparameter $\gamma$ controls the degree of relaxation by changing the portion of the intervals $[\hat{l}_j, \hat{u}_j]$ where the loss is 0. This way, $\gamma = 0$ retrieves the $\ell_1$ loss centered at $c_j$, while $\lim_{\gamma \to 1} \ell_\gamma = \infty$ if $\exists j \in [d]: x_j \notin [\hat{l}_j, \hat{u}_j]$ and 0 otherwise.

Figure 2: Visualization of $\ell_{01}(x, \mathcal{I}_\lambda(y))$ and $\ell_\gamma(x, \mathcal{I}_\lambda(y))$ for $\mathcal{I}_\lambda(y) = [0.50 - \lambda, 1.50 + \lambda]$, $\gamma \in \{0, 0.5, 0.9\}$. In panel (a), $\lambda = 0$ and the losses are shown as a function of $x$; in panel (b), $x = 1.6$ and the losses are shown as a function of $\lambda$.

While one could readily propose a convex alternative to (P$_d$) by means of this new loss, we instead propose a generalization of this idea in our final problem formulation:

$$\tilde{\boldsymbol{\lambda}}_K = \arg\min_{\boldsymbol{\lambda} \in \Lambda^K} \sum_{k \in [K]} n_k \lambda_k \quad \text{s.t.} \quad \hat{R}_\gamma(M \boldsymbol{\lambda}) \leq \epsilon, \tag{P$_K$}$$

for any user-defined $K$-partition of the $[d]$ features, which can be identified by a membership matrix $M \in \{0, 1\}^{d \times K}$ where each feature belongs to (only) one of the $K$ groups, with $n_k := |\{j \in [d]: M_{jk} = 1\}|$ and $\sum_{k \in [K]} n_k = d$. As we will shortly see, it will be useful to define these groups via empirical quantiles, i.e. to set $M$ so that it assigns each feature to its respective $k$-th quantile of the entrywise empirical loss over the optimization set (which is a vector in $\mathbb{R}^d$). We remark that the constraint set in (P$_K$) is defined on the empirical estimate of the risk of $\mathcal{I}_{\boldsymbol{\lambda}}(y)$ and does not involve the computation of the UCB. Hence, (P$_K$) can be solved with any standard off-the-shelf convex optimization software (e.g., CVXPY [19, 1], MOSEK [7]).

Our novel conformal risk control procedure, $K$-RCPS, finds a vector $\hat{\boldsymbol{\lambda}}_K$ that approximates a solution to the nonconvex optimization problem (P$_d$) via a two-step procedure:

1. First, obtain the optimal solution $\tilde{\boldsymbol{\lambda}}_K$ to a user-defined (P$_K$) problem; then

2. Choose $\hat{\beta} \in \Lambda$ such that

$$\hat{\beta} = \inf \{ \beta \in \Lambda : \hat{R}_{01}^+(M \tilde{\boldsymbol{\lambda}}_K + \beta' \mathbf{1}) < \epsilon, \ \forall \beta' \geq \beta \}$$

and return $\hat{\boldsymbol{\lambda}}_K = M \tilde{\boldsymbol{\lambda}}_K + \hat{\beta} \mathbf{1}$.

Intuitively, the $K$-RCPS algorithm is equivalent to performing the original RCPS procedure along the line $M \tilde{\boldsymbol{\lambda}}_K + \beta \mathbf{1}$ parametrized by $\beta$. We remark that, as noted in Theorem 3.3, any choice of $\boldsymbol{\eta} \geq 0$ provides a valid direction along which to perform the RCPS procedure. Here, we choose $\mathbf{1}$ because it is precisely the gradient of the objective function. Future work entails devising more sophisticated algorithms to approximate the solution of (P$_d$).

Algorithm 1 $K$-RCPS

1: Input: risk level $\epsilon \geq 0$, failure probability $\delta \in [0, 1]$, calibration set $\mathcal{S}_{\text{cal}} = \{(x_i, y_i)\}_{i=1}^n$ of $n$ i.i.d. samples such that $n = n_{\text{opt}} + n_{\text{RCPS}}$, membership function $\mathcal{M}$, family of set-valued predictors $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) = [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$, initial (large) value $\beta_{\max}$, stepsize $\mathrm{d}\beta > 0$.
2: Split $\mathcal{S}_{\text{cal}}$ into $\mathcal{S}_{\text{opt}}, \mathcal{S}_{\text{RCPS}}$
3: $M \leftarrow \mathcal{M}(\mathcal{S}_{\text{opt}})$
4: $\tilde{\boldsymbol{\lambda}}_K \leftarrow \text{SOLVE-PK}(\mathcal{S}_{\text{opt}}, M)$
5: $\boldsymbol{\lambda} \leftarrow M \tilde{\boldsymbol{\lambda}}_K + \beta_{\max} \mathbf{1}$
6: $\hat{R}_{01}^+(\boldsymbol{\lambda}) \leftarrow 0$
7: while $\hat{R}_{01}^+(\boldsymbol{\lambda}) \leq \epsilon$ do
8: &nbsp;&nbsp; $\boldsymbol{\lambda}_{\text{prev}} \leftarrow \boldsymbol{\lambda}$
9: &nbsp;&nbsp; $\boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} - (\mathrm{d}\beta) \mathbf{1}$
10: &nbsp;&nbsp; $\boldsymbol{\lambda} \leftarrow [\boldsymbol{\lambda}]_+$
11: &nbsp;&nbsp; $\hat{R}_{01}(\boldsymbol{\lambda}) \leftarrow \frac{1}{n_{\text{RCPS}}} \sum_{(x_i, y_i) \in \mathcal{S}_{\text{RCPS}}} \ell_{01}(x_i, \mathcal{I}_{\boldsymbol{\lambda}}(y_i))$
12: &nbsp;&nbsp; $\hat{R}_{01}^+(\boldsymbol{\lambda}) \leftarrow \text{UCB}(n_{\text{RCPS}}, \delta, \hat{R}_{01}(\boldsymbol{\lambda}))$
13: end while
14: $\hat{\boldsymbol{\lambda}}_K \leftarrow \boldsymbol{\lambda}_{\text{prev}}$
15: return $\hat{\boldsymbol{\lambda}}_K$
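The second stage of the procedure (lines 5 through 14) can be sketched as a short loop. The example below uses the simple Hoeffding UCB for a $[0,1]$-bounded loss (the paper allows any valid UCB), and the data, endpoints, and the stand-in for $M \tilde{\boldsymbol{\lambda}}_K$ are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rcps, d, eps, delta = 1000, 20, 0.1, 0.1

l_hat, u_hat = np.full(d, -1.0), np.full(d, 1.0)  # made-up endpoints
lam_tilde = np.full(d, 0.0)                        # stand-in for M @ λ̃_K
x_rcps = rng.normal(0.0, 1.0, size=(n_rcps, d))    # made-up S_RCPS ground truths

def ucb01(lam):
    # Empirical 0-1 risk (Eq. 10) plus a Hoeffding upper confidence bound
    lo, hi = l_hat - lam, u_hat + lam
    r_hat = np.mean((x_rcps < lo) | (x_rcps > hi))
    return r_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n_rcps))

beta_max, d_beta = 2.0, 0.01
lam = lam_tilde + beta_max          # line 5: start far out along the ray
ucb = 0.0                           # line 6: force entry into the loop
while ucb <= eps:                   # line 7
    lam_prev = lam.copy()           # line 8
    lam = np.maximum(lam - d_beta, 0.0)  # lines 9-10: step down and clip at 0
    ucb = ucb01(lam)                # lines 11-12
lam_hat_K = lam_prev                # line 14: last iterate with UCB below ε
```

Because the scan moves monotonically down the ray $M\tilde{\boldsymbol{\lambda}}_K + \beta\mathbf{1}$, the returned iterate is the last one whose UCB was verified below $\epsilon$, mirroring the split fixed sequence testing argument.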

Algorithm 1 implements the K-RCPS procedure for any calibration set 𝒮_cal = {(x_i, y_i)}_{i=1}^n, any general family of set-valued predictors of the form {ℐ_λ = [l̂_j − λ_j, û_j + λ_j]}_{λ ∈ Λ^d}, any membership function ℳ : {𝒳 × 𝒴}^n → {0, 1}^{d×K}, and a general UCB(n, δ, R̂(λ)) that accepts a number of samples n, a failure probability δ, and an empirical risk R̂(λ), and that returns a pointwise upper confidence bound R̂⁺(λ) satisfying Equation 6. We remark that, following the split fixed sequence testing framework introduced in Angelopoulos et al. [4] and applied in previous work [36], the membership matrix and its optimization problem (P_K) are computed on a subset 𝒮_opt of the calibration set 𝒮_cal, such that the direction M λ̃_K + β 𝟏 along which to perform the RCPS procedure is chosen before looking at the data 𝒮_RCPS = 𝒮_cal ∖ 𝒮_opt. We note that K-RCPS allows some of the entries of λ̂_K to be set to 0, which preserves the original intervals, so that, if they are obtained as described in Section 3.1, they still provide entrywise coverage of future samples at the desired level α.

We now move on to showcasing the advantage of K-RCPS in terms of mean interval length on two real-world high-dimensional denoising problems: one on natural images of faces, and one on CT scans of the abdomen.

4 Experiments

Figure 3: Calibrated quantiles ℐ^α(y) computed on 128 samples from F(y) for noisy inputs y with noise level σ₀². (a) CelebA dataset, σ₀² = 1.0, α = 0.10. (b) AbdomenCT-1K dataset, σ₀² = 0.4, α = 0.20. The difference û_α − l̂_α represents interval sizes (i.e., larger intervals indicate larger uncertainty).

As a reminder, the methodological contribution of this paper is two-fold: (i) we propose to use the calibrated quantiles ℐ^α(y) as a statistically valid notion of uncertainty for diffusion models, and (ii) we introduce the K-RCPS procedure to guarantee high-dimensional risk control. Although it is natural to use the two in conjunction, we remark that the K-RCPS procedure is agnostic to the notion of uncertainty, and it can be applied to any nested family of set predictors {ℐ_λ(y)}_{λ ∈ Λ^d} that satisfies the additive parametrization in Equation 13. Therefore, we compare K-RCPS with the original RCPS algorithm on several baseline notions of uncertainty: quantile regression [6], MC-Dropout [20], N-Conffusion [26], and naive (i.e., not calibrated) quantiles. We focus on denoising problems where y = x + v₀ with v₀ ∼ 𝒩(0, σ₀²), on two imaging datasets: the CelebA dataset [40] and the AbdomenCT-1K dataset [41]. In particular, for each dataset, we train:

• a time-conditional score network s(x̃, t) ≈ ∇_x log p_t(x̃) following Song et al. [53], to sample from the posterior distribution p(x | y) as described in Section 2.1, and

• a time-conditional image regressor f : 𝒴 × ℝ → 𝒳³ following Angelopoulos et al. [6] such that f(y, t) = (q̂_{α/2}, x̂, q̂_{1−α/2}), where x̂ ≈ 𝔼[x | y] minimizes the MSE loss between the prediction from the noisy observation y and the ground truth x, and q̂_{α/2}, q̂_{1−α/2} are the α/2 and 1 − α/2 quantile regressors of x, respectively [34, 45, 6].

Both models are composed of the same NCSN++ backbone [53] with dropout p = 0.10 for a fair comparison. We then fine-tune the original score network s(x̃, t) according to the N-Conffusion algorithm proposed by Horwitz and Hoshen [26] such that, similarly to the image regressor f, the resulting time-conditional predictor s̃(y, t) = (q̂_{α/2}, q̂_{1−α/2}) estimates the α/2 and 1 − α/2 quantile regressors of x. Finally, in order to compare with MC-Dropout, we activate the dropout layers in the image regressor f at inference time, and estimate the mean x̄ and standard deviation σ̂ over 128 samples x̂₁, …, x̂₁₂₈. To summarize, we compare K-RCPS and RCPS on the following families of nested set predictors:

Quantile Regression (QR)

ℐ_{λ,QR}(y)_j = [x̂_j − λ_j (q̂_{α/2})_j, x̂_j + λ_j (q̂_{1−α/2})_j],  (17)

where f(y, t) = (q̂_{α/2}, x̂, q̂_{1−α/2}).

MC-Dropout

ℐ_{λ,MC-Dropout}(y)_j = [x̄_j − λ_j σ̂_j, x̄_j + λ_j σ̂_j],  (18)

where x̄, σ̂ are the sample mean and standard deviation over 128 samples x̂₁, …, x̂₁₂₈ obtained by activating the dropout layers in the image regressor f.

N-Conffusion  We compare two different parametrizations, multiplicative and additive:

ℐ_{λ,Conffusion}^{multiplicative}(y)_j = [(q̂_{α/2})_j / λ_j, λ_j (q̂_{1−α/2})_j]  (19)

and

ℐ_{λ,Conffusion}^{additive}(y)_j = [(q̂_{α/2})_j − λ_j, (q̂_{1−α/2})_j + λ_j],  (20)

where s̃(y, t) = (q̂_{α/2}, q̂_{1−α/2}) is the score network fine-tuned by means of quantile regression on 1000 additional samples.

Naive quantiles

ℐ_{λ,naive}(y)_j = [l̂_j − λ_j, û_j + λ_j],  (21)

where l̂, û are the naive (i.e., not calibrated) α/2 and 1 − α/2 entrywise empirical quantiles computed on 128 samples from the diffusion model.

Calibrated quantiles

ℐ_λ^α(y)_j = [l̂_{j,α} − λ_j, û_{j,α} + λ_j],  (22)

where l̂_α, û_α are the entrywise calibrated quantiles computed on 128 samples from the diffusion model as described in Equation 9 (see Figure 3 for some examples).

Figure 4: Example optimal λ̂_K for K ∈ {4, 8, 32}, n_opt = 256, and d_opt = 100, with respective conformalized uncertainty maps ℐ_{λ̂}^α(y) = [l̂_{j,α} − (λ̂_K)_j, û_{j,α} + (λ̂_K)_j]. (a) CelebA dataset (α = 0.10, ε = 0.10). (b) AbdomenCT-1K dataset (α = 0.20, ε = 0.05). With probability at least 90%, no more than an ε portion of the ground truth pixels will fall outside of ℐ_{λ̂}^α on future, unseen samples.

We include further details on the datasets, the models, and the training and sampling procedures in Appendix D. The implementation of K-RCPS, with all code and data necessary to reproduce the experiments, is available at https://github.com/Sulam-Group/k-rcps.

We compare all models and calibration procedures on 20 random draws of calibration and validation sets 𝒮_cal, 𝒮_val of size n_cal and n_val, respectively. We remark that for the K-RCPS procedure, n_opt samples from 𝒮_cal will be used to solve the optimization problem (P_K). It follows that for a fixed n_cal, the concentration inequality used in the K-RCPS procedure will be looser compared to the one in the RCPS algorithm. We will show that there remains a clear benefit to using the K-RCPS algorithm in terms of mean interval length given the same amount of calibration data (i.e., even as the concentration bound becomes looser). In these experiments, we construct the membership matrix M by assigning each feature j ∈ [d] to the respective k-th quantile, k = 1, …, K, of the entrywise empirical estimate of the risk on 𝒮_opt. Furthermore, even though (P_K) is low-dimensional (i.e., K ≪ d), the number of constraints grows as d n_opt, which quickly makes the computation of λ̃_K inefficient and time-consuming (e.g., for the AbdomenCT-1K dataset, d n_opt ∼ 10⁸ when n_opt = 128, a mild number of samples to optimize over). In practice, we randomly subsample a small number of features d_opt ≪ d stratified by membership, which drastically speeds up computation. In the following experiments, we set d_opt ∈ {50, 100} and use K ∈ {4, 8, 32}, which reduces the runtime of solving the optimization problem to less than a second for both datasets. Finally, we pick the γ that minimizes the objective function over 16 values equally spaced in [0.3, 0.7]. These heuristics make the runtime of K-RCPS comparable to that of RCPS, with a small overhead to solve the reduced (P_K) problem (potentially multiple times to optimize γ).
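The membership construction described above can be sketched as follows, assigning each of the d features to one of K bins according to the quantile of its entrywise empirical risk on 𝒮_opt (a simplified sketch; names are our own):

```python
import numpy as np

def quantile_membership(risk, K):
    """risk: (d,) entrywise empirical risk estimated on S_opt.
    Returns a (d, K) one-hot membership matrix M grouping features by risk quantile."""
    d = risk.shape[0]
    # inner quantile edges splitting the risk values into K equal-mass bins
    inner_edges = np.quantile(risk, np.linspace(0.0, 1.0, K + 1)[1:-1])
    bins = np.searchsorted(inner_edges, risk, side="right")  # bin index in 0..K-1
    M = np.zeros((d, K))
    M[np.arange(d), bins] = 1.0
    return M
```

Each feature then inherits the calibration parameter (λ̃_K)_k of its bin via M λ̃_K.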

Figure G.1 shows that all combinations of notions of uncertainty and calibration procedures control the risk, as promised. In particular, we set δ = 0.10 for both datasets, and ε = 0.10, 0.05 for the CelebA and AbdomenCT-1K datasets, respectively. We repeat all calibration procedures over 20 random samples of 𝒮_cal, 𝒮_val, with n_val = 128, and n_cal = 640 or n_cal = 512 for the CelebA or AbdomenCT-1K dataset, respectively. Figure 4 showcases some example λ̂_K's obtained by running the K-RCPS procedure with K = 4, 8, and 32 quantiles alongside their respective conformalized uncertainty maps from 𝒮_val. We can appreciate how, for both datasets, λ̂_K captures information about the structure of the data distribution (e.g., eyes and lips for the CelebA dataset, and the position of the lungs and heart for the AbdomenCT-1K dataset). Finally, we compare all baselines and calibration procedures in terms of the guarantees each of them provides and their mean interval length. In particular, we report whether each notion of uncertainty provides guarantees over a diffusion model or not. Note that naive and calibrated quantiles are the only notions of uncertainty that precisely provide guarantees on the samples from a diffusion model. Furthermore, calibrated quantiles are the only method that ensures entrywise coverage on future samples for the same noisy observation. For K-RCPS, we perform a grid search over n_opt ∈ {128, 256}, d_opt ∈ {50, 100}, and K ∈ {4, 8, 32}, and we report the optimal results in Table 1. For both datasets, K-RCPS provides the tightest intervals among methods that provide both entrywise coverage and risk control for diffusion models. When relaxing the constraint of entrywise coverage, K-RCPS still provides the tightest intervals. Across the uncertainty quantification methods that do not relate to a diffusion model, we found that K-RCPS with naive sampling provides better results on the CelebA dataset. For the AbdomenCT-1K dataset, N-Conffusion with multiplicative parametrization and RCPS provides slightly shorter intervals compared to N-Conffusion with additive parametrization and K-RCPS.
However, we stress that the intervals provided by N-Conffusion are computed for a fine-tuned model that is different from the original diffusion model, and thus provide no guarantees over the samples of the original model. We found that N-Conffusion with multiplicative parametrization underperforms on the CelebA dataset because the lower bounds do not decay fast enough, and the loss concentrates on features whose (q̂_{α/2})_j / λ_j remains above x_j.

5 Conclusions

Table 1: Comparison of all notions of uncertainty with RCPS and K-RCPS in terms of guarantees provided and mean interval length over 20 independent draws of 𝒮_cal. We refer the reader to Appendix E for a detailed discussion of the comparison with the Conffusion framework.

| Uncertainty | Diffusion model? | Entrywise coverage? | Risk control? | Calibration procedure | Mean interval length (CelebA) | Mean interval length (AbdomenCT-1K) |
| --- | --- | --- | --- | --- | --- | --- |
| QR | ✗ | ✗ | ✓ | RCPS | 0.4843 ± 0.0121 | 0.2943 ± 0.0060 |
| MC-Dropout | ✗ | ✗ | ✓ | RCPS | 0.6314 ± 0.0109 | 0.2810 ± 0.0013 |
| N-Conffusion (multiplicative) | ✗ | ✗ | ✓ | RCPS | 0.6949 ± 0.0084 | 0.1126 ± 0.0020 |
| N-Conffusion (additive) | ✗ | ✗ | ✓ | RCPS | 0.3314 ± 0.0040 | 0.1164 ± 0.0024 |
| N-Conffusion (additive) | ✗ | ✗ | ✓ | K-RCPS | 0.3131 ± 0.0056 | 0.1136 ± 0.0019 |
| Naive quantiles | ✓ | ✗ | ✓ | RCPS | 0.2688 ± 0.0068 | 0.1518 ± 0.0016 |
| Naive quantiles | ✓ | ✗ | ✓ | K-RCPS | 0.2523 ± 0.0052 | 0.1374 ± 0.0019 |
| Calibrated quantiles | ✓ | ✓ | ✓ | RCPS | 0.2762 ± 0.0059 | 0.1506 ± 0.0014 |
| Calibrated quantiles | ✓ | ✓ | ✓ | K-RCPS | 0.2644 ± 0.0067 | 0.1369 ± 0.0016 |

Diffusion models hold great potential for sampling in inverse problems, yet how to devise precise guarantees on their uncertainty has remained open. We have (i) provided calibrated intervals that guarantee coverage of future samples generated by diffusion models, and (ii) shown how to extend RCPS to K-RCPS, allowing for greater flexibility by conformalizing in higher dimensions by means of a convex surrogate problem. Notably, our results are general and hold for any data distribution and any sampling procedure, diffusion models or otherwise. When combined, these two contributions provide state-of-the-art uncertainty quantification by controlling risk with minimal mean interval length. Our contributions open the door to a variety of new problems. While we have focused on denoising problems, the application of these tools to other, more challenging restoration tasks is almost direct, since no distributional assumptions are employed. The variety of diffusion models for other conditional-sampling problems can readily be applied here too [61, 17]. Lastly, and differently from other works that explore controlling multiple risks [36], ours is the first approach to provide multidimensional control of one risk for conformal prediction, and improvements to our optimization schemes are likely possible. More generally, we envision our tools contributing to the responsible use of machine learning in modern settings.

Acknowledgments

The authors sincerely thank Anastasios Angelopoulos for insightful discussions in early stages of this work, as well as the anonymous reviewers who have helped improve our paper.

References

Agrawal et al. [2018] Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.
Anderson [1982] Brian D. O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
Angelopoulos and Bates [2021] Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
Angelopoulos et al. [2021] Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, and Lihua Lei. Learn then test: Calibrating predictive algorithms to achieve risk control. arXiv preprint arXiv:2110.01052, 2021.
Angelopoulos et al. [2022a] Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. Conformal risk control. arXiv preprint arXiv:2208.02814, 2022a.
Angelopoulos et al. [2022b] Anastasios N. Angelopoulos, Amit Pal Kohli, Stephen Bates, Michael Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv Romano. Image-to-image regression with distribution-free uncertainty quantification and applications in imaging. In International Conference on Machine Learning, pages 717–730. PMLR, 2022b.
ApS [2019] MOSEK ApS. The MOSEK optimization toolbox for MATLAB manual. Version 9.0, 2019. URL http://docs.mosek.com/9.0/toolbox/index.html.
Barber et al. [2021] Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, and Ryan J. Tibshirani. Predictive inference with the jackknife+. The Annals of Statistics, 49(1):486–507, 2021.
Barber et al. [2022] Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. arXiv preprint arXiv:2202.13415, 2022.
Bates et al. [2021] Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael Jordan. Distribution-free, risk-controlling prediction sets. Journal of the ACM (JACM), 68(6):1–34, 2021.
Bentkus [2004] Vidmantas Bentkus. On Hoeffding's inequalities. The Annals of Probability, 32(2):1650–1673, 2004.
Bishop and Nasrabadi [2006] Christopher M. Bishop and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning, volume 4. Springer, 2006.
Carlier et al. [2016] Guillaume Carlier, Victor Chernozhukov, and Alfred Galichon. Vector quantile regression: an optimal transport approach. The Annals of Statistics, 44(3):1165–1192, 2016.
Carlier et al. [2020] Guillaume Carlier, Victor Chernozhukov, Gwendoline De Bie, and Alfred Galichon. Vector quantile regression and optimal transport, from theory to numerics. Empirical Economics, pages 1–28, 2020.
Chernozhukov et al. [2017] Victor Chernozhukov, Alfred Galichon, Marc Hallin, and Marc Henry. Monge–Kantorovich depth, quantiles, ranks and signs. The Annals of Statistics, 45(1):223–256, 2017.
Corso et al. [2022] Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. DiffDock: Diffusion steps, twists, and turns for molecular docking. arXiv preprint arXiv:2210.01776, 2022.
Croitoru et al. [2022] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. arXiv preprint arXiv:2209.04747, 2022.
Croitoru et al. [2023] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
Diamond and Boyd [2016] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
Gal and Ghahramani [2016] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059. PMLR, 2016.
Genevay et al. [2016] Aude Genevay, Marco Cuturi, Gabriel Peyré, and Francis Bach. Stochastic optimization for large-scale optimal transport. Advances in Neural Information Processing Systems, 29, 2016.
Gupta et al. [2022] Chirag Gupta, Arun K. Kuchibhotla, and Aaditya Ramdas. Nested conformal prediction and quantile out-of-bag ensemble methods. Pattern Recognition, 127:108496, 2022.
Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
Hoeffding [1994] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In The Collected Works of Wassily Hoeffding, pages 409–426. Springer, 1994.
Hoogeboom et al. [2022] Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In International Conference on Machine Learning, pages 8867–8887. PMLR, 2022.
Horwitz and Hoshen [2022] Eliahu Horwitz and Yedid Hoshen. Conffusion: Confidence intervals for diffusion models. arXiv preprint arXiv:2211.09795, 2022.
Hyvärinen and Dayan [2005] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.
Ingraham et al. [2022] John Ingraham, Max Baranov, Zak Costello, Vincent Frappier, Ahmed Ismail, Shan Tie, Wujie Wang, Vincent Xue, Fritz Obermeyer, Andrew Beam, et al. Illuminating protein space with a programmable generative model. bioRxiv, 2022.
Kadkhodaie and Simoncelli [2021] Zahra Kadkhodaie and Eero Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems, 34:13242–13254, 2021.
Karatzas et al. [1991] Ioannis Karatzas, Ioannis Karatzas, Steven Shreve, and Steven E. Shreve. Brownian Motion and Stochastic Calculus, volume 113. Springer Science & Business Media, 1991.
Kawar et al. [2021a] Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. Advances in Neural Information Processing Systems, 34:21757–21769, 2021a.
Kawar et al. [2021b] Bahjat Kawar, Gregory Vaksman, and Michael Elad. Stochastic image denoising by sampling from the posterior distribution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1866–1875, 2021b.
Kazerouni et al. [2022] Amirhossein Kazerouni, Ehsan Khodapanah Aghdam, Moein Heidari, Reza Azad, Mohsen Fayyaz, Ilker Hacihaliloglu, and Dorit Merhof. Diffusion models for medical image analysis: A comprehensive survey. arXiv preprint arXiv:2211.07804, 2022.
Koenker and Bassett Jr [1978] Roger Koenker and Gilbert Bassett Jr. Regression quantiles. Econometrica: Journal of the Econometric Society, pages 33–50, 1978.
Kutiel et al. [2022] Gilad Kutiel, Regev Cohen, Michael Elad, and Daniel Freedman. What's behind the mask: Estimating uncertainty in image-to-image problems. arXiv preprint arXiv:2211.15211, 2022.
Laufer-Goldshtein et al. [2022] Bracha Laufer-Goldshtein, Adam Fisch, Regina Barzilay, and Tommi Jaakkola. Efficiently controlling multiple risks with Pareto testing. arXiv preprint arXiv:2210.07913, 2022.
Lee et al. [2022] Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence for score-based generative modeling with polynomial complexity. arXiv preprint arXiv:2206.06227, 2022.
Lei and Wasserman [2014] Jing Lei and Larry Wasserman. Distribution-free prediction bands for non-parametric regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1):71–96, 2014.
Liu et al. [2016] Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In International Conference on Machine Learning, pages 276–284. PMLR, 2016.
Liu et al. [2018] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Large-scale CelebFaces Attributes (CelebA) dataset. Retrieved August, 15(2018):11, 2018.
Ma et al. [2021] Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. AbdomenCT-1K: Is abdominal organ segmentation a solved problem? IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Metzer et al. [2022] Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-NeRF for shape-guided generation of 3D shapes and textures. arXiv preprint arXiv:2211.07600, 2022.
Pang et al. [2020] Tianyu Pang, Kun Xu, Chongxuan Li, Yang Song, Stefano Ermon, and Jun Zhu. Efficient learning of generative models via finite-difference score matching. Advances in Neural Information Processing Systems, 33:19175–19188, 2020.
Papadopoulos et al. [2002] Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive confidence machines for regression. In European Conference on Machine Learning, pages 345–356. Springer, 2002.
Romano et al. [2019] Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. Advances in Neural Information Processing Systems, 32, 2019.
Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
Rosenberg et al. [2022] Aviv A. Rosenberg, Sanketh Vedula, Yaniv Romano, and Alex M. Bronstein. Fast nonlinear vector quantile regression. arXiv preprint arXiv:2205.14977, 2022.
Shafer and Vovk [2008] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.
Sohl-Dickstein et al. [2015] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR, 2015.
Song and Ermon [2019] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
Song and Ermon [2020] Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. Advances in Neural Information Processing Systems, 33:12438–12448, 2020.
Song et al. [2020a] Yang Song, Sahaj Garg, Jiaxin Shi, and Stefano Ermon. Sliced score matching: A scalable approach to density and score estimation. In Uncertainty in Artificial Intelligence, pages 574–584. PMLR, 2020a.
Song et al. [2020b] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b.
Song et al. [2021] Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005, 2021.
Torem et al. [2022] Nadav Torem, Roi Ronen, Yoav Y. Schechner, and Michael Elad. Towards a most probable recovery in optical imaging. arXiv preprint arXiv:2212.03235, 2022.
Vovk [2015] Vladimir Vovk. Cross-conformal predictors. Annals of Mathematics and Artificial Intelligence, 74(1):9–28, 2015.
Vovk et al. [2005] Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic Learning in a Random World. Springer Science & Business Media, 2005.
Watson et al. [2022] Joseph L. Watson, David Juergens, Nathaniel R. Bennett, Brian L. Trippe, Jason Yim, Helen E. Eisenach, Woody Ahern, Andrew J. Borst, Robert J. Ragotte, Lukas F. Milles, et al. Broadly applicable and accurate protein design by integrating structure prediction networks and diffusion generative models. bioRxiv, 2022.
Xie and Li [2022] Yutong Xie and Quanzheng Li. Measurement-conditioned denoising diffusion probabilistic model for under-sampled medical image reconstruction. arXiv preprint arXiv:2203.03623, 2022.
Xu et al. [2022] Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, and Shenghua Gao. Dream3D: Zero-shot text-to-3D synthesis using 3D shape prior and text-to-image diffusion models. arXiv preprint arXiv:2212.14704, 2022.
Yang et al. [2022] Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Yingxia Shao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. Diffusion models: A comprehensive survey of methods and applications. arXiv preprint arXiv:2209.00796, 2022.
Zeng et al. [2022] Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, and Karsten Kreis. LION: Latent point diffusion models for 3D shape generation. arXiv preprint arXiv:2210.06978, 2022.

Appendix A Proofs

In this section, we include the proofs for the results presented in this paper. Herein, let ℐ : 𝒴 → 𝒳′ denote a set-valued predictor from 𝒴 ⊂ ℝ^d into a space of subsets 𝒳′ ⊆ 2^𝒳, for 𝒳 ⊂ ℝ^d.

A.1 Proof of Lemma 3.2

Let F : 𝒴 → 𝒳 be a stochastic sampling procedure from 𝒴 to 𝒳 such that, for a fixed y ∈ 𝒴, F(y) is a random vector with unknown distribution 𝒬_y. We show that for a desired miscoverage level α ∈ [0, 1], the entrywise calibrated empirical quantiles ℐ^α(y)_j = [l̂_{j,α}, û_{j,α}] defined in Equation 9 provide entrywise coverage as in Definition 3.1. That is, for each j ∈ [d] ≔ {1, …, d},

ℙ[F(y)_j ∈ ℐ^α(y)_j] ≥ 1 − α.  (23)

Proof.

The proof is a variation of the classical split conformal prediction coverage guarantee (see Angelopoulos and Bates [3], Theorem D.1). Let F₁, …, F_m, F_{m+1} be m + 1 i.i.d. samples from F(y). For a desired miscoverage level α ∈ [0, 1] and for each j ∈ [d], denote

l̂_{j,α} = inf { l : |{k : (F_k)_j ≤ l}| / m ≥ ⌊(m + 1) α/2⌋ / m }  (24)

and

û_{j,α} = inf { u : |{k : (F_k)_j ≤ u}| / m ≥ ⌈(m + 1)(1 − α/2)⌉ / m }  (25)

the ⌊(m + 1) α/2⌋/m and ⌈(m + 1)(1 − α/2)⌉/m entrywise calibrated empirical quantiles of F₁, …, F_m. Assume that for each j ∈ [d] the first m samples are ordered in ascending order, i.e., (F₁)_j < ⋯ < (F_m)_j, such that

l̂_{j,α} = (F_{⌊(m+1)α/2⌋})_j and û_{j,α} = (F_{⌈(m+1)(1−α/2)⌉})_j.  (26)

Note that by symmetry of (F₁)_j, …, (F_m)_j, it follows that (F_{m+1})_j is equally likely to fall between any of the first m samples. That is, for any two indices m₁ < m₂,

ℙ[(F_{m+1})_j ∈ [(F_{m₁})_j, (F_{m₂})_j]] = (m₂ − m₁) / (m + 1).  (27)
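This exchangeability identity can be sanity-checked by simulation (an illustration only, not part of the proof; any continuous distribution works):

```python
import numpy as np

# Estimate P[F_{m+1} falls between the m1-th and m2-th order statistics of F_1..F_m]
rng = np.random.default_rng(0)
m, m1, m2, trials = 9, 3, 7, 200_000
f = rng.standard_normal((trials, m + 1))  # i.i.d. draws in each trial
order = np.sort(f[:, :m], axis=1)         # order statistics of the first m draws
in_between = (order[:, m1 - 1] <= f[:, m]) & (f[:, m] <= order[:, m2 - 1])
p_hat = in_between.mean()                 # close to (m2 - m1)/(m + 1) = 0.4
```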

Instantiating the above equality with ℐ^α(y)_j = [l̂_{j,α}, û_{j,α}] yields

ℙ[(F_{m+1})_j ∈ ℐ^α(y)_j] = ℙ[(F_{m+1})_j ∈ [l̂_{j,α}, û_{j,α}]]  (28)
 = ℙ[(F_{m+1})_j ∈ [(F_{⌊(m+1)α/2⌋})_j, (F_{⌈(m+1)(1−α/2)⌉})_j]]  (29)
 = (⌈(m + 1)(1 − α/2)⌉ − ⌊(m + 1) α/2⌋) / (m + 1)  (30)
 ≥ (m + 1)(1 − α) / (m + 1) = 1 − α,  (31)

which concludes the proof. ∎
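As a concrete instance of the final bound: with m = 128 samples and α = 0.1, the achieved coverage level can be checked numerically.

```python
import math

m, alpha = 128, 0.1
lo = math.floor((m + 1) * alpha / 2)        # lower rank: floor(129 * 0.05) = 6
hi = math.ceil((m + 1) * (1 - alpha / 2))   # upper rank: ceil(129 * 0.95) = 123
coverage = (hi - lo) / (m + 1)              # (123 - 6)/129, slightly above 1 - alpha
assert coverage >= 1 - alpha
```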

A.2 Proof of Theorem 3.3

Recall that for a calibration set 𝒮_cal = {(x_i, y_i)}_{i=1}^n of n i.i.d. samples from an unknown distribution 𝒟 over 𝒳 × 𝒴, a loss function ℓ : 𝒳 × 𝒳′ → ℝ, and a family {ℐ_λ(y)}_{λ ∈ Λ^d} of set-valued predictors indexed by a d-dimensional vector λ = (λ₁, …, λ_d) ∈ Λ^d, Λ ⊂ ℝ̄ ≔ ℝ ∪ {±∞}, let

R(λ) = 𝔼_{(x,y) ∼ 𝒟}[ℓ(x, ℐ_λ(y))] and R̂(λ) = (1/n) Σ_{(x_i, y_i) ∈ 𝒮_cal} ℓ(x_i, ℐ_λ(y_i))  (32)

denote the risk of ℐ_λ(y) and its empirical estimate on the calibration set, respectively. Furthermore, let R̂⁺(λ) be a pointwise upper confidence bound (UCB) such that, for each fixed λ ∈ Λ^d and ∀δ ∈ [0, 1],

ℙ[R(λ) ≤ R̂⁺(λ)] ≥ 1 − δ,  (33)

as presented in Equation 6. Equivalently to Definition 2.2, for a risk level ε ≥ 0, we say that ℐ_λ(y) is an (ε, δ)-RCPS if

ℙ_{𝒮_cal}[R(λ) ≤ ε] ≥ 1 − δ.  (34)

Given fixed vectors λ̃ ∈ ℝ^d and η ∈ ℝ^d, η ≥ 0, denote by Λ̃ = {λ̃ + ω η : ω ∈ Λ} the line with offset λ̃ and direction η parametrized by ω. We show that for entrywise monotonically nonincreasing loss functions, and for the family of set-valued predictors of the form ℐ_λ(y) = [l̂_j − λ_j, û_j + λ_j] for some lower and upper bounds l̂_j < û_j that may depend on y, if

λ̂ = arg min_{λ ∈ Λ̃} Σ_{j ∈ [d]} λ_j s.t. R̂⁺(λ + β η) < ε, ∀β ≥ 0,  (35)

then ℐ_{λ̂}(y) is an (ε, δ)-RCPS. We start by recalling the following definitions.

Definition A.1 (Entrywise monotonically nonincreasing function). A loss function $\ell$ is entrywise monotonically nonincreasing if, for a fixed $x$, $\forall j \in [d]$

$$\mathcal{I}(y)_j \subset \mathcal{I}'(y)_j \implies \ell(x, \mathcal{I}'(y)) \leq \ell(x, \mathcal{I}(y)). \tag{36}$$

Definition A.2 (Entrywise nesting property). A family of set predictors $\{\mathcal{I}_{\boldsymbol{\lambda}}(y)\}_{\boldsymbol{\lambda} \in \Lambda^d}$ is entrywise nested if, for a fixed $y$, $\forall j \in [d]$

$$\lambda_{j,1} < \lambda_{j,2} \implies \mathcal{I}_{[\lambda_{j,1}, \boldsymbol{\lambda}_{-j}]}(y)_j \subset \mathcal{I}_{[\lambda_{j,2}, \boldsymbol{\lambda}_{-j}]}(y)_j, \tag{37}$$

where $[\lambda_j, \boldsymbol{\lambda}_{-j}]$ is the vector that takes value $\lambda_j$ in its $j$th entry and $\boldsymbol{\lambda}_{-j}$ in its complement $-j \coloneqq [d] \setminus \{j\}$.

Proof.

The proof is a high-dimensional extension of the validity of the original RCPS calibration procedure. Note that the family of set-valued predictors $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) \coloneqq [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$ satisfies the entrywise nesting property in Definition A.2. For $\hat{\boldsymbol{\lambda}}$ chosen as in Equation 35, denote $L: \Lambda \to \mathbb{R}$ the one-dimensional function such that

$$L(\beta) \coloneqq R(\hat{\boldsymbol{\lambda}} + \beta \boldsymbol{\eta}) \quad \text{and} \quad \hat{L}^+(\beta) \coloneqq \hat{R}^+(\hat{\boldsymbol{\lambda}} + \beta \boldsymbol{\eta}). \tag{38}$$

It follows that $L$ is monotonically nonincreasing because $\ell$ is entrywise monotonically nonincreasing by assumption, and $\boldsymbol{\eta}$ belongs to the positive orthant. Furthermore, $\mathbb{P}[L(\beta) \leq \hat{L}^+(\beta)] \geq 1 - \delta$ by definition of $\hat{R}^+(\boldsymbol{\lambda})$. Denote

$$\beta^* \coloneqq \inf\{\beta \in \Lambda : L(\beta) \leq \epsilon\} \tag{39}$$

and suppose $R(\hat{\boldsymbol{\lambda}}) = L(0) > \epsilon$. By monotonicity of $L$ it follows that $\beta^* > 0$, and by definition of $\hat{\boldsymbol{\lambda}}$, $\hat{L}^+(\beta^*) = \hat{R}^+(\hat{\boldsymbol{\lambda}} + \beta^* \boldsymbol{\eta}) < \epsilon$. However, since $L(\beta^*) \geq \epsilon$ and $\mathbb{P}[L(\beta^*) \leq \hat{L}^+(\beta^*)] \geq 1 - \delta$, we conclude that this event happens with probability at most $\delta$. Hence, $\mathbb{P}[R(\hat{\boldsymbol{\lambda}}) \leq \epsilon] \geq 1 - \delta$ and $\mathcal{I}_{\hat{\boldsymbol{\lambda}}}(y)$ is an $(\epsilon, \delta)$-RCPS.

Lastly, it is easy to see that $\hat{\boldsymbol{\lambda}}$ minimizes the mean interval length $\bar{I}(\boldsymbol{\lambda})$ over $\tilde{\boldsymbol{\Lambda}}$. Note that

$$\bar{I}(\boldsymbol{\lambda}) \coloneqq \frac{1}{d} \left[ \sum_{j \in [d]} (\hat{u}_j - \hat{l}_j) + 2 \sum_{j \in [d]} \lambda_j \right], \tag{40}$$

and the statement follows by definition. ∎
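The backtracking search implied by Equation 35 can be sketched as follows. This is our own illustrative code, assuming a Hoeffding UCB (Equation 47) and an entrywise monotonically nonincreasing loss, so that checking the constraint while decreasing $\omega$ certifies it for all $\beta \geq 0$:

```python
import numpy as np

def hoeffding_ucb(losses, delta):
    """Pointwise Hoeffding UCB for losses bounded in [0, 1]."""
    n = len(losses)
    return float(np.mean(losses)) + np.sqrt(np.log(1.0 / delta) / (2 * n))

def calibrate_along_line(lam_tilde, eta, empirical_losses, eps, delta,
                         omega_max=10.0, d_omega=0.01):
    """Search along the line lam_tilde + omega * eta (Equation 35):
    decrease omega while the UCB stays below eps, then step back once.
    `empirical_losses(lam)` is a hypothetical callback returning the
    per-sample calibration losses at lam; monotonicity of the loss in
    the direction eta guarantees the constraint for all beta >= 0."""
    omega = omega_max
    while omega >= 0:
        lam = lam_tilde + omega * eta
        if hoeffding_ucb(empirical_losses(lam), delta) >= eps:
            break
        omega -= d_omega
    return lam_tilde + (omega + d_omega) * eta
```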

A.3 Proof that $\ell_\gamma$ is a convex upper bound to $\ell_{01}$ (see Equation 16)

Recall that for a family of set-valued predictors $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) \coloneqq [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$ indexed by a $d$-dimensional vector $\boldsymbol{\lambda} \coloneqq (\lambda_1, \dots, \lambda_d) \in \Lambda^d$, $\Lambda \subset \bar{\mathbb{R}} \coloneqq \mathbb{R} \cup \{\pm\infty\}$, for some general lower and upper bounds $\hat{l}_j < \hat{u}_j$ that may depend on $y$, we define $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ to be

$$\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y)) \coloneqq \frac{1}{d} \sum_{j \in [d]} \left[ \frac{2(1+q)}{I(\boldsymbol{\lambda})_j} |x_j - c_j| - q \right]_+, \tag{41}$$

where $q \coloneqq \gamma/(1-\gamma)$, $\gamma \in [0, 1)$, $I(\boldsymbol{\lambda})_j \coloneqq \hat{u}_j - \hat{l}_j + 2\lambda_j$, $c_j \coloneqq (\hat{u}_j + \hat{l}_j)/2$, and $[u]_+ \coloneqq \max(0, u)$. First, we show that $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ is convex in $\boldsymbol{\lambda}$ for $\boldsymbol{\lambda} \geq 0$.

Proof.

Note that $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ is separable in $\boldsymbol{\lambda}$. Hence, it suffices to show that $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ is convex in each entry $\lambda_j$. That is, we want to show that

$$\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))_j \coloneqq \left[ \frac{2(1+q)}{\hat{u}_j - \hat{l}_j + 2\lambda_j} |x_j - c_j| - q \right]_+ \tag{42}$$

is convex in $\lambda_j$. Note that:

- the term $1/(\hat{u}_j - \hat{l}_j + 2\lambda_j)$ behaves like $1/\lambda_j$, hence it is convex for $\lambda_j \geq -(\hat{u}_j - \hat{l}_j)/2$,
- $C \coloneqq 2(1+q)|x_j - c_j|$ is nonnegative, hence $C \cdot 1/(\hat{u}_j - \hat{l}_j + 2\lambda_j)$ is convex,
- $q$ does not depend on $\lambda_j$, hence $C \cdot 1/(\hat{u}_j - \hat{l}_j + 2\lambda_j) - q$ is still convex, and finally
- the positive part $[u]_+ \coloneqq \max(0, u)$ is a convex function of its argument, hence $[C \cdot 1/(\hat{u}_j - \hat{l}_j + 2\lambda_j) - q]_+$ is convex.

We conclude that $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))_j$ is convex in each entry $j \in [d]$ for $\lambda_j \geq 0$, hence $\ell_\gamma(x, \mathcal{I}_{\boldsymbol{\lambda}}(y))$ is convex for $\boldsymbol{\lambda} \geq 0$. ∎

Note that โ„“ ๐›พ is an upper bound to โ„“ 01 by construction. One can see that โˆ€ ๐€ , โ„“ ๐›พ โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) โ‰ฅ โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) .

Proof.

We will now show that โ„“ ๐›พ โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘— โ‰ฅ โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘— entrywise. To this end, we will show that both functions attain the same value (i.e., 1) at the extremes of the intervals (i.e., ๐‘™ ^ โˆ’ ๐€ and ๐‘ข ^ + ๐€ ), and use the fact that โ„“ ๐›พ โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘— is convex nonnegative.

First, note that โˆ€ ๐€ , it holds that

โ„“ ๐›พ โข ( ๐‘™ ^ โˆ’ ๐€ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘—

โ„“ ๐›พ โข ( ๐‘ข ^ + ๐€ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘—

[ 2 โข ( 1 + ๐‘ž ) ๐ผ โข ( ๐€ ) ๐‘— โ‹… | ๐ผ โข ( ๐€ ) ๐‘— 2 | โˆ’ ๐‘ž ] +

1 ,

(43)

โ„“ 01 โข ( ๐‘™ ^ โˆ’ ๐€ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘—

โ„“ 01 โข ( ๐‘ข ^ + ๐€ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘—

1 .

(44)

Furthermore,
โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) ๐‘—

0 , โˆ€ ๐‘ฅ ๐‘— โˆˆ [ ๐‘™ ^ ๐‘— โˆ’ ๐œ† ๐‘— , ๐‘ข ^ ๐‘— + ๐œ† ๐‘— ]

(45)

by definition. We conclude that โ„“ ๐›พ upper bounds โ„“ 01 because โ„“ ๐›พ is convex, nonnegative, it intersects โ„“ 01 at ๐‘™ ^ โˆ’ ๐€ and ๐‘ข ^ + ๐€ (for which both losses are equal to 1), and โ„“ 01 is exactly 0 between ๐‘™ ^ ๐‘— โˆ’ ๐œ† ๐‘— and ๐‘ข ^ ๐‘— + ๐œ† ๐‘— . โˆŽ
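As a numerical sanity check, both losses can be sketched in a few lines of numpy (our own code, with general entrywise bounds $\hat{l}, \hat{u}$ as in Equation 41):

```python
import numpy as np

def loss_01(x, l, u, lam):
    """01 loss: fraction of entries outside their intervals."""
    inside = (x >= l - lam) & (x <= u + lam)
    return float(np.mean(~inside))

def loss_gamma(x, l, u, lam, gamma):
    """Convex surrogate of Equation 41 with q = gamma / (1 - gamma)."""
    q = gamma / (1 - gamma)
    I = u - l + 2 * lam              # interval lengths I(lam)_j
    c = (u + l) / 2                  # interval centers c_j
    per_entry = np.maximum(2 * (1 + q) / I * np.abs(x - c) - q, 0.0)
    return float(np.mean(per_entry))
```

Evaluating both on random draws illustrates the bound $\ell_\gamma \geq \ell_{01}$ proved above.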

Appendix B Risk Controlling Prediction Sets (RCPS) [10]

In this section, we present the original RCPS procedure introduced in Bates et al. [10]. Let $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$, $\mathcal{X}' \subseteq 2^{\mathcal{X}}$ be a monotonically nonincreasing loss function (see Equation 4) over $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y} \subset \mathbb{R}^d$, and let $\{\mathcal{I}_\lambda(y)\}_{\lambda \in \Lambda}$, $\Lambda \subset \bar{\mathbb{R}}$ be a family of set-valued predictors $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$ that satisfies the nesting property in Equation 5.

Algorithm 2 summarizes the original conformal risk control procedure of Bates et al. [10] for a general bounding function $\text{UCB}(n, \delta, \hat{R}(\lambda))$ that accepts the number of samples in a calibration set $\mathcal{S}_{\text{cal}} \coloneqq \{(x_i, y_i)\}_{i=1}^n$ of $n$ i.i.d. samples from an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, a failure probability $\delta \in [0, 1]$, and the empirical estimate $\hat{R}(\lambda) \coloneqq \frac{1}{n} \sum_{i=1}^n \ell(x_i, \mathcal{I}_\lambda(y_i))$ evaluated on $\mathcal{S}_{\text{cal}}$, and that returns a pointwise upper confidence bound $\hat{R}^+(\lambda)$ satisfying

$$\mathbb{P}\left[R(\lambda) \leq \hat{R}^+(\lambda)\right] \geq 1 - \delta \tag{46}$$

as presented in Equation 6. For example, for losses bounded above by 1, Hoeffding's inequality [24] yields

$$\hat{R}^+(\lambda) \coloneqq \text{UCB}(n, \delta, \hat{R}(\lambda)) = \hat{R}(\lambda) + \sqrt{\frac{1}{2n} \log\frac{1}{\delta}}. \tag{47}$$

In practice, tighter alternatives exist (see Bates et al. [10], Section 3.1 for a thorough discussion).
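For concreteness, the calibration loop of Algorithm 2 with the Hoeffding bound of Equation 47 can be sketched as follows. This is our own illustrative code; `losses_at` is a hypothetical callback returning the $n$ calibration losses at a given $\lambda$, not the authors' API:

```python
import numpy as np

def hoeffding_ucb(n, delta, r_hat):
    """UCB(n, delta, R_hat(lambda)) of Equation 47, losses bounded by 1."""
    return r_hat + np.sqrt(np.log(1.0 / delta) / (2 * n))

def rcps(losses_at, eps, delta, n, lam_max, d_lam=0.01):
    """Sketch of Algorithm 2: decrease lambda from a conservative lam_max
    while the UCB of the empirical risk stays at or below eps, then
    return the last lambda whose UCB satisfied the constraint."""
    lam = lam_max
    r_ucb = 0.0
    while r_ucb <= eps:
        lam -= d_lam
        r_hat = float(np.mean(losses_at(lam)))
        r_ucb = hoeffding_ucb(n, delta, r_hat)
    return lam + d_lam
```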

Algorithm 2 $(\epsilon, \delta)$-RCPS (see [6], Algorithm 2)

1: Input: risk level $\epsilon \geq 0$, failure probability $\delta \in [0, 1]$, calibration set $\mathcal{S}_{\text{cal}} \coloneqq \{(x_i, y_i)\}_{i=1}^n$ of $n$ i.i.d. samples, monotonically nonincreasing loss function $\ell$, family of nested set-valued predictors $\{\mathcal{I}_\lambda\}_{\lambda \in \Lambda}$, $\Lambda \subset \bar{\mathbb{R}}$, initial (large) value $\lambda_{\max}$, stepsize $d\lambda > 0$.
2: $\lambda \leftarrow \lambda_{\max}$
3: $\hat{R}^+(\lambda) \leftarrow 0$
4: while $\hat{R}^+(\lambda) \leq \epsilon$ do
5:  $\lambda \leftarrow \lambda - d\lambda$
6:  $\hat{R}(\lambda) \leftarrow \frac{1}{n} \sum_{(x_i, y_i) \in \mathcal{S}_{\text{cal}}} \ell(x_i, \mathcal{I}_\lambda(y_i))$
7:  $\hat{R}^+(\lambda) \leftarrow \text{UCB}(n, \delta, \hat{R}(\lambda))$
8: end while
9: $\hat{\lambda} \leftarrow \lambda + d\lambda$
10: return $\hat{\lambda}$

Appendix C Computing the UCB

In this section, we include a detailed analytical example on how one can compute the UCB ๐‘… ^ 01 + โข ( ๐€ + ๐œผ ) by means of concentration inequalities. Recall that, for a family of nested set predictors { โ„ ๐€ โข ( ๐‘ฆ ) } ๐€ โˆˆ ฮ› ๐‘‘ indexed by a vector ๐€

( ๐œ† 1 , โ€ฆ , ๐œ† ๐‘‘ ) , the 01 loss โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ โข ( ๐‘ฆ ) ) counts the number of pixels in ๐‘ฅ that fall outside of their respective intervals in โ„ ๐€ โข ( ๐‘ฆ ) . For example, choosing โ„ ๐€ โข ( ๐‘ฆ ) ๐‘—

[ ๐‘™ ^ ๐‘— โˆ’ ๐œ† ๐‘— , ๐‘ข ^ ๐‘— + ๐œ† ๐‘— ] as proposed in Equation 11, given a vector ๐œผ

( ๐œ‚ 1 , โ€ฆ , ๐œ‚ ๐‘‘ ) yields

โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ + ๐œผ โข ( ๐‘ฆ ) )

1 ๐‘‘ โข โˆ‘ ๐‘— โˆˆ [ ๐‘‘ ] ๐Ÿ™ โข [ ๐‘ฅ ๐‘— โˆ‰ [ ๐‘™ ^ ๐‘— โˆ’ ( ๐œ† ๐‘— + ๐œ‚ ๐‘— ) , ๐‘ข ^ ๐‘— + ( ๐œ† ๐‘— + ๐œ‚ ๐‘— ) ] ] .

(48)

Then, ๐‘… 01 โข ( ๐€ + ๐œผ )

๐”ผ โข [ โ„“ 01 โข ( ๐‘ฅ , โ„ ๐€ + ๐œผ โข ( ๐‘ฆ ) ) ] denotes the risk of โ„ ๐€ + ๐œผ and its empirical estimate over a calibration set ๐’ฎ cal

{ ( ๐‘ฅ ๐‘– , ๐‘ฆ ๐‘– ) } ๐‘–

1 ๐‘› of ๐‘› i.i.d. samples from an unknown distribution ๐’Ÿ over ๐’ณ ร— ๐’ด is

๐‘… ^ 01 โข ( ๐€ + ๐œผ )

1 ๐‘› โข โˆ‘ ๐‘– โˆˆ [ ๐‘› ] โ„“ 01 โข ( ๐‘ฅ ๐‘– , โ„ ๐€ + ๐œผ โข ( ๐‘ฆ ๐‘– ) ) .

(49)

In turn, for a user-specified failure probability ๐›ฟ โˆˆ [ 0 , 1 ] , the UCB ๐‘… 01 + โข ( ๐€ + ๐œผ ) can be obtained by means of common concentration inequalities. For example, Hoeffdingโ€™s inequality implies

๐‘… ^ 01 + โข ( ๐€ + ๐œผ )

๐‘… ^ 01 โข ( ๐€ + ๐œผ ) + 1 2 โข ๐‘› โข log โก ( 1 ๐›ฟ ) .

(50) Appendix DExperimental Details

In this section, we include further experimental details. All experiments were performed on a private cluster with 8 NVIDIA RTX A5000 with 24 GB of memory.

D.1 Datasets

Figure D.1: Example images. (a) CelebA dataset. (b) AbdomenCT-1K dataset.

D.1.1 CelebA dataset

The CelebA dataset [40] (available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) contains more than $200 \times 10^3$ images of celebrity faces of size $178 \times 218$ pixels, with several landmark locations and binary attributes (e.g., eyeglasses, bangs, smiling). In this work, we center-crop all images to $128 \times 128$ pixels and normalize them to $[-1, 1]$.

D.1.2 AbdomenCT-1K dataset

The AbdomenCT-1K dataset [41] (available at https://github.com/JunMa11/AbdomenCT-1K) comprises more than 1000 abdominal CT scans (for a total of more than $200 \times 10^3$ individual images of size $512 \times 512$ pixels) aggregated from 6 existing datasets. Scans are provided in NIfTI format, so we first convert them to their Hounsfield unit (HU) values and subsequently window them with the standard abdomen setting (WW = 400, WL = 40) such that pixel intensities are normalized in $[0, 1]$ and represent the same tissue across images.

Figure D.1 shows some example images for both datasets.

D.2 Models

Recall that in this work, we train both:

- a score network $s(\tilde{x}, t) \approx \nabla_x \log p_t(\tilde{x})$ to sample from $p(x \mid y)$ as introduced in Section 2.1, and
- a modified time-conditional image regressor $f: \mathcal{Y} \times \mathbb{R} \to \mathcal{X}^3$ such that $f(y, t) \coloneqq (\hat{q}_{\alpha/2}, \hat{x}, \hat{q}_{1-\alpha/2})$, where $\hat{x} \approx \mathbb{E}[x \mid y]$ and $\hat{q}_{\alpha/2}, \hat{q}_{1-\alpha/2}$ are the entrywise $\alpha/2$ and $1-\alpha/2$ quantile regressors of $x$ [34, 45, 6], respectively.

Both models are implemented with the same U-Net-like [46] backbone: NCSN++, which was introduced by Song et al. [53] (code is available at https://github.com/yang-song/score_sde). We use the original NCSN++ configuration presented in Song et al. [53] for the CelebA dataset and, for the AbdomenCT-1K dataset, the configuration for the FFHQ dataset (available at https://github.com/NVlabs/ffhq-dataset) given the larger image size of $512 \times 512$. For the image regressor $f$, we use the original implementation of the quantile regression head used in Angelopoulos et al. [6] (available at https://github.com/aangelopoulos/im2im-uq) on top of the NCSN++ backbone. This allows us to maintain a time-conditional backbone and extend the original image regressor presented in Angelopoulos et al. [6] to all noise levels, like the score network.

D.3 Training

D.3.1 Data Augmentation

For both datasets, we augment the training data by means of random horizontal and vertical flips, and random rotations in $[-\pi/2, \pi/2]$.

D.3.2 Forward SDE

Figure D.2: Example of perturbed images via the forward SDE. (a) CelebA dataset. (b) AbdomenCT-1K dataset. The final level of noise is $\sigma^2 = 1$.

Recall that in this work, we are interested in solving the classical denoising problem where $y \coloneqq x + v_0$ is a noisy observation of a ground truth image $x$ perturbed with random Gaussian noise of known variance $\sigma_0^2$. As done in previous works [50, 51, 54, 31], we model the observation process with a variance exploding (VE) forward-time SDE

$$dx = \sqrt{\frac{d[\sigma^2(t)]}{dt}}\, dw \quad \text{where} \quad \sigma(t) \coloneqq \sigma_{\min} \left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^t, \ t \in [0, 1] \tag{51}$$

such that $\sigma(0) = \sigma_{\min}$ and $\sigma(1) = \sigma_{\max}$. In particular, we set $\sigma_{\min} \coloneqq 0$, and $\sigma_{\max} \coloneqq 90$ for the CelebA dataset and $\sigma_{\max} \coloneqq 1$ for the AbdomenCT-1K dataset. It has been shown [53] that the above forward-time SDE is equivalent to the following discrete Markov chain

$$x_t = x_{t-1} + \sqrt{\sigma_t^2 - \sigma_{t-1}^2}\, z, \quad z \sim \mathcal{N}(0, \mathbb{I}), \quad \text{for } t = 1, \dots, N \tag{52}$$

with noise levels $\{\sigma_i\}_{i=0}^N$. That is, $x_t = x + z_t$, where $z_t \sim \mathcal{N}(0, \sigma_t^2 \mathbb{I})$. Figure D.2 shows some example images from both datasets perturbed via the forward SDE described in this section.

D.3.3 Denoising Score-matching

Here, we briefly describe the loss function used to train the time-conditional score network $s(\tilde{x}, t)$. Denote $\theta \in \Theta$ the parametrization of $s$; then, following Song and Ermon [50] and Song et al. [53], we have $s(\tilde{x}, t) \coloneqq s_{\theta^*}(\tilde{x}, t)$, where

$$\theta^* \coloneqq \operatorname*{arg\,min}_{\theta \in \Theta} \mathbb{E}_{t \sim U(0,1)} \left[ \xi(t)\, \mathbb{E}_{x \sim p(x),\, x(t) \mid x} \left[ \left\| s_\theta(x(t), t) - \nabla_x \log p_t(x(t) \mid x) \right\|^2 \right] \right] \tag{53}$$

$$= \operatorname*{arg\,min}_{\theta \in \Theta} \mathbb{E}_{t \sim U(0,1)} \left[ \xi(t)\, \mathbb{E}_{x \sim p(x),\, x(t) \mid x} \left[ \left\| s_\theta(x(t), t) + \frac{x(t) - x}{\sigma^2(t)} \right\|^2 \right] \right], \tag{54}$$

where $\xi(t) \propto \sigma^2(t)$ and $U(0, 1)$ is the uniform distribution over $[0, 1]$.

D.3.4 Quantile Regression

Figure D.3: Example of results with the modified image regressor. (a) CelebA dataset. (b) AbdomenCT-1K dataset.

Similarly to above, we briefly describe the loss function used to train the modified time-conditional image regressor $f$. Denote $\theta \in \Theta$ the parametrization of $f$, and recall that for a desired $\alpha$ quantile and its respective quantile regressor $\hat{q}_\alpha$, the quantile loss function [34, 45, 6] $\ell_\alpha(x, \hat{q}_\alpha)$ is

$$\ell_\alpha(x, \hat{q}_\alpha) \coloneqq \alpha (x - \hat{q}_\alpha) \cdot \mathbb{1}[x > \hat{q}_\alpha] + (1 - \alpha)(\hat{q}_\alpha - x) \cdot \mathbb{1}[x \leq \hat{q}_\alpha]. \tag{55}$$
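The pinball loss of Equation 55 in numpy (our own sketch); its population minimizer over $\hat{q}_\alpha$ is the $\alpha$-quantile of $x$:

```python
import numpy as np

def quantile_loss(x, q, alpha):
    """Pinball loss of Equation 55: alpha * (x - q) when x > q,
    and (1 - alpha) * (q - x) when x <= q."""
    x, q = np.asarray(x, dtype=float), np.asarray(q, dtype=float)
    return np.where(x > q, alpha * (x - q), (1 - alpha) * (q - x))
```

For example, minimizing the empirical loss of a uniform sample over a grid of candidate values recovers (approximately) the $\alpha$-quantile.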

For the image regressor $f$, we set $\alpha \coloneqq 0.10$ and use the original implementation of the quantile regression head from Angelopoulos et al. [6] (available at https://github.com/aangelopoulos/im2im-uq), which minimizes the multi-loss objective

$$\mathcal{L}(x, f(y, t)) \coloneqq \ell_{\alpha/2}(x, \hat{q}_{\alpha/2}) + (x - \hat{x})^2 + \ell_{1-\alpha/2}(x, \hat{q}_{1-\alpha/2}) \tag{56}$$

on top of the NCSN++ backbone. This allows us to maintain a time-conditional backbone and extend the original image regressor presented in Angelopoulos et al. [6] to all noise levels, like the score network. Then, $f(y, t) \coloneqq f_{\theta^*}(y, t)$ where

$$\theta^* \coloneqq \operatorname*{arg\,min}_{\theta \in \Theta} \mathbb{E}_{t \sim U(0,1)}\left[\mathcal{L}(x, f_\theta(x(t), t))\right]. \tag{57}$$

D.4 Sampling

Figure D.4: Example of images sampled via Algorithm 3. (a) CelebA dataset. (b) AbdomenCT-1K dataset.

Here, we briefly describe the sampling procedure used in this work to solve the conditional reverse-time SDE

$$dx = \left[h(x, t) - g(t)^2 \nabla_x \log p_t(x \mid y)\right] dt + g(t)\, d\bar{w} \tag{58}$$

where $y$ is the initial noisy observation with known noise level $\sigma_0^2$, and $g(t) \coloneqq \sqrt{d[\sigma^2(t)]/dt}$, $h(x, t) \coloneqq 0$ as in Equation 51. Although several different discretization schemes exist [53], we use the classical Euler-Maruyama discretization [30], since the contribution of this work is not focused on improving existing diffusion models. Recall that for a general Itô process

$$dx = h(x, t)\, dt + g(t)\, dw, \tag{59}$$

its Euler-Maruyama discretization with step $\Delta t$ is

$$x_{t+1} = x_t + h(x, t)\, \Delta t + g(t) \sqrt{\Delta t}\, z, \quad z \sim \mathcal{N}(0, \mathbb{I}). \tag{60}$$

Furthermore, under the forward SDE described in Equation 51, and as shown by previous works [32, 31, 29], it holds that

$$\nabla_x \log p_t(x(t) \mid y) = \nabla_x \log p_t(x(t)) + \frac{y - x(t)}{\sigma_0^2 - \sigma^2(t)}. \tag{61}$$

We conclude that the reverse-time Euler-Maruyama discretization of the conditional SDE above is

$$x_{t-1} = x_t + g_t^2 \left[\nabla_x \log p_t(x_t) + \frac{y - x_t}{\sigma_0^2 - \sigma_t^2}\right] \Delta t + g_t \sqrt{\Delta t}\, z, \quad z \sim \mathcal{N}(0, \mathbb{I}). \tag{62}$$

Algorithm 3 implements the sampling procedure for a general score network $s(\tilde{x}, t)$.
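The discretization in Equation 62 can be sketched in numpy as follows (our own illustrative code; `score` stands in for the trained network $s(\tilde{x}, t)$, and the parameter values in the usage example are hypothetical):

```python
import numpy as np

def denoise_reverse_sde(y, score, sigma_0, sigma_min, sigma_max, N, rng):
    """Euler-Maruyama discretization of the conditional reverse-time VE SDE
    (Equation 62), starting from the noisy observation y at the time t0
    matching its noise level sigma_0."""
    t0 = (np.log(sigma_0) - np.log(sigma_min)) / (np.log(sigma_max) - np.log(sigma_min))
    dt = 1.0 / N
    n = int(np.floor(t0 / dt))
    x = y.copy()
    for i in range(n, 0, -1):
        z = rng.standard_normal(x.shape)
        t_i = i * dt
        sigma_i = sigma_min * (sigma_max / sigma_min) ** t_i
        g_i = sigma_i * np.sqrt(2 * np.log(sigma_max / sigma_min))
        drift = score(x, t_i) + (y - x) / (sigma_0 ** 2 - sigma_i ** 2)
        x = x + g_i ** 2 * drift * dt + g_i * np.sqrt(dt) * z
    return x
```

As a toy sanity check, with the oracle score of a point mass at the origin, $s(x, t) = -x / \sigma^2(t)$, the sampler contracts a noisy observation toward the origin.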

Algorithm 3 Denoising Reverse-time SDE

1: Input: observation $y$, initial noise level $\sigma_0$, $\sigma_{\max}$, $\sigma_{\min}$, number of steps $N$
2: $t_0 \leftarrow (\log \sigma_0 - \log \sigma_{\min}) / (\log \sigma_{\max} - \log \sigma_{\min})$
3: $\Delta t \leftarrow 1/N$
4: $n \leftarrow \lfloor t_0 / \Delta t \rfloor$
5: $x_n \leftarrow y$
6: for $i = n, \dots, 1$ do
7:  Draw $z_i \sim \mathcal{N}(0, \mathbb{I})$
8:  $t_i \leftarrow i \Delta t$
9:  $\sigma_i \leftarrow \sigma_{\min} \cdot (\sigma_{\max}/\sigma_{\min})^{t_i}$
10:  $g_i \leftarrow \sigma_i \cdot \sqrt{2 \log(\sigma_{\max}/\sigma_{\min})}$
11:  $x_{i-1} \leftarrow x_i + g_i^2 \left[s(x_i, t_i) + (y - x_i)/(\sigma_0^2 - \sigma_i^2)\right] \Delta t + g_i \sqrt{\Delta t}\, z_i$
12: end for
13: return $x_0$

Appendix E Comparison with Conffusion

In this section, we discuss the differences between the Conffusion framework proposed by Horwitz and Hoshen [26] and the contributions of this paper. Although similar in spirit and motivation, the two are fundamentally different and address separate and distinct questions. Conffusion deploys ideas from quantile regression [34] to fine-tune an existing score network and obtain an image regressor equipped with heuristic uncertainty intervals. Such intervals can then be conformalized to provide risk control. On the other hand, our $K$-RCPS is a novel high-dimensional calibration procedure for any stochastic sampler and any notion of uncertainty (i.e., it is agnostic to the notion of uncertainty), including diffusion models and quantile regression. In this paper, we present $K$-RCPS for diffusion models given their remarkable performance in solving inverse problems via conditional sampling.

In order to contextualize the choices of parametrization for the families of set predictors in Equations 19 and 20, we highlight a gap between the presentation of Conffusion in the arXiv paper available at https://arxiv.org/abs/2211.09795v1 and the official code release at https://github.com/eliahuhorwitz/Conffusion. In particular, Section 3.2 of Horwitz and Hoshen [26] introduces the following family of set predictors:

$$\mathcal{I}_{\boldsymbol{\lambda}, \text{Conffusion}}^{\text{arXiv}}(y)_j \coloneqq \left[\lambda_j (\hat{q}_{\alpha/2})_j,\ \lambda_j (\hat{q}_{1-\alpha/2})_j\right], \tag{63}$$

which does not satisfy the nesting property in Equation 5. It is easy to see that, without any further assumptions, as $\lambda_j$ increases, $\mathcal{I}_{\boldsymbol{\lambda}, \text{Conffusion}}^{\text{arXiv}}(y)_j$ does not cover the interval $[0, 1]$, and the 01 loss in Equation 10 is not entrywise monotonically nonincreasing. This renders the RCPS procedure (and most calibration procedures) not applicable as presented in the arXiv paper. Since this family of set predictors is thus not applicable, we resorted to the authors' GitHub release, where the parametrization is different (see https://github.com/eliahuhorwitz/Conffusion/blob/fffe5c946219cf9dead1a1c921a131111e31214e/inpainting_n_conffusion/core/calibration_masked.py#L28). More precisely, the GitHub implementation reflects the multiplicative parametrization presented in Equation 19, which does indeed satisfy the entrywise nesting property. Finally, we additionally propose the additive parametrization in Equation 20 to showcase how $K$-RCPS is agnostic to the notion of uncertainty and can be used in conjunction with Conffusion.

Appendix F Erratum of Theorem 3.3

In this section, we address the differences between the version of Theorem 3.3 presented in this manuscript and the published paper available at https://proceedings.mlr.press/v202/teneggi23a.html, which contains a small typo. We include the two theorems side by side, with the differences highlighted.

Theorem (This manuscript). Let $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$, $\mathcal{X}' \subseteq 2^{\mathcal{X}}$, $\mathcal{X} \subset \mathbb{R}^d$ be an entrywise monotonically nonincreasing function and let $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) \coloneqq [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$ be a family of set-valued predictors $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$, $\mathcal{Y} \subset \mathbb{R}^d$, indexed by $\boldsymbol{\lambda} \in \Lambda^d$, $\Lambda \subset \bar{\mathbb{R}}$, for some lower and upper bounds $\hat{l}_j < \hat{u}_j$ that may depend on $y$. Given $\tilde{\boldsymbol{\Lambda}} \coloneqq \tilde{\boldsymbol{\lambda}} + \omega \boldsymbol{\eta}$, $\omega \in \Lambda$, for fixed $\tilde{\boldsymbol{\lambda}} \in \mathbb{R}^d$ and $\boldsymbol{\eta} \in \mathbb{R}^d$, $\boldsymbol{\eta} \geq 0$, if

$$\hat{\boldsymbol{\lambda}} \coloneqq \operatorname*{arg\,min}_{\boldsymbol{\lambda} \in \tilde{\boldsymbol{\Lambda}}} \sum_{j \in [d]} \lambda_j \quad \text{s.t.} \quad \hat{R}^+(\boldsymbol{\lambda} + \beta \boldsymbol{\eta}) < \epsilon \quad \forall \beta \geq 0,$$

then $\mathcal{I}_{\hat{\boldsymbol{\lambda}}}(y)$ is an $(\epsilon, \delta)$-RCPS and $\hat{\boldsymbol{\lambda}}$ minimizes the mean interval length over $\tilde{\boldsymbol{\Lambda}}$.

Theorem (Published paper). Let $\ell: \mathcal{X} \times \mathcal{X}' \to \mathbb{R}$, $\mathcal{X}' \subseteq 2^{\mathcal{X}}$, $\mathcal{X} \subset \mathbb{R}^d$ be an entrywise monotonically nonincreasing function and let $\{\mathcal{I}_{\boldsymbol{\lambda}}(y) \coloneqq [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_j]\}_{\boldsymbol{\lambda} \in \Lambda^d}$ be a family of set-valued predictors $\mathcal{I}: \mathcal{Y} \to \mathcal{X}'$, $\mathcal{Y} \subset \mathbb{R}^d$, indexed by $\boldsymbol{\lambda} \in \Lambda^d$, $\Lambda \subset \bar{\mathbb{R}}$, for some lower and upper bounds $\hat{l}_j < \hat{u}_j$ that may depend on $y$. For a fixed vector $\boldsymbol{\eta} \in \mathbb{R}^d$, $\boldsymbol{\eta} \geq 0$, if

$$\hat{\boldsymbol{\lambda}} \coloneqq \operatorname*{arg\,min}_{\boldsymbol{\lambda} \in \Lambda^d} \sum_{j \in [d]} \lambda_j \quad \text{s.t.} \quad \hat{R}^+(\boldsymbol{\lambda} + \beta \boldsymbol{\eta}) < \epsilon \quad \forall \beta \geq 0,$$

then $\mathcal{I}_{\hat{\boldsymbol{\lambda}}}(y)$ is an $(\epsilon, \delta)$-RCPS, and $\hat{\boldsymbol{\lambda}}$ minimizes the mean interval length.

Reason for the differences

The domain of the optimization problem in the published paper, i.e., $\boldsymbol{\lambda} \in \Lambda^d$, does not reflect the pointwise coverage guarantee provided by $\hat{R}^+$. That is, the guarantee $\mathbb{P}[R(\boldsymbol{\lambda}) \leq \hat{R}^+(\boldsymbol{\lambda})] \geq 1 - \delta$ holds for each fixed $\boldsymbol{\lambda} \in \Lambda^d$. To solve the optimization problem over all of $\Lambda^d$, $\hat{R}^+$ would need to provide uniform coverage, i.e., $\mathbb{P}[R(\boldsymbol{\lambda}) \leq \hat{R}^+(\boldsymbol{\lambda}),\ \forall \boldsymbol{\lambda} \in \Lambda^d] \geq 1 - \delta$.

Effect on results

We remark that the changes in Theorem 3.3 do not affect the $K$-RCPS procedure nor the results presented in the published version of the paper.

Appendix G Figures

This section contains supplementary figures.

Figure G.1: Empirical estimates of risk over 20 random draws of $\mathcal{S}_{\text{cal}}$. (a) CelebA dataset. (b) AbdomenCT-1K dataset. All combinations of notion of uncertainty and calibration procedure successfully control risk at level $\epsilon \coloneqq 0.10$ and $0.05$ for the CelebA and AbdomenCT-1K datasets, respectively, with probability at least $1 - \delta$, $\delta \coloneqq 0.10$.
