Title: Conformal Risk Control

URL Source: https://arxiv.org/html/2208.02814

arXiv:2208.02814v4 [stat.ME] 13 Jun 2025
Conformal Risk Control

Anastasios N. Angelopoulos1, Stephen Bates1, Adam Fisch2, Lihua Lei3, and Tal Schuster4

1University of California, Berkeley; 2Massachusetts Institute of Technology; 3Stanford University; 4Google Research

Abstract

We extend conformal prediction to control the expected value of any monotone loss function. The algorithm generalizes split conformal prediction together with its coverage guarantee. Like conformal prediction, the conformal risk control procedure is tight up to an $\mathcal{O}(1/n)$ factor. We also introduce extensions of the idea to distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics. Worked examples from computer vision and natural language processing demonstrate the usage of our algorithm to bound the false negative rate, graph distance, and token-level F1-score.

1 Introduction

We seek to endow a pre-trained machine learning model with guarantees on its performance so as to ensure its safe deployment. Suppose we have a base model $f$ that is a function mapping inputs $x \in \mathcal{X}$ to values in some other space, such as a probability distribution over classes. Our job is to design a procedure that takes the output of $f$ and post-processes it into quantities with desirable statistical guarantees.

Split conformal prediction [vovk2005algorithmic, papadopoulos2002inductive], which we will henceforth refer to simply as “conformal prediction”, has been useful in areas such as computer vision [angelopoulos2020sets] and natural language processing [fisch2020efficient] to provide such a guarantee. By measuring the model’s performance on a calibration dataset $\{(X_i, Y_i)\}_{i=1}^{n}$ of feature-response pairs, conformal prediction post-processes the model to construct prediction sets that bound the miscoverage,

$$\mathbb{P}\big(Y_{n+1} \notin \mathcal{C}(X_{n+1})\big) \le \alpha, \qquad (1)$$

where $(X_{n+1}, Y_{n+1})$ is a new test point, $\alpha$ is a user-specified error rate (e.g., 10%), and $\mathcal{C}$ is a function of the model and calibration data that outputs a prediction set. Note that $\mathcal{C}$ is formed using the first $n$ data points, and the probability in (1) is over the randomness in all $n+1$ data points (i.e., the draw of both the calibration and test points).

In this work, we extend conformal prediction to prediction tasks where the natural notion of error is not simply miscoverage. In particular, our main result is that a generalization of conformal prediction provides guarantees of the form

$$\mathbb{E}\big[\ell(\mathcal{C}(X_{n+1}), Y_{n+1})\big] \le \alpha, \qquad (2)$$

for any bounded loss function $\ell$ that shrinks as $\mathcal{C}(X_{n+1})$ grows. We call this a conformal risk control guarantee. Note that (2) recovers the conformal miscoverage guarantee in (1) when using the miscoverage loss, $\ell(\mathcal{C}(X_{n+1}), Y_{n+1}) = \mathbb{1}\{Y_{n+1} \notin \mathcal{C}(X_{n+1})\}$. However, our algorithm also extends conformal prediction to situations where other loss functions, such as the false negative rate (FNR) or F1-score, are more appropriate.

As an example, consider multilabel classification, where the $Y_i \subseteq \{1, \ldots, K\}$ are sets comprising a subset of $K$ classes. Given a trained multilabel classifier $f : \mathcal{X} \to [0,1]^K$, we want to output sets that include a large fraction of the true classes in $Y_i$. To that end, we post-process the model’s raw outputs into the set of classes with sufficiently high scores, $\mathcal{C}_\lambda(x) = \{k : f(x)_k \ge 1 - \lambda\}$. Note that as the threshold $\lambda$ grows, we include more classes in $\mathcal{C}_\lambda(x)$, i.e., it becomes more conservative. In this case, conformal risk control finds a threshold value $\hat{\lambda}$ that controls the fraction of missed classes, i.e., the expected value of $\ell(\mathcal{C}_{\hat{\lambda}}(X_{n+1}), Y_{n+1}) = 1 - |Y_{n+1} \cap \mathcal{C}_{\hat{\lambda}}(X_{n+1})| / |Y_{n+1}|$. Setting $\alpha = 0.1$ would ensure that our algorithm produces sets $\mathcal{C}_{\hat{\lambda}}(X_{n+1})$ containing $\ge 90\%$ of the true classes in $Y_{n+1}$ on average.
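To make this construction concrete, here is a minimal sketch with synthetic scores (the scores, labels, and function names are illustrative, not from the paper’s released code):

```python
import numpy as np

# C_lambda(x) keeps every class whose score is at least 1 - lambda.
def prediction_set(scores, lam):
    return np.flatnonzero(scores >= 1 - lam)

# Fraction of true classes missed by the set: 1 - |Y ∩ C_lambda(x)| / |Y|.
def false_negative_loss(scores, true_classes, lam):
    pred = set(prediction_set(scores, lam).tolist())
    return 1.0 - len(pred & set(true_classes)) / len(true_classes)

scores = np.array([0.95, 0.80, 0.30, 0.05])  # synthetic f(x) for K = 4 classes
Y = [0, 1, 2]                                # synthetic true label set

# The loss is non-increasing in lambda: larger sets miss fewer classes.
losses = [false_negative_loss(scores, Y, lam) for lam in (0.1, 0.5, 0.9)]
assert losses == sorted(losses, reverse=True)
```

The monotonicity checked in the last line is exactly the property the conformal risk control algorithm relies on.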

1.1 Algorithm and preview of main results

Formally, we will consider post-processing the predictions of the model $f$ to create a function $\mathcal{C}_\lambda(\cdot)$. The function has a parameter $\lambda$ that encodes its level of conservativeness: larger $\lambda$ values yield more conservative outputs (e.g., larger prediction sets). To measure the quality of the output of $\mathcal{C}_\lambda$, we consider a loss function $\ell(\mathcal{C}_\lambda(x), y) \in (-\infty, B]$ for some $B < \infty$. We require the loss function to be non-increasing as a function of $\lambda$. Our goal is to choose $\hat{\lambda}$ based on the observed data $\{(X_i, Y_i)\}_{i=1}^{n}$ so that risk control as in (2) holds.

We now rewrite this same task in a more notationally convenient and abstract form. Consider an exchangeable collection of non-increasing, random functions $L_i : \Lambda \to (-\infty, B]$, $i = 1, \ldots, n+1$. Throughout the paper, we assume $\lambda_{\max} \triangleq \sup \Lambda \in \Lambda$. We seek to use the first $n$ functions to choose a value of the parameter, $\hat{\lambda}$, in such a way that the risk on the unseen function is controlled:

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \le \alpha. \qquad (3)$$

We are primarily motivated by the case where $L_i(\lambda) = \ell(\mathcal{C}_\lambda(X_i), Y_i)$, in which case the guarantee in (3) coincides with risk control as in (2).

Now we describe the algorithm. Let $\hat{R}_n(\lambda) = (L_1(\lambda) + \ldots + L_n(\lambda))/n$. Given any desired risk level upper bound $\alpha \in (-\infty, B)$, define

$$\hat{\lambda} = \inf\left\{\lambda : \frac{n}{n+1}\hat{R}_n(\lambda) + \frac{B}{n+1} \le \alpha\right\}. \qquad (4)$$

When the set is empty, we define $\hat{\lambda} = \lambda_{\max}$. Our proposed conformal risk control algorithm is to deploy $\hat{\lambda}$ on the forthcoming test point. Our main result is that this algorithm satisfies (3). When the $L_i$ are i.i.d. from a continuous distribution, the algorithm satisfies a lower bound saying it is not too conservative,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \ge \alpha - \frac{2B}{n+1}. \qquad (5)$$

We show the reduction from conformal risk control to conformal prediction in Section 2.3. Furthermore, if the risk is non-monotone, then this algorithm does not control the risk; we discuss this in Section 2.4. Finally, we provide both practical examples using real-world data and several theoretical extensions of our procedure in Sections 3 and 4, respectively.
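In practice, the choice of $\hat{\lambda}$ in (4) reduces to a one-pass scan once the losses are tabulated. A minimal sketch, under the assumption that each loss is evaluated on a finite increasing grid of candidate $\lambda$ values (the grid is our discretization; the paper’s $\Lambda$ may be continuous, and all names are ours):

```python
import numpy as np

def conformal_risk_control(loss_matrix, lambdas, alpha, B=1.0):
    """Pick lambda-hat as in equation (4).

    loss_matrix: (n, m) array with loss_matrix[i, j] = L_i(lambdas[j]);
                 each row must be non-increasing in j.
    lambdas:     (m,) increasing grid of candidate lambda values.
    """
    n = loss_matrix.shape[0]
    risk = loss_matrix.mean(axis=0)               # R-hat_n(lambda) on the grid
    inflated = n / (n + 1) * risk + B / (n + 1)   # left-hand side of (4)
    ok = np.flatnonzero(inflated <= alpha)
    # Empty set: fall back to lambda_max, the largest grid value.
    return lambdas[ok[0]] if ok.size else lambdas[-1]
```

Because each row is non-increasing, the inflated empirical risk is non-increasing as well, so the first grid index satisfying the constraint is the infimum over the grid.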

1.2 Related work

Conformal prediction was developed by Vladimir Vovk and collaborators beginning in the late 1990s [vovk1999machine, vovk2005algorithmic], and has recently become a popular uncertainty estimation tool in the machine learning community due to its favorable model-agnostic, distribution-free, finite-sample guarantees. See [angelopoulos-gentle] for a modern introduction to the area or [shafer2008tutorial] for a more classical alternative. As previously discussed, in this paper we primarily build on split conformal prediction [papadopoulos2002inductive]; statistical properties of this algorithm, including the coverage upper bound, were studied in [lei2018distribution]. Recently there have been many extensions of the conformal algorithm, mainly targeting deviations from exchangeability [tibshirani2019conformal, gibbs2021adaptive, barber2022conformal, fannjiang2022conformal] and improved conditional coverage [barber2019limits, romano2019conformalized, guan2020conformal, romano2020classification, angelopoulos2020sets]. Most relevant to us is recent work on risk control in high probability [vovk2012conditional, bates2021distribution, angelopoulos2021learn] and its applications [Park2020PAC, fisch2022conformal, schuster2021consistent, schuster2022confident, sankaranarayanan2022semantic, angelopoulos2022image, angelopoulos2022recommendation, inter alia]. Though these works closely relate to ours in terms of motivation, the algorithm presented herein differs greatly: it has a guarantee in expectation, and neither the algorithm nor its analysis share much technical similarity with these previous works.

To elaborate on the difference between our work and previous literature, first consider conformal prediction. The purpose of conformal prediction is to provide coverage guarantees of the form in (1). The guarantee available through conformal risk control, (3), strictly subsumes that of conformal prediction; it is generally impossible to recast risk control as coverage control. As a second question, one might ask whether (3) can be achieved through standard statistical machinery, such as uniform concentration inequalities. Though it is possible to integrate a uniform concentration inequality to get a bound in expectation, this strategy tends to be excessively loose both in theory and in practice (see, e.g., the bound of [anthony1993result]). The technique herein avoids these complications; it is simpler than concentration-based approaches, practical to implement, and tight up to a factor of $\mathcal{O}(1/n)$, which is comparatively faster than concentration would allow. Finally, herein we target distribution-free finite-sample control of (3), but as a side note it is also worth pointing the reader to the rich literature on functional central limit theorems [davidson2000functional], which are another way of estimating risk functions.

2 Theory

In this section, we establish the core theoretical properties of conformal risk control. All proofs, unless otherwise specified, are deferred to Appendix B.

2.1 Risk control

We first show that the proposed algorithm leads to risk control when the loss is monotone.

Theorem 2.1.

Assume that $L_i(\lambda)$ is non-increasing in $\lambda$, right-continuous, and

$$L_i(\lambda_{\max}) \le \alpha, \qquad \sup_\lambda L_i(\lambda) \le B < \infty \ \text{ almost surely}. \qquad (6)$$

Then

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \le \alpha. \qquad (7)$$

Proof.

Let ๐‘… ^ ๐‘› + 1 โข ( ๐œ† )

( ๐ฟ 1 โข ( ๐œ† ) + โ€ฆ + ๐ฟ ๐‘› + 1 โข ( ๐œ† ) ) / ( ๐‘› + 1 ) and

๐œ† ^ โ€ฒ

inf { ๐œ† โˆˆ ฮ› : ๐‘… ^ ๐‘› + 1 โข ( ๐œ† ) โ‰ค ๐›ผ } .

(8)

Since inf ๐œ† ๐ฟ ๐‘– โข ( ๐œ† )

๐ฟ ๐‘– โข ( ๐œ† max ) โ‰ค ๐›ผ , ๐œ† ^ โ€ฒ is well-defined almost surely. Since ๐ฟ ๐‘› + 1 โข ( ๐œ† ) โ‰ค ๐ต , we know ๐‘… ^ ๐‘› + 1 โข ( ๐œ† )

๐‘› ๐‘› + 1 โข ๐‘… ^ ๐‘› โข ( ๐œ† ) + ๐ฟ ๐‘› + 1 โข ( ๐œ† ) ๐‘› + 1 โ‰ค ๐‘› ๐‘› + 1 โข ๐‘… ^ ๐‘› โข ( ๐œ† ) + ๐ต ๐‘› + 1 . Thus,

๐‘› ๐‘› + 1 โข ๐‘… ^ ๐‘› โข ( ๐œ† ) + ๐ต ๐‘› + 1 โ‰ค ๐›ผ โŸน ๐‘… ^ ๐‘› + 1 โข ( ๐œ† ) โ‰ค ๐›ผ .

(9)

This implies ๐œ† ^ โ€ฒ โ‰ค ๐œ† ^ when the LHS holds for some ๐œ† โˆˆ ฮ› . When the LHS is above ๐›ผ for all ๐œ† โˆˆ ฮ› , by definition, ๐œ† ^

๐œ† max โ‰ฅ ๐œ† ^ โ€ฒ . Thus, ๐œ† ^ โ€ฒ โ‰ค ๐œ† ^ almost surely. Since ๐ฟ ๐‘– โข ( ๐œ† ) is non-increasing in ๐œ† ,

๐”ผ โข [ ๐ฟ ๐‘› + 1 โข ( ๐œ† ^ ) ] โ‰ค ๐”ผ โข [ ๐ฟ ๐‘› + 1 โข ( ๐œ† ^ โ€ฒ ) ] .

(10)

Let ๐ธ be the multiset of loss functions { ๐ฟ 1 , โ€ฆ , ๐ฟ ๐‘› + 1 } . Then ๐œ† ^ โ€ฒ is a function of ๐ธ , or, equivalently, ๐œ† ^ โ€ฒ is a constant conditional on ๐ธ . Additionally, ๐ฟ ๐‘› + 1 ( ๐œ† ) | ๐ธ โˆผ Uniform ( { ๐ฟ 1 , โ€ฆ , ๐ฟ ๐‘› + 1 } ) by exchangeability. These facts combined with the right-continuity of ๐ฟ ๐‘– imply

๐”ผ โข [ ๐ฟ ๐‘› + 1 โข ( ๐œ† ^ โ€ฒ ) โˆฃ ๐ธ ]

1 ๐‘› + 1 โข โˆ‘ ๐‘–

1 ๐‘› + 1 ๐ฟ ๐‘– โข ( ๐œ† ^ โ€ฒ ) โ‰ค ๐›ผ .

(11)

The proof is completed by the law of total expectation and (10). โˆŽ

2.2 A risk lower bound

Next we show that the conformal risk control procedure is tight up to a factor $2B/(n+1)$. Like the standard conformal coverage upper bound, the proof will rely on a form of continuity that prohibits large jumps in the risk function. Towards that end, we define the jump function below, which quantifies the size of the discontinuity in a right-continuous input function $l$ at the point $\lambda$:

$$J(l, \lambda) = \lim_{\epsilon \to 0^+} l(\lambda - \epsilon) - l(\lambda). \qquad (12)$$

The jump function measures the size of a discontinuity at $l(\lambda)$. When there is a discontinuity and $l$ is non-increasing, $J(l, \lambda) > 0$. When there is no discontinuity, the jump function is zero. The next theorem will assume that the probability that $L_i$ has a discontinuity at any pre-specified $\lambda$ is zero, i.e., $\mathbb{P}(J(L_i, \lambda) > 0) = 0$. Under this assumption the conformal risk control procedure is not too conservative.

Theorem 2.2.

In the setting of Theorem 2.1, further assume that the $L_i$ are i.i.d., $L_i \ge 0$, and, for any $\lambda$, $\mathbb{P}(J(L_i, \lambda) > 0) = 0$. Then

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \ge \alpha - \frac{2B}{n+1}.$$

2.3 Conformal prediction reduces to risk control

Conformal prediction can be thought of as controlling the expectation of an indicator loss function. Recall that the risk upper bound (2) specializes to the conformal coverage guarantee in (1) when the loss function is the indicator of a miscoverage event. The conformal risk control procedure specializes to conformal prediction under this loss function as well. However, the risk lower bound in Theorem 2.2 has a slightly worse constant than the usual conformal guarantee. We now describe these correspondences.

First, we show the equivalence of the algorithms. In conformal prediction, we have conformal scores $s(X_i, Y_i)$ for some score function $s : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. Based on this score function, we create prediction sets for the test point $X_{n+1}$ as

$$\mathcal{C}_{\hat{\lambda}}(X_{n+1}) = \{y : s(X_{n+1}, y) \le \hat{\lambda}\},$$

where $\hat{\lambda}$ is the conformal quantile, a parameter that is set based on the calibration data. In particular, conformal prediction chooses $\hat{\lambda}$ to be the $\lceil (n+1)(1-\alpha) \rceil / n$ sample quantile of $\{s(X_i, Y_i)\}_{i=1}^{n}$. To formulate this in the language of risk control, we consider a miscoverage loss $L_i^{\mathrm{Cvg}}(\lambda) = \mathbb{1}\{Y_i \notin \mathcal{C}_\lambda(X_i)\} = \mathbb{1}\{s(X_i, Y_i) > \lambda\}$. Direct calculation of $\hat{\lambda}$ from (4) then shows the equivalence of the proposed procedure to conformal prediction:

$$\hat{\lambda} = \inf\left\{\lambda : \frac{1}{n+1}\sum_{i=1}^{n} \mathbb{1}\{s(X_i, Y_i) > \lambda\} + \frac{1}{n+1} \le \alpha\right\} = \underbrace{\inf\left\{\lambda : \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{s(X_i, Y_i) \le \lambda\} \ge \frac{\lceil (n+1)(1-\alpha) \rceil}{n}\right\}}_{\text{conformal prediction algorithm}}. \qquad (13)$$

Next, we discuss how the risk lower bound relates to its conformal prediction equivalent. In the setting of conformal prediction, [lei2018distribution] proves that $\mathbb{P}(Y_{n+1} \notin \mathcal{C}_{\hat{\lambda}}(X_{n+1})) \ge \alpha - 1/(n+1)$ when the conformal score function follows a continuous distribution. Theorem 2.2 recovers this guarantee with a slightly worse constant: $\mathbb{P}(Y_{n+1} \notin \mathcal{C}_{\hat{\lambda}}(X_{n+1})) \ge \alpha - 2/(n+1)$. First, note that our assumption in Theorem 2.2 about the distribution of discontinuities specializes to the continuity of the score function when the miscoverage loss is used:

$$\mathbb{P}\big(J(L_i^{\mathrm{Cvg}}, \lambda) > 0\big) = 0 \iff \mathbb{P}\big(s(X_i, Y_i) = \lambda\big) = 0. \qquad (14)$$

However, the bound for the conformal case is better than the bound for the general case in Theorem 2.2 by a factor of two. We do not know whether this factor of 2 is improvable. However, this factor is of little practical importance, as the difference between $1/(n+1)$ and $2/(n+1)$ is small even for moderate values of $n$.
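The equivalence in (13) is easy to verify numerically. The sketch below, on synthetic continuous scores (our toy data), computes $\hat{\lambda}$ once via the inflated-risk rule (4) with the miscoverage loss and once as the classical conformal quantile; with almost-surely distinct scores, the two coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=101)          # synthetic s(X_i, Y_i) on the calibration set
n, alpha = len(scores), 0.1

# Risk-control form: smallest lambda with n/(n+1) * R_n(lambda) + 1/(n+1) <= alpha.
# For the indicator loss, candidate lambdas may be restricted to the observed scores.
grid = np.sort(scores)
risk = np.array([(scores > lam).mean() for lam in grid])
lam_risk = grid[np.argmax(n / (n + 1) * risk + 1 / (n + 1) <= alpha)]

# Classical form: the ceil((n+1)(1-alpha))/n sample quantile of the scores.
k = int(np.ceil((n + 1) * (1 - alpha)))
lam_conformal = grid[k - 1]             # k-th smallest score

assert lam_risk == lam_conformal
```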

2.4 Controlling general loss functions

We next show that the conformal risk control algorithm does not control the risk if the $L_i$ are not assumed to be monotone. In particular, (3) does not hold. We show this by example.

Proposition 2.1.

For any $\epsilon > 0$, there exists a non-monotone loss function such that

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \ge B - \epsilon. \qquad (15)$$

Notice that for any desired level $\alpha$, the expectation in (3) can be arbitrarily close to $B$. Since the function values here are in $[0, B]$, this means that even for bounded random variables, risk control can be violated by an arbitrary amount, unless further assumptions are placed on the $L_i$. However, the algorithms developed may still be appropriate for near-monotone loss functions. Simply ‘monotonizing’ all loss functions $L_i$ and running conformal risk control will guarantee (3), but this strategy will only be powerful if the loss is near-monotone. For concreteness, we describe this procedure below as a corollary of Theorem 2.1.

Corollary 1.

Let $L_i(\lambda)$ be any (possibly non-monotone) function of $\lambda$ satisfying (6). Take

$$\tilde{L}_i(\lambda) = \sup_{\lambda' \ge \lambda} L_i(\lambda'), \qquad \tilde{R}_n(\lambda) = \frac{1}{n}\sum_{i=1}^{n} \tilde{L}_i(\lambda), \quad \text{and} \quad \tilde{\lambda} = \inf\left\{\lambda : \frac{n}{n+1}\tilde{R}_n(\lambda) + \frac{B}{n+1} \le \alpha\right\}. \qquad (16)$$

Then,

$$\mathbb{E}\big[L_{n+1}(\tilde{\lambda})\big] \le \alpha. \qquad (17)$$

If the loss function is already monotone, then $\tilde{\lambda}$ reduces to $\hat{\lambda}$. We propose a further algorithm for picking $\lambda$ in Appendix A that provides an asymptotic risk-control guarantee for non-monotone loss functions. However, this algorithm again is only powerful when the risk $\mathbb{E}[L_{n+1}(\lambda)]$ is near-monotone, and it reduces to the standard conformal risk control algorithm when the loss is monotone.
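On a finite grid of $\lambda$ values, the monotonization $\tilde{L}_i(\lambda) = \sup_{\lambda' \ge \lambda} L_i(\lambda')$ in (16) is simply a running maximum from the right. A minimal sketch (the grid discretization is our implementation choice, not part of the corollary):

```python
import numpy as np

def monotonize(loss_matrix):
    """Replace each row L_i with L~_i(lambda) = sup over lambda' >= lambda of L_i(lambda').

    loss_matrix: (n, m) array of losses on an increasing lambda grid.
    Returns a row-wise non-increasing matrix that dominates the input.
    """
    # Running maximum from the right along the lambda axis.
    return np.maximum.accumulate(loss_matrix[:, ::-1], axis=1)[:, ::-1]

L = np.array([[0.5, 0.2, 0.4, 0.1],
              [0.9, 0.3, 0.3, 0.0]])
L_tilde = monotonize(L)
# Each monotonized row is non-increasing and dominates the original row.
assert (np.diff(L_tilde, axis=1) <= 0).all() and (L_tilde >= L).all()
```

The monotonized matrix can then be fed to the standard procedure from (4); by construction it leaves already-monotone rows unchanged.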

3 Examples

To demonstrate the flexibility and empirical effectiveness of the proposed algorithm, we apply it to four tasks across computer vision and natural language processing. All four loss functions are non-binary, monotone losses bounded by 1. They are commonly used within their respective application domains. Our results validate that the procedure bounds the risk as desired and gives useful outputs to the end-user. We note that the choices of $\mathcal{C}_\lambda$ used herein are only for the purposes of illustration; any nested family of sets will work. For each example use case, at a representative $\alpha$ (details provided per task), we provide both qualitative results and quantitative histograms of the risk and set sizes over 1000 random data splits that demonstrate valid risk control (i.e., with mean $\le \alpha$). Code to reproduce our examples is available at https://github.com/aangelopoulos/conformal-risk.

3.1 FNR control in tumor segmentation

Figure 1: FNR control in tumor segmentation. The top figure shows examples of our procedure with correct pixels in white, false positives in blue, and false negatives in red. The bottom plots report FNR and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.

In the tumor segmentation setting, our input is a $d \times d$ image and our label is a set of pixels $Y_i \in \wp(\{(1,1), (1,2), \ldots, (d,d)\})$, where $\wp$ denotes the power set. We build on an image segmentation model $f : \mathcal{X} \to [0,1]^{d \times d}$ outputting a probability for each pixel, and we measure loss as the fraction of false negatives,

$$L_i^{\mathrm{FNR}}(\lambda) = 1 - \frac{|Y_i \cap \mathcal{C}_\lambda(X_i)|}{|Y_i|}, \quad \text{where } \mathcal{C}_\lambda(X_i) = \{y : f(X_i)_y \ge 1 - \lambda\}. \qquad (18)$$

The expected value of $L_i^{\mathrm{FNR}}$ is the FNR. Since $L_i^{\mathrm{FNR}}$ is monotone, so is the FNR. Thus, we use the technique in Section 2.1 to pick $\hat{\lambda}$ by (4) to control the FNR on a new point, resulting in the following guarantee:

$$\mathbb{E}\big[L_{n+1}^{\mathrm{FNR}}(\hat{\lambda})\big] \le \alpha. \qquad (19)$$

To evaluate the proposed procedure, we pool data from several online open-source gut polyp segmentation datasets: Kvasir, Hyper-Kvasir, CVC-ColonDB, CVC-ClinicDB, and ETIS-Larib. We choose a PraNet [fan2020pranet] as our base model $f$, use $n = 1000$ calibration points, and evaluate risk control with the 781 remaining validation data points. We report results with $\alpha = 0.1$ in Figure 1. The mean and standard deviation of the risk over 1000 trials are 0.0987 and 0.0114, respectively.

3.2 FNR control in multilabel classification

Figure 2: FNR control on MS COCO. The top figure shows examples of our procedure with correct classes in black, false positives in blue, and false negatives in red. The bottom plots report FNR and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.

In the multilabel classification setting, our input $X_i$ is an image and our label is a set of classes $Y_i \subset \{1, \ldots, K\}$ for some number of classes $K$. Using a multilabel classification model $f : \mathcal{X} \to [0,1]^K$, we form prediction sets and calculate the false negative proportion exactly as in (18). By Theorem 2.1, picking $\hat{\lambda}$ as in (4) again yields the FNR-control guarantee in (19).

We use the Microsoft Common Objects in Context (MS COCO) computer vision dataset [lin2014microsoft], a large-scale 80-class benchmark commonly used in computer vision, to evaluate the proposed procedure. We choose a TResNet [ridnik2020tresnet] as our base model $f$, use $n = 4000$ calibration points, and evaluate risk control with 1000 validation data points. We report results with $\alpha = 0.1$ in Figure 2. The mean and standard deviation of the risk over 1000 trials are 0.0996 and 0.0052, respectively.

3.3 Control of graph distance in hierarchical image classification

Figure 3: Control of graph distance on hierarchical ImageNet. The top figure shows examples of our procedure with correct classes in black, false positives in blue, and false negatives in red. The bottom plots report our minimum hierarchical distance loss and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.

In the $K$-class hierarchical classification setting, our input $X_i$ is an image and our label is a leaf node $Y_i \in \{1, \ldots, K\}$ on a tree with nodes $\mathcal{V}$ and edges $\mathcal{E}$. Using a single-class classification model $f : \mathcal{X} \to \Delta^K$, we calibrate a loss measured in graph distance between the interior node we select and the closest ancestor of the true class. For any $x \in \mathcal{X}$, let $\hat{y}(x) = \arg\max_k f(x)_k$ be the class with the highest estimated probability. Further, let $d : \mathcal{V} \times \mathcal{V} \to \mathbb{Z}$ be the function that returns the length of the shortest path between two nodes, let $\mathcal{A} : \mathcal{V} \to 2^{\mathcal{V}}$ be the function that returns the ancestors of its argument, and let $\mathcal{P} : \mathcal{V} \to 2^{\mathcal{V}}$ be the function that returns the set of leaf nodes that are descendants of its argument. We also let $g(v, x) = \sum_{k \in \mathcal{P}(v)} f(x)_k$ be the sum of scores of leaves descended from $v$. Further, define a hierarchical distance

$$d_H(v, u) = \inf_{a \in \mathcal{A}(v)} \{d(a, u)\}.$$

For a set of nodes $\mathcal{C}_\lambda \in 2^{\mathcal{V}}$, we then define the set-valued loss

$$L_i^{\mathrm{Graph}}(\lambda) = \inf_{s \in \mathcal{C}_\lambda(X_i)} \{d_H(Y_i, s)\} / D, \quad \text{where } \mathcal{C}_\lambda(x) = \bigcap_{\{a \in \mathcal{A}(\hat{y}(x)) : g(a, x) \ge -\lambda\}} \mathcal{P}(a). \qquad (20)$$

This loss returns zero if $Y_i$ is a child of any element in $\mathcal{C}_\lambda$, and otherwise returns the minimum distance between any element of $\mathcal{C}_\lambda$ and any ancestor of $Y_i$, scaled by the depth $D$. Thus, it is a monotone loss function and can be controlled by choosing $\hat{\lambda}$ as in (4) to achieve the guarantee

$$\mathbb{E}\big[L_{n+1}^{\mathrm{Graph}}(\hat{\lambda})\big] \le \alpha. \qquad (21)$$

For this experiment, we use the ImageNet dataset [deng2009imagenet], which comes with an existing label hierarchy, WordNet, of maximum depth $D = 14$. We choose a ResNet152 [he2016deep] as our base model $f$, use $n = 30000$ calibration points, and evaluate risk control with the remaining 20000. We report results with $\alpha = 0.05$ in Figure 3. The mean and standard deviation of the risk over 1000 trials are 0.0499 and 0.0011, respectively.

3.4 F1-score control in open-domain question answering

Figure 4: F1-score control on Natural Questions. The top figure shows examples of our procedure with fully correct answers in green, partially correct answers in blue, and false positives in gray. Note that due to the nature of the evaluation, answers that are technically correct may still be downgraded if they do not match the reference. We treat this as part of the randomness in the task. The bottom plots report the F1 risk and average set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.

In the open-domain question answering setting, our input $X_i$ is a question and our label $Y_i$ is a set of (possibly non-unique) correct answers. For example, the input

$X_{n+1}$ = “Where was Barack Obama born?”

could have the answer set

$Y_{n+1}$ = { “Hawaii”, “Honolulu, Hawaii”, “Kapo’olani Medical Center” }.

Formally, here we treat all questions and answers as being composed of sequences (up to size $m$) of tokens in a vocabulary $\mathcal{V}$, i.e., assuming $k$ valid answers, we have $X_i \in \mathcal{Z}$ and $Y_i \in \mathcal{Z}^k$, where $\mathcal{Z} := \mathcal{V}^m$. Using an open-domain question answering model that individually scores candidate output answers, $f : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, we calibrate the best token-based F1-score of the prediction set, taken over all pairs of predictions and answers:

$$L_i^{\mathrm{F1}}(\lambda) = 1 - \max\{\mathrm{F1}(a, b) : b \in \mathcal{C}_\lambda(X_i),\ a \in Y_i\}, \quad \text{where } \mathcal{C}_\lambda(X_i) = \{y \in \mathcal{V}^m : f(X_i, y) \ge -\lambda\}. \qquad (22)$$

We define the F1-score following popular QA evaluation metrics [squad], where we treat predictions and ground-truth answers as bags of tokens and compute the harmonic mean of their precision and recall (while ignoring punctuation and the articles { “a”, “an”, “the” }). Since $L_i^{\mathrm{F1}}$, as defined in this way, is monotone and upper bounded by 1, it can be controlled by choosing $\hat{\lambda}$ as in Section 2.1 to achieve the following guarantee:

$$\mathbb{E}\big[L_{n+1}^{\mathrm{F1}}(\hat{\lambda})\big] \le \alpha. \qquad (23)$$
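The token-level F1 entering the loss in (22) can be sketched as follows. This is a simplified version of the SQuAD-style metric: whitespace tokenization and skipping punctuation stripping are our simplifications, and the names are illustrative.

```python
from collections import Counter

ARTICLES = {"a", "an", "the"}

def token_f1(pred, truth):
    """Token-level F1: harmonic mean of bag-of-token precision and recall."""
    p = [t for t in pred.lower().split() if t not in ARTICLES]
    t = [t for t in truth.lower().split() if t not in ARTICLES]
    overlap = sum((Counter(p) & Counter(t)).values())  # multiset intersection size
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(t)
    return 2 * precision * recall / (precision + recall)

def f1_loss(candidates, answers):
    """L^F1 from (22): one minus the best F1 over candidate/answer pairs."""
    return 1.0 - max(token_f1(c, a) for c in candidates for a in answers)

assert f1_loss(["honolulu hawaii"], ["hawaii"]) == 1.0 - 2 / 3
```

Enlarging the candidate set can only raise the inner maximum, which is what makes the loss monotone in the set.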

We use the Natural Questions (NQ) dataset [nq], a popular open-domain question answering baseline, to evaluate our method. We use the splits distributed as part of the Dense Passage Retrieval (DPR) package [dpr]. Our base model is the DPR Retriever-Reader model [dpr], which retrieves passages from Wikipedia that might contain the answer to the given query, and then uses a reader model to extract text sub-spans from the retrieved passages that serve as candidate answers. Instead of enumerating all possible answers to a given question (which is intractable), we retrieve the top several hundred candidate answers, extracted from the top 100 passages (which is sufficient to control all risks of interest). We use $n = 2500$ calibration points, and evaluate risk control with the remaining 1110. We report results with $\alpha = 0.3$ (chosen empirically as the lowest F1-score which typically results in nearly correct answers) in Figure 4. The mean and standard deviation of the risk over 1000 trials are 0.2996 and 0.0150, respectively.

4 Extensions

In this section, we discuss several theoretical extensions of our procedure.

4.1 Risk control under distributional shift

Suppose the researcher wants to control the risk under a distribution shift. Then the goal in (3) can be redefined as

$$\mathbb{E}_{(X_1, Y_1), \ldots, (X_n, Y_n) \sim P_{\mathrm{train}},\ (X_{n+1}, Y_{n+1}) \sim P_{\mathrm{test}}}\big[L_{n+1}(\hat{\lambda})\big] \le \alpha, \qquad (24)$$

where $P_{\mathrm{test}}$ denotes the test distribution, which differs from the training distribution $P_{\mathrm{train}}$ from which $(X_i, Y_i)_{i=1}^{n}$ are sampled. Assuming that $P_{\mathrm{test}}$ is absolutely continuous with respect to $P_{\mathrm{train}}$, the weighted objective (24) can be rewritten as

$$\mathbb{E}_{(X_1, Y_1), \ldots, (X_{n+1}, Y_{n+1}) \sim P_{\mathrm{train}}}\big[w(X_{n+1}, Y_{n+1})\, L_{n+1}(\hat{\lambda})\big] \le \alpha, \quad \text{where } w(x, y) = \frac{dP_{\mathrm{test}}(x, y)}{dP_{\mathrm{train}}(x, y)}. \qquad (25)$$

When $w$ is known and bounded, we can apply our procedure to the loss function $\tilde{L}_{n+1}(\lambda) = w(X_{n+1}, Y_{n+1}) L_{n+1}(\lambda)$, which is non-increasing, bounded, and right-continuous in $\lambda$ whenever $L_{n+1}$ is. Thus, Theorem 2.1 guarantees that the resulting $\hat{\lambda}$ satisfies (25).

In the setting of transductive learning, $X_{n+1}$ is available to the user. If the conditional distribution of $Y$ given $X$ remains the same in the training and test domains, the distributional shift reduces to a covariate shift and

$$w(X_{n+1}, Y_{n+1}) = w(X_{n+1}) \triangleq \frac{dP_{\mathrm{test}}(X_{n+1})}{dP_{\mathrm{train}}(X_{n+1})}.$$

In this case, we can achieve risk control even when $w$ is unbounded. In particular, assuming $L_i \in [0, B]$, for any potential value $x$ of the covariate, we define

$$\hat{\lambda}(x) = \inf\left\{\lambda : \frac{\sum_{i=1}^{n} w(X_i) L_i(\lambda) + w(x) B}{\sum_{i=1}^{n} w(X_i) + w(x)} \le \alpha\right\}. \qquad (26)$$

When no such $\lambda$ exists, we simply set $\hat{\lambda}(x) = \max \Lambda$. It is not hard to see that $\hat{\lambda}(x) \equiv \hat{\lambda}$ in the absence of covariate shift. We can prove the following result.

Proposition 4.1.

In the setting of Theorem 2.1,

$$\mathbb{E}_{(X_1, Y_1), \ldots, (X_n, Y_n) \sim P_{\mathrm{train}},\ (X_{n+1}, Y_{n+1}) \sim P_{\mathrm{test}}}\big[L_{n+1}(\hat{\lambda}(X_{n+1}))\big] \le \alpha. \qquad (27)$$

It is easy to show that the weighted conformal procedure [tibshirani2019conformal] is a special case with $L_i(\lambda) = \mathbb{1}\{Y_i \notin \mathcal{C}_\lambda(X_i)\}$, where $\mathcal{C}_\lambda(X_i)$ is the prediction set that thresholds the conformity score at $\lambda$. Thus, Proposition 4.1 generalizes [tibshirani2019conformal] to any monotone risk. When the covariate shift $w(x)$ is unknown but unlabeled data from the test domain are available, it can be estimated, up to a multiplicative factor that does not affect $\hat{\lambda}(x)$, by any probabilistic classification algorithm; see [lei2020conformal] and [candes2023conformalized] in the context of missing and censored data, respectively. We leave the full investigation of weighted conformal risk control with an estimated covariate shift to future research.

Total variation bound

Finally, for arbitrary distribution shifts, we give a total variation bound describing how standard (unweighted) conformal risk control degrades. The bound is analogous to that of [barber2022conformal] for independent but non-identically distributed data (see their Section 4.1), though the proof is different. Here we use the notation $Z_i = (X_i, Y_i)$, and $\hat{\lambda}(Z_1,\dots,Z_n)$ to refer to the value chosen in (4).

Proposition 4.2.

Let $Z = (Z_1,\dots,Z_{n+1})$ be a sequence of random variables. Then, under the conditions of Theorem 2.1,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \le \alpha + B\sum_{i=1}^{n}\mathrm{TV}(Z_i, Z_{n+1}).$$

If further the assumptions of Theorem 2.2 hold,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \ge \alpha - B\left(\frac{2}{n+1} + \sum_{i=1}^{n}\mathrm{TV}(Z_i, Z_{n+1})\right).$$

4.2 Quantile risk control

[snell2022quantile] generalizes [bates2021distribution] to control the quantile of a monotone loss function conditional on $(X_i,Y_i)_{i=1}^{n}$ with probability $1-\delta$ over the calibration dataset, for any user-specified tolerance parameter $\delta$. In some applications, it may suffice to control the unconditional quantile of the loss function, which relieves the user of the burden of choosing the tolerance parameter $\delta$.

For any random variable $X$, let

$$\mathrm{Quantile}_\beta(X) = \inf\{x : \mathbb{P}(X \le x) \ge \beta\}.$$

Analogous to (3), we want to find $\hat{\lambda}_\beta$ based on $(X_i,Y_i)_{i=1}^{n}$ such that

$$\mathrm{Quantile}_\beta\big(L_{n+1}(\hat{\lambda}_\beta)\big) \le \alpha. \tag{28}$$

By definition,

$$\mathrm{Quantile}_\beta\big(L_{n+1}(\hat{\lambda}_\beta)\big) \le \alpha \iff \mathbb{E}\big[\mathbb{1}\{L_{n+1}(\hat{\lambda}_\beta) > \alpha\}\big] \le 1-\beta.$$

As a consequence, quantile risk control is equivalent to expected risk control (3) with the loss function $\tilde{L}_i(\lambda) = \mathbb{1}\{L_i(\lambda) > \alpha\}$. Let

$$\hat{\lambda}_\beta = \inf\left\{\lambda\in\Lambda : \frac{1}{n+1}\sum_{i=1}^{n}\mathbb{1}\{L_i(\lambda) > \alpha\} + \frac{1}{n+1} \le 1-\beta\right\}.$$

Proposition 4.3.

In the setting of Theorem 2.1, (28) is achieved.

[snell2022quantile] considers the high-probability control of a wider class of quantile-based risks which include the conditional value-at-risk (CVaR). It is unclear whether those more general risks can be controlled unconditionally. We leave this open problem for future research.
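Since quantile control reduces to expected risk control with the indicator losses $\tilde{L}_i(\lambda) = \mathbb{1}\{L_i(\lambda) > \alpha\}$, the calibration of $\hat{\lambda}_\beta$ can be sketched in a few lines. The exponential scores and hinge-style loss below are illustrative only:

```python
import numpy as np

def lambda_hat_quantile(losses, lambdas, alpha, beta):
    """Smallest grid lambda with (1/(n+1)) sum_i 1{L_i(lambda) > alpha}
    + 1/(n+1) <= 1 - beta; returns max(Lambda) if none qualifies."""
    n = losses.shape[0]
    exceed = (losses > alpha).mean(axis=0) * n / (n + 1) + 1.0 / (n + 1)
    ok = np.where(exceed <= 1 - beta)[0]
    return lambdas[ok[0]] if len(ok) else lambdas[-1]

# Hypothetical monotone losses L_i(lambda) = max(s_i - lambda, 0).
rng = np.random.default_rng(1)
n = 500
lambdas = np.linspace(0, 2, 101)
s = rng.exponential(size=n)
losses = np.maximum(s[:, None] - lambdas[None, :], 0.0)
lam = lambda_hat_quantile(losses, lambdas, alpha=0.25, beta=0.8)
print(lam)
```

The resulting $\hat{\lambda}_\beta$ controls the $\beta$-quantile of the test loss at level $\alpha$, without any tolerance parameter $\delta$.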

4.3 Controlling multiple risks

Let $L_i(\lambda;\gamma)$ be a family of loss functions indexed by $\gamma\in\Gamma$ for some index set $\Gamma$ that may have infinitely many elements. A researcher may want to control $\mathbb{E}[L_i(\lambda;\gamma)]$ at level $\alpha(\gamma)$ for every $\gamma$. Equivalently, we need to find a $\hat{\lambda}$ based on $(X_i,Y_i)_{i=1}^{n}$ such that

$$\sup_{\gamma\in\Gamma}\,\mathbb{E}\left[\frac{L_{n+1}(\hat{\lambda};\gamma)}{\alpha(\gamma)}\right] \le 1. \tag{29}$$

Though the above worst-case risk is not an expectation, it can still be controlled. Towards this end, we define

$$\hat{\lambda} = \sup_{\gamma\in\Gamma}\hat{\lambda}_\gamma, \quad\text{where}\quad \hat{\lambda}_\gamma = \inf\left\{\lambda : \frac{1}{n+1}\sum_{i=1}^{n} L_i(\lambda;\gamma) + \frac{B}{n+1} \le \alpha(\gamma)\right\}. \tag{30}$$

Then the risk is controlled.

Proposition 4.4.

In the setting of Theorem 2.1, (29) is satisfied.
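With a finite index set $\Gamma$, Eq. (30) amounts to running the standard calibration once per risk and then taking the largest resulting threshold. A small sketch with two made-up indicator risks at different target levels:

```python
import numpy as np

def lambda_hat_multi(loss_list, alpha_list, lambdas, B=1.0):
    """Eq. (30): compute lambda_hat_gamma for each risk, then the sup.
    loss_list[g] is an (n, m) array of L_i(lambdas[j]; gamma_g) values."""
    n = loss_list[0].shape[0]
    lam_gammas = []
    for losses, a in zip(loss_list, alpha_list):
        crit = losses.mean(axis=0) * n / (n + 1) + B / (n + 1)
        ok = np.where(crit <= a)[0]
        lam_gammas.append(lambdas[ok[0]] if len(ok) else lambdas[-1])
    return max(lam_gammas)

# Toy example: two hypothetical indicator risks, levels 0.1 and 0.2.
rng = np.random.default_rng(2)
n, m = 300, 101
lambdas = np.linspace(0, 1, m)
s1, s2 = rng.uniform(size=n), rng.uniform(size=n)
L1 = (s1[:, None] > lambdas[None, :]).astype(float)
L2 = (s2[:, None] > lambdas[None, :]).astype(float)
lam = lambda_hat_multi([L1, L2], [0.1, 0.2], lambdas)
print(lam)
```

Because each loss is non-increasing in $\lambda$, the supremum $\hat{\lambda}$ still satisfies every individual constraint.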

4.4 Adversarial risks

We next show how to control risks defined by adversarial perturbations, adopting the same notation as Section 4.3. [bates2021distribution] (Section 6.3) discusses the adversarial risk, where $\Gamma$ parametrizes a class of perturbations of $X_{n+1}$, e.g., $L_i(\lambda;\gamma) = L(X_i+\gamma, Y_i)$ and $\Gamma = \{\gamma : \|\gamma\|_\infty \le \epsilon\}$. A researcher may want to find a $\hat{\lambda}$ based on $(X_i,Y_i)_{i=1}^{n}$ such that

$$\mathbb{E}\left[\sup_{\gamma\in\Gamma} L_{n+1}(\hat{\lambda};\gamma)\right] \le \alpha. \tag{31}$$

This can be recast as a conformal risk control problem by taking $\tilde{L}_i(\lambda) = \sup_{\gamma\in\Gamma} L_i(\lambda;\gamma)$. Then, the following choice of $\hat{\lambda}$ leads to risk control:

$$\hat{\lambda} = \inf\left\{\lambda : \frac{1}{n+1}\sum_{i=1}^{n}\tilde{L}_i(\lambda) + \frac{B}{n+1} \le \alpha\right\}. \tag{32}$$

Proposition 4.5.

In the setting of Theorem 2.1, (31) is satisfied.
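When $\Gamma$ can be approximated by a finite grid of perturbations, the recipe above is: take the worst case over $\gamma$ per calibration point, then calibrate as usual via (32). A hypothetical sketch, with an illustrative indicator loss:

```python
import numpy as np

# Finite grid standing in for Gamma = {gamma : ||gamma||_inf <= eps};
# the per-example loss is a hypothetical indicator, for illustration only.
rng = np.random.default_rng(5)
n, m = 300, 101
lambdas = np.linspace(0, 1, m)
X = rng.uniform(size=n)
gammas = np.linspace(-0.05, 0.05, 11)
# L_i(lambda; gamma) = 1{X_i + gamma > lambda}, non-increasing in lambda.
L = X[None, :, None] + gammas[:, None, None] > lambdas[None, None, :]
L_tilde = L.max(axis=0).astype(float)        # sup over gamma, per (i, lambda)
crit = L_tilde.mean(axis=0) * n / (n + 1) + 1.0 / (n + 1)
ok = np.where(crit <= 0.1)[0]                # Eq. (32) with alpha = 0.1
lam = lambdas[ok[0]] if len(ok) else lambdas[-1]
print(lam)
```

For smooth losses and continuous $\Gamma$, the inner supremum would instead be computed (or upper-bounded) by an adversarial attack subroutine.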

4.5 U-risk control

For ranking and metric learning, [bates2021distribution] considered loss functions that depend on two test points. In general, for any $k > 1$ and subset $\mathcal{S}\subset\{1,\dots,n+k\}$ with $|\mathcal{S}| = k$, let $L_{\mathcal{S}}(\lambda)$ be a loss function. Our goal is to find $\hat{\lambda}_k$ based on $(X_i,Y_i)_{i=1}^{n}$ such that

$$\mathbb{E}\big[L_{\{n+1,\dots,n+k\}}(\hat{\lambda}_k)\big] \le \alpha. \tag{33}$$

We call the LHS a U-risk since, for any fixed $\hat{\lambda}_k$, it is the expectation of an order-$k$ U-statistic. As a natural extension, we can define

$$\hat{\lambda}_k = \inf\left\{\lambda : \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda) + B\left(1-\frac{(n!)^2}{(n+k)!\,(n-k)!}\right) \le \alpha\right\}. \tag{34}$$

Again, we define $\hat{\lambda}_k = \lambda_{\max}$ when the set on the right-hand side is empty. Then we can prove the following result.

Proposition 4.6.

Assume that $L_{\mathcal{S}}(\lambda)$ is non-increasing in $\lambda$, right-continuous, and

$$L_{\mathcal{S}}(\lambda_{\max}) \le \alpha, \quad \sup_\lambda L_{\mathcal{S}}(\lambda) \le B < \infty \ \text{almost surely}.$$

Then (33) is achieved.
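For small $n$ and $k$, the threshold in (34) can be computed by brute force over all $\binom{n}{k}$ subsets. The pairwise ($k=2$) ranking-style loss below is purely illustrative:

```python
import numpy as np
from itertools import combinations
from math import factorial

def lambda_hat_k(loss, n, k, lambdas, alpha, B=1.0):
    """Eq. (34) on a grid: average L_S over all size-k subsets of the n
    calibration points, add the worst-case correction term, and take the
    smallest qualifying lambda (max(Lambda) if none qualifies)."""
    pref = factorial(k) * factorial(n) / factorial(n + k)
    avg = pref * sum(loss(S, lambdas) for S in combinations(range(n), k))
    slack = B * (1 - factorial(n) ** 2 / (factorial(n + k) * factorial(n - k)))
    ok = np.where(avg + slack <= alpha)[0]
    return lambdas[ok[0]] if len(ok) else lambdas[-1]

# Hypothetical pairwise loss: 1 if a pair's scores differ by more than lambda.
rng = np.random.default_rng(3)
n, k = 30, 2
s = rng.uniform(size=n)
lambdas = np.linspace(0, 1, 101)

def pair_loss(S, lams):
    i, j = S
    return (abs(s[i] - s[j]) > lams).astype(float)

lam = lambda_hat_k(pair_loss, n, k, lambdas, alpha=0.3)
print(lam)
```

The correction term $B(1 - (n!)^2/((n+k)!(n-k)!))$ accounts for the unseen subsets that touch the $k$ test points, exactly as in the proof of Proposition 4.6.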

5 Conclusion

This generalization of conformal prediction broadens its scope to new applications, as shown in Section 3. The mathematical tools developed in Section 2, Section 4, and the Appendix may be of independent technical interest, since they provide a new and more general language for studying conformal prediction along with new results about its validity.

Acknowledgements

The authors thank Christopher Yeh for pointing out that the factor of 2 in Theorem 2.2 may not be tight, correcting an error in an earlier draft. The incorrect proposition stating that it was tight has been removed; the tightness of this factor remains an open question. The authors would like to thank Nicolas Christianson, Amit Kohli, Sherrie Wang, and Tijana Zrniฤ‡ for comments on early drafts. A. A. would like to thank Ziheng (Tony) Wang for helpful conversations. A. A. is funded by the NSF GRFP and a Berkeley Fellowship. S. B. is supported by the NSF FODSI fellowship and the Simons institute. A. F. is partially funded by the NSF GRFP and MIT MLPDS.

Appendix A Monotonizing non-monotone risks

We next show that the proposed algorithm leads to asymptotic risk control for non-monotone risk functions when applied to a monotonized version of the empirical risk. We set the monotonized empirical risk to be

$$\hat{R}_n^\uparrow(\lambda) = \sup_{t\ge\lambda}\hat{R}_n(t), \tag{35}$$

then define

$$\hat{\lambda}_n^\uparrow = \inf\{\lambda : \hat{R}_n^\uparrow(\lambda) \le \alpha\}. \tag{36}$$

Theorem A.1.

Let the $L_i(\lambda)$ be right-continuous, i.i.d., bounded (both above and below) functions satisfying (6). Then,

$$\limsup_{n\to\infty}\mathbb{E}\big[L_{n+1}(\hat{\lambda}_n^\uparrow)\big] \le \alpha. \tag{37}$$

Theorem A.1 implies that a procedure analogous to (4) also controls the risk asymptotically. In particular, taking

$$\tilde{\lambda}^\uparrow = \inf\left\{\lambda : \hat{R}_n^\uparrow(\lambda) + \frac{B}{n+1} \le \alpha\right\} \tag{38}$$

also results in asymptotic risk control (to see this, plug $\tilde{\lambda}^\uparrow$ into Theorem A.1 and note that the risk level is bounded above by $\alpha - \frac{B}{n+1}$). Note that in the case of a monotone loss function, $\tilde{\lambda}^\uparrow = \hat{\lambda}$. However, the counterexample in Proposition 2.1 does not apply to $\tilde{\lambda}^\uparrow$, and it is currently unknown whether this procedure provides finite-sample risk control.
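On a grid, the monotonization in (35) is a running maximum taken from the right, after which (38) is a one-line search. A sketch with a hypothetical non-monotone loss (all data and constants are illustrative):

```python
import numpy as np

# Sketch of Eqs. (35)-(36)/(38) on a grid: a reversed running maximum
# monotonizes the empirical risk of a hypothetical non-monotone loss.
rng = np.random.default_rng(4)
n, alpha, B = 400, 0.1, 1.1
lambdas = np.linspace(0, 1, 200)
s = rng.uniform(size=n)
# Non-monotone losses in [0, B]: a decaying indicator with mild bumps.
losses = (s[:, None] > lambdas[None, :]) * (1 + 0.1 * np.sin(20 * lambdas))
R_hat = losses.mean(axis=0)                       # empirical risk
R_up = np.maximum.accumulate(R_hat[::-1])[::-1]   # Eq. (35): sup over t >= lambda
ok = np.where(R_up + B / (n + 1) <= alpha)[0]     # Eq. (38)
lam_up = lambdas[ok[0]] if len(ok) else lambdas[-1]
print(lam_up)
```

By construction `R_up` is non-increasing in $\lambda$, so the search for the smallest qualifying threshold is well defined even though `R_hat` is not monotone.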

Appendix BProofs

The proof of Theorem 2.2 uses the following lemma on the approximate continuity of the empirical risk.

Lemma 1 (Jump Lemma).

In the setting of Theorem 2.2, any jumps in the empirical risk are bounded, i.e.,

$$\sup_\lambda J(\hat{R}_n,\lambda) \overset{a.s.}{\le} \frac{B}{n}. \tag{39}$$

Proof of Jump Lemma, Lemma 1.

By boundedness, the maximum contribution of any single point to a jump is $\frac{B}{n}$, so

$$\exists\lambda : J(\hat{R}_n,\lambda) > \frac{B}{n} \implies \exists\lambda : J(L_i,\lambda) > 0 \text{ and } J(L_j,\lambda) > 0 \text{ for some } i\ne j. \tag{40}$$

Call $\mathcal{D}_i = \{\lambda : J(L_i,\lambda) > 0\}$ the set of discontinuities of $L_i$. Since $L_i$ is bounded and monotone, $\mathcal{D}_i$ has countably many points. The union bound then implies that

$$\mathbb{P}\left(\exists\lambda : J(\hat{R}_n,\lambda) > \frac{B}{n}\right) \le \sum_{i\ne j}\mathbb{P}(\mathcal{D}_i\cap\mathcal{D}_j\ne\emptyset). \tag{41}$$

Rewriting each term on the right-hand side using the tower property and a union bound over the countably many (not necessarily disjoint) events $\{\lambda\in\mathcal{D}_i\}$, $\lambda\in\mathcal{D}_j$, gives

$$\mathbb{P}(\mathcal{D}_i\cap\mathcal{D}_j\ne\emptyset) = \mathbb{E}\big[\mathbb{P}(\mathcal{D}_i\cap\mathcal{D}_j\ne\emptyset \mid \mathcal{D}_j)\big] \le \mathbb{E}\Bigg[\sum_{\lambda\in\mathcal{D}_j}\mathbb{P}(\lambda\in\mathcal{D}_i\mid\mathcal{D}_j)\Bigg] = \mathbb{E}\Bigg[\sum_{\lambda\in\mathcal{D}_j}\mathbb{P}(\lambda\in\mathcal{D}_i)\Bigg], \tag{42–43}$$

where the final equality is due to the independence of $\mathcal{D}_i$ and $\mathcal{D}_j$. Rewriting in terms of the jump function and applying the assumption $\mathbb{P}(J(L_i,\lambda) > 0) = 0$,

$$\mathbb{E}\Bigg[\sum_{\lambda\in\mathcal{D}_j}\mathbb{P}(\lambda\in\mathcal{D}_i)\Bigg] = \mathbb{E}\Bigg[\sum_{\lambda\in\mathcal{D}_j}\mathbb{P}(J(L_i,\lambda) > 0)\Bigg] = 0. \tag{44}$$

Chaining the above inequalities yields $\mathbb{P}\big(\exists\lambda : J(\hat{R}_n,\lambda) > \frac{B}{n}\big) \le 0$, so

$$\mathbb{P}\left(\exists\lambda : J(\hat{R}_n,\lambda) > \frac{B}{n}\right) = 0. \qquad\blacksquare$$

Proof of Theorem 2.2.

If $L_i(\lambda_{\max}) \ge \alpha - 2B/(n+1)$, then $\mathbb{E}[L_{n+1}(\hat{\lambda})] \ge \alpha - 2B/(n+1)$ immediately, since $L_{n+1}$ is non-increasing and $\hat{\lambda} \le \lambda_{\max}$. Throughout the rest of the proof, we assume that $L_i(\lambda_{\max}) < \alpha - 2B/(n+1)$. Define the quantity

$$\hat{\lambda}'' = \inf\left\{\lambda : \hat{R}_{n+1}(\lambda) + \frac{B}{n+1} \le \alpha\right\}. \tag{45}$$

Since $L_i(\lambda_{\max}) < \alpha - 2B/(n+1) < \alpha - B/(n+1)$, $\hat{\lambda}''$ exists almost surely. Deterministically, $\frac{n}{n+1}\hat{R}_n(\lambda) \le \hat{R}_{n+1}(\lambda)$, which yields $\hat{\lambda} \le \hat{\lambda}''$. Again since $L_i(\lambda)$ is non-increasing in $\lambda$,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda}'')\big] \le \mathbb{E}\big[L_{n+1}(\hat{\lambda})\big]. \tag{46}$$

By exchangeability and the fact that $\hat{\lambda}''$ is a symmetric function of $L_1,\dots,L_{n+1}$,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda}'')\big] = \mathbb{E}\big[\hat{R}_{n+1}(\hat{\lambda}'')\big]. \tag{47}$$

For the remainder of the proof we focus on lower-bounding $\hat{R}_{n+1}(\hat{\lambda}'')$. We begin with the following identity:

$$\alpha = \hat{R}_{n+1}(\hat{\lambda}'') + \frac{B}{n+1} - \left(\hat{R}_{n+1}(\hat{\lambda}'') + \frac{B}{n+1} - \alpha\right). \tag{48}$$

Rearranging the identity,

$$\hat{R}_{n+1}(\hat{\lambda}'') = \alpha - \frac{B}{n+1} + \left(\hat{R}_{n+1}(\hat{\lambda}'') + \frac{B}{n+1} - \alpha\right). \tag{49}$$

Using the Jump Lemma to bound $\big(\hat{R}_{n+1}(\hat{\lambda}'') + \frac{B}{n+1} - \alpha\big)$ below by $-\frac{B}{n+1}$ gives

$$\hat{R}_{n+1}(\hat{\lambda}'') \ge \alpha - \frac{2B}{n+1}. \tag{50}$$

Finally, chaining together the above inequalities,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \ge \mathbb{E}\big[\hat{R}_{n+1}(\hat{\lambda}'')\big] \ge \alpha - \frac{2B}{n+1}. \tag{51}$$

∎

Proof of Proposition 2.1.

Without loss of generality, we assume $B = 1$. Assume $\hat{\lambda}$ takes values in $[0,1]$ and $\alpha\in(1/(n+1), 1)$. Let $p\in(0,1)$, let $N$ be any positive integer, and let the $L_i(\lambda)$ be i.i.d. right-continuous piecewise constant (random) functions with

$$L_i(N/N) = 0, \quad \big(L_i(0/N), L_i(1/N),\dots,L_i((N-1)/N)\big) \overset{i.i.d.}{\sim} \mathrm{Ber}(p). \tag{52}$$

By definition, $\hat{\lambda}$ is independent of $L_{n+1}$. Thus, for any $j = 0,1,\dots,N-1$,

$$\big\{L_{n+1}(\hat{\lambda}) \mid \hat{\lambda} = j/N\big\} \sim \mathrm{Ber}(p), \quad \big\{L_{n+1}(\hat{\lambda}) \mid \hat{\lambda} = 1\big\} \sim \delta_0. \tag{53}$$

Then,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] = p\cdot\mathbb{P}(\hat{\lambda}\ne 1). \tag{54}$$

Note that

$$\hat{\lambda}\ne 1 \iff \min_{j\in\{0,\dots,N-1\}}\frac{1}{n+1}\sum_{i=1}^{n} L_i(j/N) \le \alpha - \frac{1}{n+1}. \tag{55}$$

Since $\alpha > 1/(n+1)$,

$$\mathbb{P}(\hat{\lambda}\ne 1) = 1 - \mathbb{P}(\hat{\lambda} = 1) = 1 - \mathbb{P}\left(\text{for all } j,\text{ we have } \frac{1}{n+1}\sum_{i=1}^{n} L_i(j/N) > \alpha - \frac{1}{n+1}\right) = 1 - \left(\sum_{k=\lceil(n+1)\alpha\rceil}^{n}\binom{n}{k}p^k(1-p)^{n-k}\right)^{N} = 1 - \big(1 - \mathrm{BinoCDF}(n, p, \lceil(n+1)\alpha\rceil - 1)\big)^{N}.$$

As a result,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] = p\left(1 - \big(1-\mathrm{BinoCDF}(n,p,\lceil(n+1)\alpha\rceil-1)\big)^{N}\right). \tag{56}$$

Now let $N$ be sufficiently large that

$$1 - \big(1-\mathrm{BinoCDF}(n,p,\lceil(n+1)\alpha\rceil-1)\big)^{N} > p. \tag{57}$$

Then

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] > p^2. \tag{58}$$

For any $\alpha < 1$, we can take $p$ close enough to 1 to render the claim false. ∎
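The binomial expression in (56) can be evaluated with the standard library alone, which makes the counterexample concrete; the values of $n$, $\alpha$, $p$, and $N$ below are illustrative:

```python
from math import comb, ceil

def bino_cdf(n, p, k):
    """P(Bin(n, p) <= k), computed directly from the pmf."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def expected_risk(n, p, alpha, N):
    """Eq. (56): E[L_{n+1}(lambda_hat)] for the counterexample."""
    tail = 1 - bino_cdf(n, p, ceil((n + 1) * alpha) - 1)
    return p * (1 - tail**N)

# Illustrative values: as N grows, the risk approaches p and exceeds alpha.
n, alpha, p = 20, 0.5, 0.8
for N in (1, 100, 1000):
    print(N, expected_risk(n, p, alpha, N))
```

Since the per-level failure probability `tail` is strictly below 1, the expected risk tends to $p$ as $N\to\infty$, which exceeds $\alpha$ whenever $p > \alpha$.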

Proof of Theorem A.1.

Define the monotonized population risk as

$$R^\uparrow(\lambda) = \sup_{t\ge\lambda}\mathbb{E}\big[L_{n+1}(t)\big]. \tag{59}$$

Note that the independence of $L_{n+1}$ and $\hat{\lambda}_n^\uparrow$ implies that for all $n$,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda}_n^\uparrow)\big] \le \mathbb{E}\big[R^\uparrow(\hat{\lambda}_n^\uparrow)\big]. \tag{60}$$

Since $R^\uparrow$ is bounded, monotone, and one-dimensional, a generalization of the Glivenko–Cantelli theorem given in Theorem 1 of [dehardt1971generalizations] yields

$$\sup_\lambda\big|\hat{R}_n(\lambda) - R(\lambda)\big| \overset{a.s.}{\longrightarrow} 0. \tag{61}$$

As a result,

$$\sup_\lambda\big|\hat{R}_n^\uparrow(\lambda) - R^\uparrow(\lambda)\big| \overset{a.s.}{\longrightarrow} 0, \tag{62}$$

which implies that

$$\big|\hat{R}_n^\uparrow(\hat{\lambda}_n^\uparrow) - R^\uparrow(\hat{\lambda}_n^\uparrow)\big| \overset{a.s.}{\longrightarrow} 0. \tag{63}$$

By definition, $\hat{R}_n^\uparrow(\hat{\lambda}_n^\uparrow) \le \alpha$ almost surely, and thus this directly implies

$$\limsup_{n\to\infty} R^\uparrow(\hat{\lambda}_n^\uparrow) \le \alpha \quad\text{a.s.} \tag{64}$$

Finally, since $R^\uparrow(\hat{\lambda}_n^\uparrow) \le B$ for all $n$, Fatou's lemma (in its reverse form) gives

$$\limsup_{n\to\infty}\mathbb{E}\big[L_{n+1}(\hat{\lambda}_n^\uparrow)\big] \le \limsup_{n\to\infty}\mathbb{E}\big[R^\uparrow(\hat{\lambda}_n^\uparrow)\big] \le \mathbb{E}\Big[\limsup_{n\to\infty} R^\uparrow(\hat{\lambda}_n^\uparrow)\Big] \le \alpha. \tag{65}$$

∎

Proof of Proposition 4.1.

Let

$$\hat{\lambda}' = \inf\left\{\lambda : \frac{\sum_{i=1}^{n+1} w(X_i)L_i(\lambda)}{\sum_{i=1}^{n+1} w(X_i)} \le \alpha\right\}. \tag{66}$$

Since $\inf_\lambda L_i(\lambda) \le \alpha$, $\hat{\lambda}'$ exists almost surely. Using the same argument as in the proof of Theorem 2.1, we can show that $\hat{\lambda}' \le \hat{\lambda}(X_{n+1})$. Since $L_{n+1}(\lambda)$ is non-increasing in $\lambda$,

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda}(X_{n+1}))\big] \le \mathbb{E}\big[L_{n+1}(\hat{\lambda}')\big].$$

Let $E$ be the multiset of data points $\{(X_1,Y_1),\dots,(X_{n+1},Y_{n+1})\}$. Then $\hat{\lambda}'$ is a function of $E$, or, equivalently, $\hat{\lambda}'$ is a constant conditional on $E$. Lemma 3 of [tibshirani2019conformal] implies that

$$(X_{n+1},Y_{n+1})\mid E \sim \sum_{i=1}^{n+1}\frac{w(X_i)}{\sum_{j=1}^{n+1}w(X_j)}\,\delta_{(X_i,Y_i)} \implies L_{n+1}\mid E \sim \sum_{i=1}^{n+1}\frac{w(X_i)}{\sum_{j=1}^{n+1}w(X_j)}\,\delta_{L_i},$$

where $\delta_z$ denotes the Dirac measure at $z$. Together with the right-continuity of $L_i$, the above result implies

$$\mathbb{E}\big[L_{n+1}(\hat{\lambda}')\mid E\big] = \frac{\sum_{i=1}^{n+1}w(X_i)L_i(\hat{\lambda}')}{\sum_{i=1}^{n+1}w(X_i)} \le \alpha. \tag{67}$$

The proof is then completed by the law of total expectation. ∎

Proof of Proposition 4.2.

Define the vector $Z' = (Z_1',\dots,Z_n', Z_{n+1})$, where $Z_i' \overset{i.i.d.}{\sim} \mathcal{L}(Z_{n+1})$ for all $i\in[n]$. Let

$$\epsilon = \sum_{i=1}^{n}\mathrm{TV}(Z_i, Z_i').$$

By sublinearity,

$$\mathrm{TV}(Z, Z') \le \epsilon. \tag{68}$$

It is a standard fact that (68) implies

$$\sup_{f\in\mathcal{F}_{\mathbb{1}}}\big|\mathbb{E}[f(Z)] - \mathbb{E}[f(Z')]\big| \le \epsilon, \tag{69}$$

where $\mathcal{F}_{\mathbb{1}} = \{f : \mathcal{Z}\mapsto[0,1]\}$. Let $\ell : \mathcal{Z}\times\Lambda \to [0,B]$ be a bounded loss function. Furthermore, let $g(z) = \ell\big(z_{n+1}; \hat{\lambda}(z_1,\dots,z_n)\big)$. Since $g(Z)\in[0,B]$,

$$\big|\mathbb{E}[g(Z)] - \mathbb{E}[g(Z')]\big| \le B\epsilon. \tag{70}$$

Furthermore, since $Z_1',\dots,Z_n',Z_{n+1}$ are exchangeable, we can apply Theorems 2.1 and 2.2 to $\mathbb{E}[g(Z')]$, recovering

$$\alpha - \frac{2B}{n+1} \le \mathbb{E}[g(Z')] \le \alpha. \tag{71}$$

A final application of the triangle inequality implies the result:

$$\alpha - \frac{2B}{n+1} - B\epsilon \le \mathbb{E}[g(Z)] \le \alpha + B\epsilon. \tag{72}$$

∎

Proof of Proposition 4.3.

It remains to prove that $\tilde{L}_i(\lambda)$ satisfies the conditions of Theorem 2.1. It is clear that $\tilde{L}_i(\lambda) \le 1$ and that $\tilde{L}_i(\lambda)$ is non-increasing in $\lambda$ when $L_i(\lambda)$ is. Since $L_i(\lambda)$ is non-increasing and right-continuous, for any sequence $\lambda_m\downarrow\lambda$,

$$L_i(\lambda_m)\uparrow L_i(\lambda) \implies \mathbb{1}\{L_i(\lambda_m) > \alpha\} \to \mathbb{1}\{L_i(\lambda) > \alpha\}.$$

Thus, $\tilde{L}_i(\lambda)$ is right-continuous. Finally, $L_i(\lambda_{\max}) \le \alpha$ implies $\tilde{L}_i(\lambda_{\max}) = 0 \le 1-\beta$. ∎

Proof of Proposition 4.4.

Examining (30), for each $\gamma\in\Gamma$ we have $\hat{\lambda} \ge \hat{\lambda}_\gamma$, so by monotonicity

$$\mathbb{E}\big[L(\hat{\lambda},\gamma)\big] \le \mathbb{E}\big[L(\hat{\lambda}_\gamma,\gamma)\big] \le \alpha(\gamma). \tag{73}$$

Thus, dividing both sides by $\alpha(\gamma)$ and taking the supremum, we get that $\sup_{\gamma\in\Gamma}\mathbb{E}\left[\frac{L(\hat{\lambda},\gamma)}{\alpha(\gamma)}\right] \le 1$, and the worst-case risk is controlled. ∎

Proof of Proposition 4.5.

Because $L_i(\lambda;\gamma)$ is bounded and monotone in $\lambda$ for all choices of $\gamma$, it is also true that $\tilde{L}_i(\lambda)$ is bounded and monotone. Furthermore, the pointwise supremum of right-continuous non-increasing functions is right-continuous. Therefore, the $\tilde{L}_i$ satisfy the assumptions of Theorem 2.1. ∎

Proof of Proposition 4.6.

Let

$$\hat{\lambda}_k' = \inf\left\{\lambda : \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n+k\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda) \le \alpha\right\}.$$

Since $L_{\mathcal{S}}(\lambda_{\max}) \le \alpha$, $\hat{\lambda}_k'$ exists almost surely. Since $L_{\mathcal{S}}(\lambda) \le B$, we have

$$\begin{aligned}
\frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n+k\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda)
&\le \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda) + B\cdot\frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\cap\{n+1,\dots,n+k\}\ne\emptyset,\,|\mathcal{S}|=k} 1\\
&= \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda) + B\left(1 - \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n\},\,|\mathcal{S}|=k} 1\right)\\
&= \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\lambda) + B\left(1 - \frac{(n!)^2}{(n+k)!\,(n-k)!}\right).
\end{aligned}$$

Since $L_{\mathcal{S}}(\lambda)$ is non-increasing in $\lambda$, we conclude that $\hat{\lambda}_k' \le \hat{\lambda}_k$ if the set on the right-hand side of (34) is non-empty; otherwise, by definition, $\hat{\lambda}_k' \le \lambda_{\max} = \hat{\lambda}_k$. Thus, $\hat{\lambda}_k' \le \hat{\lambda}_k$ almost surely. Let $E$ be the multiset of loss functions $\{L_{\mathcal{S}} : \mathcal{S}\subset\{1,\dots,n+k\},\,|\mathcal{S}| = k\}$. Using the same argument as at the end of the proof of Theorem 2.1 and the right-continuity of $L_{\mathcal{S}}$, we can show that

$$\mathbb{E}\big[L_{\{n+1,\dots,n+k\}}(\hat{\lambda}_k')\mid E\big] = \frac{k!\,n!}{(n+k)!}\sum_{\mathcal{S}\subset\{1,\dots,n+k\},\,|\mathcal{S}|=k} L_{\mathcal{S}}(\hat{\lambda}_k') \le \alpha.$$

The proof is then completed by the law of iterated expectation. ∎

