
A Distribution Optimization Framework for Confidence Bounds of Risk Measures

Hao Liang $^{1,2}$ Zhi-quan Luo $^{1,2}$

Abstract

We present a distribution optimization framework that significantly improves confidence bounds for various risk measures compared to previous methods. Our framework encompasses popular risk measures such as the entropic risk measure, conditional value at risk (CVaR), spectral risk measure, distortion risk measure, certainty equivalent, and rank-dependent expected utility, which are well established in the risk-sensitive decision-making literature. To achieve this, we introduce two estimation schemes based on concentration bounds derived from the empirical distribution, using either the Wasserstein distance or the supremum distance. Unlike traditional approaches that add or subtract a confidence radius from the empirical risk measure, our proposed schemes evaluate a specific transformation of the empirical distribution based on the distance. Consequently, our confidence bounds consistently yield tighter results than previous methods. We further verify the efficacy of the proposed framework by providing a tighter problem-dependent regret bound for the CVaR bandit.

1. Introduction

The conventional machine learning literature primarily relies on the expected value or mean of a random variable as the performance metric for a given algorithm. However, in certain critical applications such as finance or medical treatment, the decision-maker's focus extends beyond the expected value and emphasizes other characteristics of the distribution. For instance, a risk-averse portfolio manager may place greater importance on tail behavior than expected value. To capture this risk-aware perspective, the decision

$^{1}$ School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen $^{2}$ Shenzhen Research Institute of Big Data. Correspondence to: Hao Liang haoliang1@link.cuhk.edu.cn.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

maker selects a risk measure (RM) as an alternative to the expected value, effectively representing their specific attitude towards risk.

In practice, however, it is often infeasible to directly evaluate the risk measure of the unknown underlying distribution. Instead, we must rely on constructing the point estimator based on finite samples. Consequently, the confidence interval that quantifies the coverage of the true risk measure becomes crucial in the risk-sensitive setting, as it certifies a trustworthy range for the decision maker.

In this paper, we aim to derive confidence bounds for several classes of risk measures: the Conditional Value at Risk (CVaR), the spectral risk measure (SRM), the distortion risk measure (DRM), the entropic risk measure (ERM), the certainty equivalent (CE), and the rank-dependent expected utility (RDEU). CVaR, which represents the expected value within a fraction of the worst outcomes, is widely used in safety-critical applications such as medical treatment. Despite its practical utility, CVaR is limited in expressing various risk preferences, as it assigns equal weight to all losses beyond a certain threshold. To address this, the SRM offers a notable generalization by incorporating a non-constant weighting function, enhancing flexibility in risk assessment. The DRM originated in insurance problems and was later applied to investment risks; it encompasses CVaR as a special case and has gained attention in various fields. The ERM is a well-known risk measure in mathematical finance and Markovian decision processes. Furthermore, the CE generalizes the ERM by replacing the exponential utility function with a more flexible one, enhancing the model's capability to capture a broader range of risk preferences. The RDEU contributes to understanding decision-making under uncertainty and has been widely applied in diverse domains such as finance, psychology, and health economics.

In the existing literature, the confidence interval is commonly obtained through a concentration inequality, which bounds the deviation between the point estimator and the true risk with high probability. This deviation, referred to as the confidence radius, depends on the sample size and confidence level. Conventionally, the upper or lower confidence bound is determined by adding or subtracting the confidence radius from the point estimator. In this paper, we present two innovative approaches that construct confidence bounds for risk measures without relying on concentration inequalities. Our main contributions are summarized as follows.

(1) We propose a unified framework to obtain refined confidence bounds for several classes of risk measures, specifically for bounded distributions. We recast the problem of determining the confidence bound for risk measures based on finite samples as a constrained optimization problem. In particular, we optimize the value of risk measure over a confidence ball of distributions centered around the empirical distribution function (EDF). Furthermore, we obtain the closed-form solution that can be viewed as a transformation of the EDF. We set the confidence bound as the optimal solution's risk measure value. Notably, the computational overhead increases only marginally.

(2) We introduce a new baseline approach that leverages the local Lipschitz constant of a risk measure over the confidence ball, which may be of independent interest. In contrast, the previous bounds rely on the global Lipschitz constant over the entire space of bounded distributions. In addition, we suggest a systematic way to compute the local Lipschitz constant and show that our bounds outperform the new baseline approach in certain scenarios.

(3) As a minor contribution, we propose a meta-algorithm that handles generic risk measures. Specifically, the meta-algorithm specializes to the CVaR-UCB algorithm (Tamkin et al., 2019) for CVaR bandit problems. Interestingly, Tamkin et al. (2019) empirically observe that CVaR-UCB outperforms the global Lipschitz constant-based algorithm U-UCB (Cassel et al., 2018) by an order of magnitude. Still, they only provide a regret bound that matches that of U-UCB. We fill this gap by providing an improved regret upper bound, quantifying the magnitude of the improvement.

1.1. Related Work

Confidence bounds of risk measures The concentration of CVaR has been extensively explored in the literature, cf. Brown (2007); Wang & Gao (2010); Thomas & Learned-Miller (2019); Kolla et al. (2019); Prashanth et al. (2020); LA & Bhat (2022). The first three references primarily focus on bounded distributions, while the remaining references consider unbounded distributions, including sub-Gaussian, sub-exponential, and heavy-tailed distributions. Pandey et al. (2019); LA & Bhat (2022) provide tail bounds for bounded, sub-Gaussian, or sub-exponential distributions. The concentration bounds for DRM, CE, and RDEU are presented in LA & Bhat (2022).

Lipschitz constant-based methods Kock et al. (2021); LA & Bhat (2022) relate the estimation error to the Wasserstein distance between the true and empirical distributions and then use concentration bounds for the latter. LA & Bhat (2022) establishes concentration bounds for empirical estimates for a broad class of risk measures, including CVaR, SRM, DRM, RDEU, etc. They derive the concentration bounds via the global Lipschitz constant of the risk measure over the Wasserstein distance for bounded, sub-Gaussian, and sub-exponential distributions. Our bounds only apply to bounded distributions, but we demonstrate that our bounds are tighter than their results whenever they are valid. The computation of the global Lipschitz constant can be challenging, particularly for highly nonlinear risk measures. In many cases, one may only obtain its upper bound as a surrogate, which further loosens the resulting bounds. In contrast, our framework does not require knowledge of the Lipschitz constant. Kock et al. (2021) obtain the concentration bounds for general functionals using the supremum distance instead of the Wasserstein distance. While their work primarily focuses on inequality, poverty, and welfare measures, their methodology can be extended to encompass the risk measures mentioned above. The resulting bounds apply to bounded distributions and are looser than ours. In addition, Liang & Luo (2023) focuses on risk-sensitive reinforcement learning with dynamic risk measures and leverages the Lipschitz property of risk measures to derive regret upper bounds. By quantifying the Lipschitz constants, Liang & Luo (2023) provide regret bounds that depend on these constants.

Off-policy risk evaluation Chandak et al. (2021); Huang et al. (2021) study the off-policy evaluation of functionals of the reward or return distribution in the bandit or RL setting. Chandak et al. (2021) formulates the problem of interval estimation for various functionals as a constrained optimization problem over a confidence band, which bears similarity to Formulation 6 in our paper. Meanwhile, our work differs from Chandak et al. (2021) in two aspects. First, Chandak et al. (2021) focuses on various functionals and derives the optimal solution for different functionals by a case-by-case geometric analysis. In particular, their method applies to the mean, variance, quantiles, inter-quantile range, CVaR, and entropy. In contrast, our framework focuses on general risk measures, including but not limited to ERM, CVaR, SRM, DRM, CE, and RDEU. We leverage the intrinsic property of risk measures, namely monotonicity, to derive closed-form optimal solutions that are common across different risk measures. In particular, our derivation for confidence bounds of CVaR differs from that in Chandak et al. (2021). Notably, our work is complementary to Chandak et al. (2021) in terms of the applicability of functionals. Our framework can handle arbitrary risk measures using a common optimal solution, while Chandak et al. (2021) provides confidence bounds for CVaR and other functionals that are not risk measures, where the optimal solution depends on the specific functional. Huang et al. (2021) deal with the off-policy evaluation of Lipschitz risk measures based on their global Lipschitz constant with respect to the supremum distance.

The rest of the paper is organized as follows. We introduce basic concepts and notation in Section 2. We present our new framework under the Wasserstein distance and the supremum distance in Section 3, and derive the closed-form solutions in Section 4. We then provide a new baseline method, which bridges our framework and the previous global Lipschitz constant-based method, in Section 5. We validate the proposed framework by applying it to risk-sensitive bandit problems in Section 6, and provide numerical experiments in Section 7. Finally, we provide concluding remarks in Section 8.

2. Preliminaries

We introduce some notations here. Let $a < b$ be two real numbers. We denote by $\mathcal{D}([a, b])$ and $\mathcal{D}$ the space of all cumulative distribution functions (CDFs) supported on $[a, b]$ and the space of all CDFs on reals respectively. For a CDF $F \in \mathcal{D}$ , let $X_1, X_2, \dots, X_n$ be $n$ i.i.d. samples from $F$ . We denote by $F_n$ the empirical distribution function corresponding to these samples:

F_n(\cdot) \triangleq \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}\{X_i \leq \cdot\} = \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i},

where $\mathbb{I}$ is the indicator function and $\delta$ is the Dirac measure. We denote by $F^{-1}:(0,1] \to \mathbb{R}$ the inverse distribution function (IDF) of $F$, i.e., the quantile function $F^{-1}(y) \triangleq \inf\{x \in \mathbb{R} : F(x) \geq y\}$.

Supremum distance For two CDFs $F,G\in \mathcal{D}$ , the supremum distance between them is defined as

\| F - G \|_{\infty} \triangleq \sup_{x \in \mathbb{R}} | F(x) - G(x) |.

The DKW inequality (Dvoretzky et al., 1956; Massart, 1990) bounds the deviation of the empirical distribution from the true distribution in terms of the supremum distance with high probability.

Fact 1 (Two-sided DKW inequality). Let $\delta \in (0,1]$. Then the following holds with probability at least $1 - \delta$:

\left\| F - F_n \right\|_{\infty} \leq c_n^{\infty} \triangleq \sqrt{\frac{\log(2/\delta)}{2n}}, \tag{1}

where $c_{n}^{\infty}$ is the concentration radius.

The DKW inequality holds for any distribution, including discrete and unbounded distributions.
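For concreteness, the radius in Equation 1 is straightforward to compute; the following is a minimal sketch (the function name is ours, not the authors'):

```python
import math

def dkw_radius(n: int, delta: float) -> float:
    """Two-sided DKW radius: c_n^inf = sqrt(log(2/delta) / (2n))."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# With n = 100 samples at confidence level 1 - delta = 0.95, the EDF lies
# within c_n^inf of the true CDF uniformly in x, with probability >= 0.95.
radius = dkw_radius(100, 0.05)
```

Note the familiar $1/\sqrt{n}$ behavior: quadrupling the sample size halves the radius.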

Wasserstein distance For CDFs $F, G \in \mathcal{D}$ , the Wasserstein distance between them is defined as

W_1(F, G) \triangleq \int_{-\infty}^{\infty} | F(x) - G(x) |\, dx.

$W_1(F,G)$ can be expressed as the $\ell_1$ distance between $F$ and $G$. Therefore we also write $W_1(F,G) = \| F - G \|_1$. Fournier & Guillin (2015) establish concentration bounds on the Wasserstein distance between the EDF and the underlying distribution without explicit constants. LA & Bhat (2022) give concentration results for sub-Gaussian distributions with explicit constants. As a corollary, Fact 2 provides the concentration bound for bounded distributions.

Fact 2. Let $F \in \mathcal{D}([a,b])$. With probability at least $1 - \delta$, for every $n \geq \log(1/\delta)$,

\left\| F - F_n \right\|_1 \leq c_n^1 \triangleq \frac{256\,(b-a)}{\sqrt{n}} + 8\,(b-a)\sqrt{\frac{e \log(1/\delta)}{n}}, \tag{2}

where $c_{n}^{1}$ is the concentration radius.
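For intuition, the radius of Equation 2 and the $W_1$ distance between two empirical distributions (which, for equal sample sizes, reduces to the mean absolute difference of order statistics) can be computed as follows; this is our own illustrative sketch, not the authors' code:

```python
import math

def wasserstein_radius(n: int, delta: float, a: float, b: float) -> float:
    """Radius c_n^1 from Fact 2; valid for n >= log(1/delta)."""
    assert n >= math.log(1.0 / delta)
    return (256.0 * (b - a) / math.sqrt(n)
            + 8.0 * (b - a) * math.sqrt(math.e * math.log(1.0 / delta) / n))

def w1_between_edfs(xs, ys):
    """W1 distance between two EDFs with the same sample size n:
    (1/n) * sum_i |x_(i) - y_(i)| over sorted samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

The large leading constant $256$ makes $c_n^1$ rather loose for moderate $n$, which is one reason the supremum-distance route can be preferable in practice.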

Risk measure In this paper, we interpret the random variable as a loss instead of a reward. For two random variables $X \sim F$ and $Y \sim G$, we say that $Y$ dominates $X$ if $F(x) \geq G(x)$ for all $x \in \mathbb{R}$, and we write $Y \succeq X$. A risk measure $\mathbf{T}$ is defined as a functional mapping from a set of random variables $\mathcal{X}$ to the reals that satisfies the following conditions (Föllmer & Schied, 2010; Weber, 2006):

  • Monotonicity: $X \preceq Y \Rightarrow \mathbf{T}(X) \leq \mathbf{T}(Y)$
  • Translation-invariance: $\mathbf{T}(X + c) = \mathbf{T}(X) + c, c \in \mathbb{R}$

A risk measure $\mathbf{T}$ is said to be distribution-invariant if $\mathbf{T}(X) = \mathbf{T}(Y)$ when $X$ and $Y$ follow the same distribution (Acerbi, 2002; Weber, 2006). In this paper, we only consider distribution-invariant risk measures, and we write $\mathbf{T}(F) = \mathbf{T}(X)$ for simplicity. We remark that there are other functionals mapping a random variable to a real number, e.g., the inequality measures (Kock et al., 2021), which do not satisfy monotonicity. In this paper, we derive confidence bounds for several classes of risk measures. It turns out that the monotonicity of risk measures plays an essential role in our optimization framework.

Table 1 summarizes the relevant risk measures considered in this paper. These risk measures are grouped into classes, namely SRM, DRM, CE, and RDEU. CVaR and ERM belong to the SRM and CE classes, respectively, based on specific choices of the weighting function $\phi$ and the utility function $u$ . The specific conditions related to the definitions of these risk measures are listed below. Please refer to Appendix B for detailed descriptions.

  • SRM: $\phi : [0,1] \to [0,\infty)$ is increasing and satisfies $\int_0^1 \phi(y)\, dy = 1$.

Table 1: List of risk measures

RM   | Notation      | Definition
SRM  | $M_\phi(F)$   | $\int_0^1 \phi(y)\, F^{-1}(y)\, dy$
DRM  | $\rho_g(F)$   | $\int_0^{\infty} g(1 - F(x))\, dx$
CE   | $E_u(F)$      | $u^{-1}\left\{ \int_{\mathbb{R}} u(x)\, dF(x) \right\}$
RDEU | $V(F)$        | $\int_a^b v(x)\, dw(F(x))$
CVaR | $C_\alpha(F)$ | $\inf_{\nu \in \mathbb{R}} \left\{ \nu + \frac{1}{\alpha}\, \mathbb{E}_{X \sim F}\left[(X - \nu)^+\right] \right\}$
ERM  | $U_\beta(F)$  | $\frac{1}{\beta} \log \left\{ \int_{\mathbb{R}} \exp(\beta x)\, dF(x) \right\}$
  • DRM: $g:[0,1] \to [0,1]$ is a continuous, concave and increasing function with $g(0) = 0$ and $g(1) = 1$ .
  • CE: $u$ is a continuous, convex, and strictly increasing function.
  • RDEU: $w:[0,1] \to [0,1]$ is an increasing weight function with $w(0) = 0$ and $w(1) = 1$; $v:\mathbb{R} \to \mathbb{R}$ is an (unbounded) increasing differentiable function with $v(0) = 0$.
  • CVaR: an instance of SRM with $\phi(y) = \frac{1}{\alpha} \mathbb{I}\{y \geq 1 - \alpha\}$.
  • ERM: an instance of CE with $u(x) = \exp(\beta x)$ .

It is more convenient to represent some risk measures using IDF, e.g., SRM $M_{\phi}(F) = \int_0^1 \phi(y) F^{-1}(y) dy$ . For this reason, we overload notation and write $\mathbf{T}(F^{-1}) = \mathbf{T}(F)$ whenever convenient for some $\mathbf{T}$ .
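Plugging the EDF into the definitions above yields point estimators. The sketch below (our own illustration under the loss convention of this section, with the boundary atom at mass $\alpha n$ handled explicitly) computes empirical CVaR and ERM:

```python
import math

def empirical_cvar(samples, alpha):
    """CVaR of the EDF: (1/alpha) * integral_{1-alpha}^1 F_n^{-1}(y) dy,
    i.e. the average of the worst (largest) alpha-fraction of losses."""
    xs = sorted(samples)          # order statistics X_(1) <= ... <= X_(n)
    n = len(xs)
    k = alpha * n                 # tail mass, measured in units of atoms
    m = math.ceil(k)              # number of order statistics touching the tail
    # the top m-1 atoms contribute 1/n each; the boundary atom contributes k-(m-1)
    tail = sum(xs[n - m + 1:]) / n + (k - (m - 1)) / n * xs[n - m]
    return tail / alpha

def empirical_erm(samples, beta):
    """Entropic risk measure of the EDF: (1/beta) * log(mean of exp(beta * X))."""
    return math.log(sum(math.exp(beta * x) for x in samples) / len(samples)) / beta
```

For instance, with samples $\{0,1,2,3\}$ and $\alpha = 0.5$, the empirical CVaR is the mean of the two largest losses, $2.5$.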

3. Distribution Optimization Framework

3.1. Global Lipschitz Constant-based Approach

Fact 2 and Fact 1 present the concentration bound of the empirical distribution in terms of the Wasserstein distance and the supremum distance, respectively. They can be written in a unified way: with probability at least $1 - \delta$ , we have

\left\| F - F_n \right\|_p \leq c_n^p, \tag{3}

where $p = 1$ indicates the Wasserstein distance and $p = \infty$ indicates the supremum distance. To relate the concentration bound of EDF to that of risk measure, Kock et al. (2021); Bhat & LA (2019) use the Lipschitz property of the risk measure, i.e., for any two CDFs $F,G\in \mathcal{D}([a,b])$ , there exists $L_{p}(\mathbf{T}) > 0$ such that the risk measure $\mathbf{T}$ satisfies

| \mathbf{T}(F) - \mathbf{T}(G) | \leq L_p(\mathbf{T})\, \| F - G \|_p. \tag{4}

$L_p(\mathbf{T})$ is called the global Lipschitz constant (GLC) of $\mathbf{T}$ w.r.t. $\|\cdot\|_p$ since the inequality holds for all possible pairs of CDFs. Combining Equation 3 and Equation 4, Kock et al. (2021); Bhat & LA (2019) establish the concentration bounds of a class of Lipschitz functionals

\mathbf{T}(F_n) - L_p(\mathbf{T})\, c_n^p \leq \mathbf{T}(F) \leq \mathbf{T}(F_n) + L_p(\mathbf{T})\, c_n^p.

The quality of the above bounds relies on the tightness of $L_{p}(\mathbf{T})$ , so the finest bounds one can get fall back on identifying the tightest GLC

L_p(\mathbf{T}) \triangleq \sup_{G, G' \in \mathcal{D}([a,b])} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p},

where we overload the notation of $L_p(\mathbf{T})$. The GLC-based approach suffers from several limitations. The GLC may not be easy to compute, especially for highly nonlinear risk measures; in most cases, one may only obtain an upper bound as a surrogate. Moreover, the resulting concentration bounds are far from optimal: the confidence bounds are set to the product of the GLC and the confidence radius, but the GLC is loose since it is evaluated over the whole space of bounded distributions.
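To make the GLC-based recipe concrete for one risk measure: for CVaR at level $\alpha$, the constant $1/\alpha$ is a valid GLC w.r.t. the Wasserstein distance, since CVaR averages quantiles with weight at most $1/\alpha$. The sketch below (our own illustration; we assume $\alpha n$ is an integer for brevity) builds the interval $\mathbf{T}(F_n) \pm L\, c_n^1$:

```python
import math

def glc_cvar_bounds(samples, alpha, delta, a, b):
    """GLC-based confidence interval for CVaR_alpha on D([a, b]):
    [T(F_n) - L * c_n^1, T(F_n) + L * c_n^1] with L = 1/alpha."""
    n = len(samples)
    c_n = (256.0 * (b - a) / math.sqrt(n)
           + 8.0 * (b - a) * math.sqrt(math.e * math.log(1.0 / delta) / n))
    xs = sorted(samples)
    k = round(alpha * n)              # assume alpha * n is an integer
    cvar_hat = sum(xs[n - k:]) / k    # mean of the k largest losses
    L = 1.0 / alpha
    return cvar_hat - L * c_n, cvar_hat + L * c_n
```

For moderate $n$ these bounds typically escape the support $[a,b]$ entirely, which is precisely the looseness our framework targets.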

3.2. Local Lipschitz Constant-based Approach

Before introducing our framework as a remedy, we propose a new baseline approach that improves the previous bounds. Observe that Equation 3 together with the boundedness of $F$ can be written as the norm ball constraint

B_p\left(F_n, c_n^p\right) \triangleq \left\{ F \mid \| F - F_n \|_p \leq c_n^p,\ F \in \mathcal{D}([a,b]) \right\}.

Define the local Lipschitz constant (LLC) over $B_{p}(F_{n},c_{n}^{p})$

\begin{array}{l} L_p\left(\mathbf{T}; F_n, c_n^p\right) \triangleq \sup_{G, G' \in B_p\left(F_n, c_n^p\right)} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p} \\ \leq \sup_{G, G' \in \mathcal{D}([a,b])} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p} = L_p(\mathbf{T}). \end{array}

For simplicity, we drop $\mathbf{T}$ from the Lipschitz constants. We thus obtain the tighter upper/lower confidence bound (UCB/LCB)

\mathbf{T}(F_n) + (-)\, L_p\left(F_n, c_n^p\right) c_n^p \leq (\geq)\ \mathbf{T}(F_n) + (-)\, L_p\, c_n^p.

As the sample size increases, the confidence radius $c_n^p$ shrinks, leading to a smaller LLC and sharper bounds. In contrast, the previous bounds do not adapt to the sample size.

3.3. Distribution Optimization Framework

We now propose our unified framework to derive confidence bounds for a broad range of risk measures. The idea is quite simple and intuitive. Given a risk measure, we maximize/minimize the risk measure value over the confidence ball and set the maximal/minimal value as the UCB/LCB. By recasting the problem of finding the confidence bounds

to a constrained optimization problem, we obtain the optimal bounds from Equation 3. Different choices of distances lead to two types of frameworks:

\begin{array}{ll} \max_{G \in \mathcal{D}([a,b])} & \mathbf{T}(G) \\ \text{s.t.} & \| G - F_n \|_1 \leq c_n^1 \end{array} \tag{5}

and

\begin{array}{ll} \max_{G \in \mathcal{D}([a,b])} & \mathbf{T}(G) \\ \text{s.t.} & \| G - F_n \|_{\infty} \leq c_n^{\infty} \end{array} \tag{6}

We obtain the LCB by replacing the maximization with a minimization. Denote by $\overline{F_n^p}$ (resp. $\underline{F_n^p}$) the optimal solution; then the UCB and LCB are set to $\mathbf{T}\left(\overline{F_n^p}\right)$ and $\mathbf{T}\left(\underline{F_n^p}\right)$, respectively. In the sequel, we may drop $p$ when the statement holds for either $p = 1$ or $p = \infty$.

To demonstrate the optimality of our framework, observe that $\overline{F_n} \in B(F_n, c_n)$ , therefore

\mathbf{T}\left(\overline{F_n}\right) \leq \mathbf{T}(F_n) + L\left(F_n, c_n\right) c_n \leq \mathbf{T}(F_n) + L c_n. \tag{7}

Hence $\mathbf{T}\left(\overline{F_n}\right)$ is tighter than the bound derived from the tightest LLC, our new baseline approach, which already improves on the previous bounds.

One may wonder whether $\overline{F_n}$ and $\underline{F_n}$ are easy to obtain. Fortunately, we will show in the next section that they admit analytic forms for almost all risk measures introduced in Section 2. Moreover, we will use Equation 7 to quantify the tightness of our confidence bounds in Section 5. For ease of notation, we will omit $\mathcal{D}([a,b])$.
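Anticipating the closed forms of Section 4, the supremum-distance UCB $\mathbf{T}(\overline{F_n^\infty})$ can be compared against a GLC baseline for CVaR. The following sketch is our own illustration; it uses the constant $L_\infty = (b-a)/\alpha$, which is valid for CVaR via $W_1 \leq (b-a)\,\|\cdot\|_\infty$, and it confirms the optimization-based UCB never exceeds the GLC one:

```python
import math

def cvar_of_pmf(atoms, probs, alpha):
    """CVaR: average of the worst alpha-fraction of a finite loss distribution."""
    tail, acc = 0.0, 0.0
    for x, p in sorted(zip(atoms, probs), reverse=True):
        take = min(p, max(alpha - acc, 0.0))
        tail += take * x
        acc += take
    return tail / alpha

def cvar_ucbs(samples, alpha, delta, a, b):
    """Return (optimization-based UCB, GLC-based UCB) for CVaR_alpha."""
    n = len(samples)
    c = min(math.sqrt(math.log(2.0 / delta) / (2.0 * n)), 1.0)  # DKW radius
    xs = sorted(samples)
    # P_c^inf F_n in PMF form: leftmost atoms with total mass c move to b
    probs, moved = [1.0 / n] * n, 0.0
    for i in range(n):
        take = min(probs[i], max(c - moved, 0.0))
        probs[i] -= take
        moved += take
    ucb_opt = cvar_of_pmf(xs + [b], probs + [moved], alpha)
    ucb_glc = cvar_of_pmf(xs, [1.0 / n] * n, alpha) + (b - a) / alpha * c
    return ucb_opt, ucb_glc
```

Note that `ucb_opt` can never exceed $b$, whereas the GLC bound routinely does.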

4. Closed-form Solution

The following theorems present the closed-form solutions to Formulation 5-6. The proofs are deferred to Appendix C.

Theorem 4.1. For any risk measure satisfying monotonicity, the optimal solution to Formulation 6 is given by

\overline{F_n^{\infty}} = \mathbf{P}_{c_n^{\infty}}^{\infty} F_n, \qquad \underline{F_n^{\infty}} = \mathbf{N}_{c_n^{\infty}}^{\infty} F_n, \tag{8}

where $\mathbf{P}_c^\infty /\mathbf{N}_c^\infty : \mathcal{D}([a,b]) \to \mathcal{D}([a,b])$ is the positive/negative operator with coefficient $c > 0$ for the supremum distance, which is defined as follows

\left(\mathbf{P}_c^{\infty} F\right)(x) \triangleq \max\left\{ F(x) - c\, \mathbb{I}\{x < b\},\ 0 \right\},

\left(\mathbf{N}_c^{\infty} F\right)(x) \triangleq \min\left\{ F(x) + c\, \mathbb{I}\{x \geq a\},\ 1 \right\}.

The supremum ball $B_{\infty}(F_n, c_n^{\infty})$ consists of the CDFs within the area sandwiched by $\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ (see Figure 1). Since any risk measure $\mathbf{T}$ is monotonic, and

\mathbf{P}_{c_n^{\infty}}^{\infty} F_n(x) \leq G(x) \leq \mathbf{N}_{c_n^{\infty}}^{\infty} F_n(x), \quad \forall x \in \mathbb{R},\ \forall G \in B_{\infty}(F_n, c_n^{\infty}),

then $\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ are the maximizer and the minimizer, respectively. Another interpretation is that $\mathbf{P}_{c_n^{\infty}}^{\infty}$


Figure 1: $F_n$ (black), $\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ (blue) and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ (red).


Figure 2: $F_n$ (black) and $\mathbf{P}_{c_n^1}^1 F_n$ (red). $\mathbf{P}_{c_n^1}^1 F_n$ overlaps $F_n$ for $x < X_{(n^+)}$, and it has only two jumps, at $X_{(n^+)}$ and $b$, for $x \geq X_{(n^+)}$.

transports the leftmost atoms of $F_n$ with total mass $c_n^{\infty}$ to the maximal possible atom $b$, while $\mathbf{N}_{c_n^{\infty}}^{\infty}$ transports the rightmost atoms of $F_n$ with total mass $c_n^{\infty}$ to the minimal possible atom $a$. Although we can explicitly represent the optimal solutions in PMF form, it is more convenient to work with the CDF form. Please refer to Appendix F for more details.
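The operators of Theorem 4.1 act pointwise on the CDF, so they are immediate to implement; a minimal sketch (function names ours):

```python
import bisect

def edf(samples):
    """Return the empirical CDF F_n as a callable."""
    xs = sorted(samples)
    n = len(xs)
    return lambda x: bisect.bisect_right(xs, x) / n

def positive_op_sup(F, c, b):
    """(P_c^inf F)(x) = max{F(x) - c * 1{x < b}, 0}: lowers the CDF."""
    return lambda x: max(F(x) - (c if x < b else 0.0), 0.0)

def negative_op_sup(F, c, a):
    """(N_c^inf F)(x) = min{F(x) + c * 1{x >= a}, 1}: raises the CDF."""
    return lambda x: min(F(x) + (c if x >= a else 0.0), 1.0)
```

The sandwich property is easy to check numerically: $\mathbf{P}_c^{\infty} F_n(x) \leq F_n(x) \leq \mathbf{N}_c^{\infty} F_n(x)$ for all $x$.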

Remark 4.2. The positive operator in Equation 8 reduces to the optimistic operator introduced in the CVaR bandit/RL (Tamkin et al., 2019; Keramati et al., 2020). However, they only consider the case of CVaR, and we generalize it to arbitrary risk measures.

Theorem 4.3. For the risk measures in Section 2 except RDEU, the optimal solution to Formulation 5 is given by

\overline{F_n^1} = \mathbf{P}_{c_n^1}^{1} F_n, \qquad \underline{F_n^1} = \mathbf{N}_{c_n^1}^{1} F_n, \tag{9}

where $\mathbf{P}_c^1 / \mathbf{N}_c^1 : \mathcal{D}([a, b]) \to \mathcal{D}([a, b])$ is called the positive/negative operator for CDF with coefficient $c$ for the Wasserstein distance, which is defined as follows.

Fix $F_n$ and $c_n^1 > 0$. Let $X_{(1)} \leq X_{(2)} \leq \cdots \leq X_{(n)}$ be the order statistics of $\{X_i\}$. For $i \in [n]$, we recursively define

S_1^+ \triangleq \frac{1}{n}\left(b - X_{(n)}\right), \qquad S_i^+ \triangleq S_{i-1}^+ + \frac{1}{n}\left(b - X_{(n+1-i)}\right).

The geometric interpretation of $S_i^+$ is the area sandwiched between $F_n$ and the horizontal line $1 - \frac{i}{n}$ (see Figure 2). Define $i^+ \triangleq \min\{i : S_i^+ \geq c_n^1\}$ as the first index at which $S_i^+$ exceeds $c_n^1$. Let $n^+ \triangleq n + 1 - i^+$. Then $\mathbf{P}_{c_n^1}^1 F_n$ is a categorical distribution with atoms $\{X_{(i)}\}_{i \in [n^+]} \cup \{b\}$. The probability masses of $X_{(n^+)}$ and $b$ are assigned to be

p_{n^+} \triangleq \frac{1}{b - X_{(n^+)}}\left(S_{i^+}^+ - c_n^1\right), \qquad p_b \triangleq \frac{i^+}{n} - p_{n^+},

meanwhile the probability mass of each of the first $n^+ - 1$ atoms $\{X_{(i)}\}_{i \in [n^+ - 1]}$ remains $\frac{1}{n}$. To be more precise, $\mathbf{P}_{c_n^1}^1 F_n$ is described by the following probability mass function (PMF)

\frac{1}{n} \sum_{i=1}^{n^+ - 1} \delta_{X_{(i)}} + p_{n^+} \cdot \delta_{X_{(n^+)}} + p_b \cdot \delta_b.

The way of transforming $F_n$ into $\mathbf{P}_{c_n^1}^1 F_n$ resembles the well-known water-filling algorithm (Telatar, 1999) in wireless communications, run in the opposite direction. Imagine that gravity is reversed upward, and we fill water of amount $c_n^1$ into a tank enclosed by $F_n$ and the vertical line $x = b$. The water is filled sequentially into the bins from right to left, in which the $i$-th bin corresponds to $X_{(n+1-i)}$, until the water is used up at the $i^+$-th bin. By a volume argument, the water level is $\frac{n^+ - 1}{n} + p_{n^+}$. We then recover the analytic form from the shape of $\mathbf{P}_{c_n^1}^1 F_n$.
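The reverse water-filling construction above can be sketched in code as follows (our own illustration; we assume $0 < c_n^1$ does not exceed the total tank area $\frac{1}{n}\sum_i (b - X_{(i)})$ and that all samples lie strictly below $b$):

```python
def positive_op_w1(samples, c, b):
    """P_c^1 F_n in PMF form: reverse water-filling of area c toward b.
    Returns (atoms, probs)."""
    xs = sorted(samples)
    n = len(xs)
    S, i_plus = 0.0, None
    for i in range(1, n + 1):          # fill bins right-to-left
        S += (b - xs[n - i]) / n       # bin i corresponds to X_(n+1-i)
        if S >= c:
            i_plus = i
            break
    n_plus = n + 1 - i_plus
    p_boundary = (S - c) / (b - xs[n_plus - 1])   # mass kept at X_(n^+)
    p_b = i_plus / n - p_boundary                 # mass transported to b
    atoms = xs[:n_plus] + [b]
    probs = [1.0 / n] * (n_plus - 1) + [p_boundary, p_b]
    return atoms, probs
```

For samples $\{0,1,2,3\}$ with $b = 4$ and $c = 0.3$, the atom $3$ is removed entirely and part of the mass at $2$ joins it at $b$; one can verify the resulting PMF lies at $W_1$ distance exactly $0.3$ from $F_n$.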

Another interpretation is that $\mathbf{P}_{c_n^1}^1$ replaces the probability mass $\frac{1}{n} - p_{n^+}$ of $X_{(n^+)}$ and all the atoms to its right $\{X_{(i)}\}_{n^+ < i \leq n}$ by the upper bound $b$. For convenience, we let $X_{(0)} = a$. We recursively define for $i \in [n]$

S_1^- \triangleq \frac{X_{(n)} - X_{(n-1)}}{n}, \qquad S_i^- \triangleq S_{i-1}^- + \frac{i\left(X_{(n+1-i)} - X_{(n-i)}\right)}{n}.

Now $S_i^-$ represents the area sandwiched between $F_n$ and the vertical line $x = X_{(n-i)}$ (see Figure 3). Define $i^- \triangleq \min\{i : S_i^- \geq c_n^1\}$ as the first index at which $S_i^-$ exceeds $c_n^1$. Let $n^- \triangleq n + 1 - i^-$. Then $\mathbf{N}_{c_n^1}^1 F_n$ is a categorical distribution with atoms $\{X_{(i)}\}_{i \in [n^- - 1]} \cup \{b^-\}$, where $b^-$ is given by

b^- \triangleq X_{(n^- - 1)} + \frac{n}{i^-}\left(S_{i^-}^- - c_n^1\right).

$\mathbf{N}_{c_n^1}^1 F_n$ is given by the PMF

\frac{1}{n} \sum_{i=1}^{n^- - 1} \delta_{X_{(i)}} + \frac{i^-}{n} \cdot \delta_{b^-}.

$\mathbf{N}_{c_n^1}^1 F_n$ mirrors the water-filling, but we now fill the water rightward until it is used up, with final water level at $b^-$. It can also be interpreted as replacing the atoms $X_{(n^-)}, \ldots, X_{(n)}$, with a total mass of $\frac{i^-}{n}$, by a single atom $b^- \leq X_{(n^-)}$.
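Symmetrically, the negative operator can be sketched as follows (our own illustration; we assume $0 < c_n^1 \leq S_n^-$):

```python
def negative_op_w1(samples, c, a):
    """N_c^1 F_n in PMF form: collapse the rightmost i^-/n mass onto a single
    atom b^-, spending exactly Wasserstein budget c. Returns (atoms, probs)."""
    xs = sorted(samples)
    n = len(xs)
    ext = [a] + xs                       # ext[j] = X_(j), with X_(0) = a
    S, i_minus = 0.0, None
    for i in range(1, n + 1):
        S += i * (ext[n + 1 - i] - ext[n - i]) / n
        if S >= c:
            i_minus = i
            break
    n_minus = n + 1 - i_minus
    b_minus = ext[n_minus - 1] + (n / i_minus) * (S - c)
    atoms = xs[:n_minus - 1] + [b_minus]
    probs = [1.0 / n] * (n_minus - 1) + [i_minus / n]
    return atoms, probs
```

For samples $\{0,1,2,3\}$ with $a = 0$ and $c = 0.3$, the two largest atoms collapse onto $b^- = 1.9$, and the transport cost $\frac{1}{4}\big((2 - 1.9) + (3 - 1.9)\big) = 0.3$ matches the budget exactly.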


Figure 3: $F_n$ (black) and $\mathbf{N}_{c_n^1}^1 F_n$ (blue). $\mathbf{N}_{c_n^1}^1 F_n$ overlaps $F_n$ for $x < b^-$ and has a single jump at $b^-$ with height $\frac{i^-}{n}$.

Computational issues. We present the algorithms that implement Formulations 5-6 in practice in Appendix F. We demonstrate that their computational complexity is only slightly higher than that of the LC-based methods.

Remark 4.4. For both distances, we only require $F$ to be bounded above by a known constant $b$ to apply $\mathbf{P}_{c_n} F_n$, and require $F$ to be bounded below by a known constant $a$ to apply $\mathbf{N}_{c_n} F_n$. Thus we only require $F$ to be bounded on one side to obtain a one-sided confidence bound.

5. Improvement of Confidence Bounds

5.1. Derivation of the LLC

We present a systematic way of computing the LLC over the confidence ball $B(F_n, c_n)$, which bridges our framework and the GLC-based method. Define $\psi(t; F, G) \triangleq \mathbf{T}((1 - t)F + tG)$ for $F, G \in \mathcal{D}([a, b])$ and $t \in [0, 1]$. For simplicity, we may drop $F, G$ and write $\psi(t)$ when it is clear from the context. Note that $\psi(0) = \mathbf{T}(F)$, $\psi(1) = \mathbf{T}(G)$, and $(1 - t)F + tG \in \mathcal{D}([a, b])$ for all $t \in [0, 1]$. It can be shown that $\psi$ is continuously differentiable under some mild conditions on $\mathbf{T}$. Observe that

\begin{array}{l} L(\mathbf{T}; F_n, c_n) = \sup_{F, G \in B(F_n, c_n)} \frac{\mathbf{T}(F) - \mathbf{T}(G)}{\| F - G \|} = \sup_{F, G \in B(F_n, c_n)} \frac{\psi(1; F, G) - \psi(0; F, G)}{\| F - G \|} \\ \leq \sup_{F, G \in B(F_n, c_n),\, t \in [0,1]} \frac{\psi'(t; F, G)}{\| F - G \|} \leq \sup_{F, G \in B(F_n, c_n),\, t \in [0,1]} \pmb{v}((1 - t)F + tG) = \sup_{F \in B(F_n, c_n)} \pmb{v}(F), \end{array}

where $\pmb{v}$ is a functional that satisfies for any $F,G$

\psi'(t; F, G) \leq \pmb{v}((1 - t)F + tG)\,\|F - G\|.

Note that $\pmb{v}$ implicitly depends on $p$ and the risk measure $\mathbf{T}$. Consequently, we can obtain an upper bound on the LLC by bounding the last term, and an upper bound on the GLC by removing the ball constraint. Interested readers may refer to Table 4 in Appendix D for the functional $\pmb{v}$ for different risk measures. We use SRM as an example to illustrate the procedure.

5.1.1. AN EXAMPLE: SRM

Here we consider an alternative form of the SRM, $M_{\phi}(F) = \int_{a}^{b}\phi (F(x))\,x\,dF(x)$. In this case $\psi$ is continuously differentiable with derivative

\begin{aligned} \psi'(t) &= \frac{d}{dt}\int_{a}^{b}\phi\big(((1-t)F + tG)(x)\big)\,x\,d\big((1-t)F + tG\big)(x) \\ &= -\int_{a}^{b}(G - F)(x)\,\phi\big(((1-t)F + tG)(x)\big)\,dx \\ &\leq \|G - F\|_{p}\,\big\|\phi\big((1-t)F + tG\big)\big\|_{q}. \end{aligned}

We omit the details in the second equality and leave the full derivations to Appendix D. Since

Οˆβ€²(t;F,G)βˆ₯Fβˆ’Gβˆ₯p≀βˆ₯Ο•((1βˆ’t)F+tG)βˆ₯q, \frac {\psi^ {\prime} (t ; F , G)}{\| F - G \| _ {p}} \leq \| \phi ((1 - t) F + t G) \| _ {q},

where $\|\cdot\|_q$ is the dual norm of $\|\cdot\|_p$, we obtain $\pmb{v}(F) = \|\phi(F)\|_q$. Consider the case $p = \infty$. Since $\underline{F_n^\infty}\preceq F$ for all $F\in B_{\infty}(F_n,c_n^{\infty})$, we have $\phi\big(\underline{F_n^\infty}(x)\big)\geq \phi (F(x))$ for all $F\in B_{\infty}(F_n,c_n^{\infty})$ and all $x\in [a,b]$. It holds that

\begin{aligned} \max_{F\in B_{\infty}(F_n,c_n^{\infty})}\|\phi(F)\|_{1} &= \max_{F\in B_{\infty}(F_n,c_n^{\infty})}\int_{a}^{b}\phi(F(x))\,dx \\ &= \int_{a}^{b}\phi\big(\underline{F_n^{\infty}}(x)\big)\,dx = \big\|\phi\big(\underline{F_n^{\infty}}\big)\big\|_{1}. \end{aligned}

In contrast, the GLC can be bounded by choosing $F = \delta_{a}$:

\max_{F\in\mathcal{D}([a,b])}\int_{a}^{b}\phi(F(x))\,dx = (b-a)\,\|\phi\|_{\infty} = (b-a)\,\phi(1).

Following this principle, we obtain the LLCs for the other risk measures (cf. Table 2).

5.2. Improvement of Distribution Optimization Framework

Equation 7 qualitatively establishes that the confidence bounds derived from our framework are tighter than those based on the LLC:

\mathbf{T}\big(\overline{F_n}\big) - \mathbf{T}(F_n) \leq L(F_n, c_n)\,c_n.

Furthermore, we can quantitatively show the improvement

\begin{aligned} \mathbf{T}\big(\overline{F_n}\big) - \mathbf{T}(F_n) &= \psi\big(1; F_n, \overline{F_n}\big) - \psi\big(0; F_n, \overline{F_n}\big) \\ &\leq \max_{t\in[0,1]} \psi'\big(t; F_n, \overline{F_n}\big) \\ &\leq \max_{t\in[0,1]} \pmb{v}\big((1-t)F_n + t\,\overline{F_n}\big)\,c_n, \end{aligned}

where the last inequality follows from the definition of $\pmb{v}$ together with $\big\|F_n - \overline{F_n}\big\| \leq c_n$. The following also holds:

\mathbf{T}(F_n) - \mathbf{T}\big(\underline{F_n}\big) \leq \max_{t\in[0,1]} \pmb{v}\big((1-t)\underline{F_n} + tF_n\big)\,c_n.

Therefore, it is convenient to compare $\mathbf{T}\big(\overline{F_n}\big) - \mathbf{T}(F_n)$ with $L(F_{n},c_{n})c_{n}$; Table 3 presents this comparison for the supremum distance. For convenience, we normalize both quantities by $c_{n}$ and state the results for a general CDF $F$. Our UCBs are consistently and strictly tighter than the LLC-based bounds for the supremum distance. Due to the space limit, the results for the Wasserstein distance are deferred to Appendix D.

5.3. Illustrating example: CVaR

We use CVaR to illustrate Table 2 and Table 3. Table 2 compares the LLC with the GLC for different risk measures. The second row of Table 2 shows that the LLC and GLC of CVaR for the Wasserstein distance are identical. In addition, the GLC of CVaR for the supremum distance is $\frac{b - a}{\alpha}$, which is larger than its LLC $\frac{b - F^{-1}((1 - \alpha - c)^{+})}{\alpha}$.

Table 3 presents the improvement of our bound for the supremum distance. The second column in Table 3 implies

\begin{aligned} \frac{C_{\alpha}\big(\overline{F^{\infty}}\big) - C_{\alpha}(F)}{c} &= \frac{b - F^{-1}(1-\alpha)}{\alpha} \\ &< L_{\infty}(C_{\alpha}; F, c) = \frac{b - F^{-1}(1-\alpha-c)}{\alpha} \\ &< L_{\infty}(C_{\alpha}) = \frac{b-a}{\alpha}. \end{aligned}

Our upper bound is strictly tighter than the LLC-based and GLC-based bounds. The improvement depends on the distribution $F$ and the level $\alpha$. For small $\alpha$ and a distribution $F$ without a fat upper tail, $b - F^{-1}(1 - \alpha) \ll b - a$, leading to a much finer bound. In particular, consider a uniform distribution:

\begin{aligned} \frac{C_{\alpha}\big(\overline{F^{\infty}}\big) - C_{\alpha}(F)}{c} &= \frac{\alpha(b-a)}{\alpha} \\ &< L_{\infty}(C_{\alpha}; F, c) = \frac{(\alpha+c)(b-a)}{\alpha} \\ &< L_{\infty}(C_{\alpha}) = \frac{b-a}{\alpha}. \end{aligned}

For the uniform distribution, our bound improves upon the GLC-based and LLC-based bounds by factors of $1/\alpha$ and $(\alpha + c)/\alpha$, respectively.
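As a sanity check on the uniform-distribution comparison above, the three normalized bounds can be evaluated numerically; a minimal sketch with illustrative values $a=0$, $b=1$, $\alpha=0.1$, $c=0.05$ (the particular numbers are assumptions, not from the paper):

```python
# Normalized bounds on (C_alpha(F_bar) - C_alpha(F)) / c for Uniform(a, b).
a, b = 0.0, 1.0
alpha, c = 0.1, 0.05

ours = alpha * (b - a) / alpha        # our framework: alpha(b-a)/alpha = b - a
llc = (alpha + c) * (b - a) / alpha   # LLC-based bound: (alpha+c)(b-a)/alpha
glc = (b - a) / alpha                 # GLC-based bound: (b-a)/alpha

assert ours < llc < glc               # strict ordering of the three bounds
```

For these values the ordering is $1 < 1.5 < 10$, so the gap to the GLC-based bound is a full factor of $1/\alpha$.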

6. Application to Risk-sensitive Bandits

Now we consider risk-sensitive multi-armed bandit (MAB) problems. The quality of each arm is measured by the risk value of its loss distribution: the loss distribution of the $i$-th arm is denoted by $F_{i}$, and its risk value under $\mathbf{T}$ is $\mathbf{T}(F_i)$. The algorithm interacts

Table 2: Comparison between the LLC and the GLC

| RM | Local ($p=1$) | Global ($p=1$) | Improvement | Local ($p=\infty$) | Global ($p=\infty$) | Improvement |
| --- | --- | --- | --- | --- | --- | --- |
| CVaR | $1/\alpha$ | $1/\alpha$ | ✗ | $\frac{b - F_n^{-1}((1-\alpha-c)^{+})}{\alpha}$ | $\frac{b-a}{\alpha}$ | ✓ |
| SRM | $\phi(1)$ | $\phi(1)$ | ✗ | $\|\phi(\underline{F_n^{\infty}})\|_1$ | $(b-a)\,\phi(1)$ | ✓ |
| DRM | $\|g'\|_{\infty}$ | $\|g'\|_{\infty}$ | ✗ | $\|g'(1-\underline{F_n^{\infty}})\|_1$ | $(b-a)\,\|g'\|_{\infty}$ | ✓ |
| ERM | $\frac{\exp(\beta b)}{\int_a^b \exp(\beta x)\,d\underline{F_n^{1}}(x)}$ | $\exp(\beta(b-a))$ | ✓ | $\frac{\exp(\beta b)-\exp(\beta a)}{\beta\int_a^b \exp(\beta x)\,d\underline{F_n^{\infty}}(x)}$ | $\frac{\exp(\beta(b-a))-1}{\beta}$ | ✓ |
| RDEU | N/A | $2\|w'\|_{\infty}\|u'\|_{\infty}$ | N/A | $\|w'(\underline{F_n^{\infty}})u'\|_1$ | $\|w'\|_{\infty}\|u'\|_1$ | ✓ |

Table 3: Improvement of confidence bounds for supremum distance over LLC

| RM | CVaR | SRM | DRM | ERM | RDEU |
| --- | --- | --- | --- | --- | --- |
| $L_{\infty}(\mathbf{T}; F, c)$ | $\frac{b - F^{-1}(1-\alpha-c)}{\alpha}$ | $\|\phi(\underline{F^{\infty}})\|_1$ | $\|g'(1-\underline{F^{\infty}})\|_1$ | $\frac{\exp(\beta b)-\exp(\beta a)}{\beta\int_a^b \exp(\beta x)\,d\underline{F^{\infty}}(x)}$ | $\|w'(\underline{F^{\infty}})u'\|_1$ |
| $\frac{\mathbf{T}(\overline{F^{\infty}}) - \mathbf{T}(F)}{c}$ | $\frac{b - F^{-1}(1-\alpha)}{\alpha}$ | $\|\phi(F)\|_1$ | $\|g'(1-F)\|_1$ | $\frac{\exp(\beta b)-\exp(\beta a)}{\beta\int_a^b \exp(\beta x)\,dF(x)}$ | $\|w'(F)u'\|_1$ |
| Improvement | ✓ | ✓ | ✓ | ✓ | ✓ |

with a bandit instance $\nu = (F_{i})_{i\in [K]}$ for $N$ rounds. In each round $t\in [N]$, the algorithm $\pi$ chooses an arm $I_{t}\in [K]$ and observes the loss $X_{t}\sim F_{I_{t}}$. The performance of the algorithm is measured by the cumulative regret

\operatorname{Regret}(\pi,\nu,N) \triangleq \mathbb{E}\left[\sum_{t\in[N]}\left(\mathbf{T}\big(F_{I_t}\big) - \min_{i\in[K]}\mathbf{T}(F_i)\right)\right].

While UCB-type algorithms (Auer et al., 2002) are widely applied to risk-neutral MAB problems, they rely on concentration bounds for the mean. We propose a meta-algorithm (cf. Algorithm 1) that handles generic risk measures, where the LCB is derived from our framework. The algorithm maintains the EDF $\hat{F}_{i,t}$ for each arm $i$:

\hat{F}_{i,t} \triangleq \frac{1}{s_i(t)}\sum_{t'=1}^{t-1}\mathbb{I}\{X_{t'}\leq \cdot,\; I_{t'}=i\}, \tag{10}

where $s_i(t) \triangleq \sum_{t'=1}^{t-1}\mathbb{I}\{I_{t'}=i\}$ is the number of pulls of arm $i$ up to time $t$. For convenience, we assume the first arm is optimal, i.e., $\mathbf{T}(F_1) < \mathbf{T}(F_i)$ for all $i\neq 1$. When specializing the risk measure to CVaR, we obtain a distribution-dependent regret upper bound.

Proposition 6.1. The expected regret of Algorithm 1 on an instance $\nu \in \mathcal{D}([a,\infty))$ with $\mathbf{T} = C_{\alpha}$ is bounded as

\begin{aligned} &\operatorname{Regret}(\mathrm{LCB},\nu,N) \\ &\leq \frac{4\log(\sqrt{2}N)}{\alpha^{2}}\sum_{i>1}^{K}\frac{\big(b-F_{i}^{-1}(1-\alpha-2c_{i}^{*})\big)^{2}}{\Delta_{i}} + 3\sum_{i=1}^{K}\Delta_{i}, \end{aligned}

where the sub-optimality gap $\Delta_i \triangleq C_\alpha(F_i) - C_\alpha(F_1) <$

Algorithm 1 Lower Confidence Bound

1: Input: $N, a$
2: for round $t \in [K]$ do
3:   Pull arm $I_{t} \gets t$
4: end for
5: for round $t = K + 1, K + 2, \dots, N$ do
6:   Compute $\hat{F}_{i,t}$ via Equation 10
7:   Set $c_{i}(t) \gets \sqrt{\frac{\log(2KN^{2})}{s_{i}(t)}}$ for all $i \in [K]$
8:   $\underline{F}_{i,t}\gets \mathbf{N}_{c_i(t)}^{\infty}\hat{F}_{i,t}$
9:   $I_{t}\gets \arg\min_{i\in [K]}\mathbf{T}\big(\underline{F}_{i,t}\big)$
10: end for

$b - a$ , and $c_{i}^{*}$ is a $F_{i}$ -dependent constant that solves the equation $2\frac{b - F_i^{-1}(1 - \alpha - 2c)}{\alpha} c = \Delta_i$ .

The proof is deferred to Appendix E.
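To make lines 6-9 of Algorithm 1 concrete for $\mathbf{T} = C_{\alpha}$, the following sketch evaluates the CVaR of the shifted EDF $\min\{\hat{F}_{i,t} + c, 1\}$ (line 8) by integrating its quantile function over a grid. The grid-based integration is an illustrative approximation I introduce here, not the exact algorithm of Appendix F:

```python
import numpy as np

def cvar_lcb(samples, alpha, c, a):
    """CVaR of the shifted EDF min{F_hat + c, 1} (the negative operator for
    the supremum distance): a lower confidence bound on CVaR_alpha of the
    losses. The shift moves mass c to the known lower bound a, so the
    quantile function becomes F_hat^{-1}((u - c)^+), with value a at level 0."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    u = np.linspace(1 - alpha, 1, 2000, endpoint=False)  # levels in [1-alpha, 1)
    v = np.clip(u - c, 0.0, 1.0)
    k = np.ceil(v * n).astype(int)                       # EDF quantile index
    q = np.where(k >= 1, x[np.minimum(k, n) - 1], a)     # F_hat^{-1}((u - c)^+)
    return q.mean()  # approximates (1/alpha) * integral of the quantile function
```

In a round of Algorithm 1 one would then pull $I_t = \arg\min_i$ of this quantity, computed from arm $i$'s observed losses with radius $c_i(t) = \sqrt{\log(2KN^2)/s_i(t)}$.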

Remark 6.2. For CVaR, the meta-algorithm reduces to the CVaR-UCB algorithm (Tamkin et al., 2019). Interestingly, Tamkin et al. (2019) empirically observe that CVaR-UCB outperforms the GLC-based algorithm U-UCB (Cassel et al., 2018) by an order of magnitude, but they only provide a regret bound of

\frac{4\log(\sqrt{2}N)}{\alpha^{2}}\sum_{i>1}^{K}\frac{(b-a)^{2}}{\Delta_{i}} + 3\sum_{i=1}^{K}\Delta_{i},

which matches that of U-UCB. We fill this gap by quantifying the improvement in magnitude:

\sum_{i>1}\frac{\big(b - F_i^{-1}(1-\alpha-2c_i^{*})\big)^{2}}{\Delta_i^{2}} \Big/ \sum_{i>1}\frac{(b-a)^{2}}{\Delta_i^{2}} < 1.

Remark 6.3. Baudry et al. (2021) introduce B-CVTS, a Thompson sampling algorithm for CVaR bandits with bounded rewards, which is the first asymptotically optimal CVaR bandit algorithm. Notably, our main contribution is a framework for improving confidence bounds rather than designing optimal CVaR bandit algorithms. Moreover, CVaR-UCB has several advantages: it can compute the CVaR values via incremental updates, whereas B-CVTS needs to maintain and sample from posteriors over distributions, so the time and space complexity of CVaR-UCB is much lower. CVaR-UCB also applies to semi-unbounded distributions, e.g., the log-normal distribution, while B-CVTS assumes bounded rewards.

7. Numerical Experiments

To better visualize the benefits of our framework relative to the LLC- and GLC-based methods, we conduct a series of empirical comparisons. Details and complete figures are deferred to Appendix G.

Confidence bounds. We consider five different beta distributions and two risk measures, CVaR and ERM. Due to space limitations, we present results for one typical beta distribution and one particular distance in Figure 4. Our bounds are consistently tighter than the LC-based ones across risk measures and sample sizes.

CVaR bandits. We compare CVaR-UCB with the UCB algorithms based on the GLC (GLC-UCB) and the LLC (LLC-UCB) in Figure 5. CVaR-UCB outperforms both LLC-UCB and GLC-UCB.

8. Conclusion

We propose a distribution optimization framework to obtain improved confidence bounds for several risk measures. By viewing the solutions as certain transformations of the EDF, we design efficient algorithms to compute the confidence bounds. The tightness of our bounds is further illustrated via comparisons with the new LLC-based baseline method.

The major limitation is that our framework generally requires bounded distributions. However, it applies to semi-unbounded distributions for CVaR, SRM, DRM, and RDEU. It would be interesting to study the distribution optimization framework under more general assumptions, e.g., sub-Gaussian or sub-exponential distributions. Another promising direction is to generalize the framework to the multivariate setting; one may apply the multivariate DKW inequality to multivariate risk measures.


Figure 4: Comparisons of CIs for CVaR and ERM with varying sample sizes: (a) CVaR UCB via supremum distance; (b) CVaR LCB via supremum distance; (c) ERM UCB via Wasserstein distance; (d) ERM LCB via Wasserstein distance.
Figure 5: Cumulative CVaR-regret of CVaR-UCB (red), LLC-UCB (blue), and GLC-UCB (green).

Acknowledgements

We thank all the anonymous reviewers for their helpful comments and suggestions. The work of Zhi-quan Luo was supported in part by the National Key Research and Development Project under grant 2022YFA1003900 and in part by the Guangdong Provincial Key Laboratory of Big Data Computing.

References

Acerbi, C. Spectral measures of risk: A coherent representation of subjective risk aversion. Journal of Banking & Finance, 26(7):1505-1518, 2002.
Acerbi, C. and Tasche, D. On the coherence of expected shortfall. Journal of Banking & Finance, 26(7):1487-1503, 2002.
Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235-256, 2002.
Baudry, D., Gautron, R., Kaufmann, E., and Maillard, O. Optimal Thompson sampling strategies for support-aware CVaR bandits. In International Conference on Machine Learning, pp. 716-726. PMLR, 2021.
Bäuerle, N. and Rieder, U. More risk-sensitive Markov decision processes. Mathematics of Operations Research, 39(1):105-120, 2014.
Bhat, S. P. and LA, P. Concentration of risk measures: A Wasserstein distance approach. Advances in Neural Information Processing Systems, 32, 2019.
Brown, D. B. Large deviations bounds for estimating conditional value-at-risk. Operations Research Letters, 35(6): 722-730, 2007.
Cassel, A., Mannor, S., and Zeevi, A. A general approach to multi-armed bandits under risk criteria. In Conference On Learning Theory, pp. 1295–1306. PMLR, 2018.
Chandak, Y., Niekum, S., da Silva, B., Learned-Miller, E., Brunskill, E., and Thomas, P. S. Universal off-policy evaluation. Advances in Neural Information Processing Systems, 34:27475-27490, 2021.
Dvoretzky, A., Kiefer, J., and Wolfowitz, J. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, pp. 642-669, 1956.
FΓΆllmer, H. and Schied, A. Convex and coherent risk measures. Encyclopedia of Quantitative Finance, pp. 355-363, 2010.
FΓΆllmer, H. and Schied, A. Stochastic finance. In Stochastic Finance. de Gruyter, 2016.
Fournier, N. and Guillin, A. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):707-738, 2015.
Huang, A., Leqi, L., Lipton, Z., and Azizzadenesheli, K. Off-policy risk assessment in contextual bandits. Advances in Neural Information Processing Systems, 34: 23714-23726, 2021.

Keramati, R., Dann, C., Tamkin, A., and Brunskill, E. Being optimistic to be conservative: Quickly learning a CVaR policy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 4436-4443, 2020.
Kock, A. B., Preinerstorfer, D., and Veliyev, B. Functional sequential treatment allocation. Journal of the American Statistical Association, pp. 1-13, 2021.
Kolla, R. K., Prashanth, L., Bhat, S. P., and Jagannathan, K. Concentration bounds for empirical conditional value-at-risk: The unbounded case. Operations Research Letters, 47(1):16-20, 2019.
Krokhmal, P., Palmquist, J., and Uryasev, S. Portfolio optimization with conditional value-at-risk objective and constraints. Journal of Risk, 4:43-68, 2002.
LA, P. and Bhat, S. P. A Wasserstein distance approach for concentration of empirical risk estimates. Journal of Machine Learning Research, 23(238):1-61, 2022.
Liang, H. and Luo, Z.-Q. Regret bounds for risk-sensitive reinforcement learning with Lipschitz dynamic risk measures. arXiv preprint arXiv:2306.02399, 2023.
Massart, P. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, pp. 1269-1283, 1990.
Pandey, A. K., Prashanth, L., and Bhat, S. P. Estimation of spectral risk measures. arXiv preprint arXiv:1912.10398, 2019.
Prashanth, L., Jagannathan, K., and Kolla, R. K. Concentration bounds for CVaR estimation: The cases of light-tailed and heavy-tailed distributions. In Proceedings of the 37th International Conference on Machine Learning, pp. 5577-5586, 2020.
Quiggin, J. Generalized expected utility theory: The rank-dependent model. Springer Science & Business Media, 2012.
Rockafellar, R. T., Uryasev, S., et al. Optimization of conditional value-at-risk. Journal of Risk, 2:21-42, 2000.
Tamkin, A., Keramati, R., Dann, C., and Brunskill, E. Distributionally-aware exploration for CVaR bandits. In NeurIPS 2019 Workshop on Safety and Robustness in Decision Making, 2019.
Telatar, E. Capacity of multi-antenna Gaussian channels. European Transactions on Telecommunications, 10(6):585-595, 1999.
Thomas, P. and Learned-Miller, E. Concentration inequalities for conditional value at risk. In International Conference on Machine Learning, pp. 6225-6233. PMLR, 2019.

Wang, S. Premium calculation by transforming the layer premium density. ASTIN Bulletin: The Journal of the IAA, 26(1):71-92, 1996.
Wang, S. S. Cat bond pricing using probability transforms. Geneva Papers: Etudes et Dossiers, 278:19-29, 2004.
Wang, Y. and Gao, F. Deviation inequalities for an estimator of the conditional value-at-risk. Operations Research Letters, 38(3):236-239, 2010.
Weber, S. Distribution-invariant risk measures, information, and dynamic consistency. Mathematical Finance: An International Journal of Mathematics, Statistics and Financial Economics, 16(2):419-441, 2006.
Zhu, S. and Fukushima, M. Worst-case conditional value-at-risk with application to robust portfolio management. Operations Research, 57(5):1155-1168, 2009.

A. Table of Notation

| Symbol | Explanation |
| --- | --- |
| $\mathcal{D}$ | The space of all CDFs |
| $\mathcal{D}([a,b])$ | The space of all CDFs supported on $[a,b]$ |
| $B_p(F,c)$ | The $\|\cdot\|_p$ norm ball centered at $F$ with radius $c$ |
| $F_n$ | The empirical distribution function of $n$ samples from $F$ |
| $c_n^p$ | The confidence radius w.r.t. $\|\cdot\|_p$ for $n$ samples |
| $\mathbf{T}$ | Risk measure |
| $L_p(\mathbf{T})$ | The global Lipschitz constant of $\mathbf{T}$ w.r.t. $\|\cdot\|_p$ |
| $L_p(\mathbf{T};F,c)$ | The local Lipschitz constant of $\mathbf{T}$ w.r.t. $\|\cdot\|_p$ over $B_p(F_n, c_n^p)$ |
| $\overline{F_n^p}$ | The maximizer of Formulation 5 or 6 |
| $\underline{F_n^p}$ | The minimizer of Formulation 5 or 6 |
| $\mathbf{P}_c^1$ | The positive operator w.r.t. $\|\cdot\|_1$ with coefficient $c$ |
| $\mathbf{N}_c^1$ | The negative operator w.r.t. $\|\cdot\|_1$ with coefficient $c$ |
| $\mathbf{P}_c^{\infty}$ | The positive operator w.r.t. $\|\cdot\|_{\infty}$ with coefficient $c$ |
| $\mathbf{N}_c^{\infty}$ | The negative operator w.r.t. $\|\cdot\|_{\infty}$ with coefficient $c$ |
| $\nu$ | Bandit instance |
| $N$ | Number of total rounds |
| $K$ | Number of total arms |
| $\pi$ | Bandit algorithm |
| $s_i(t)$ | The number of pulls of arm $i$ up to time $t$ |

B. Risk Measures

Conditional Value at Risk (CVaR) CVaR (Rockafellar et al., 2000) is a popular risk measure in financial portfolio optimization (Krokhmal et al., 2002; Zhu & Fukushima, 2009). Formally, the CVaR value at level $\alpha \in (0,1)$ for a distribution $F$ is defined as

C_{\alpha}(F) \triangleq \inf_{\nu\in\mathbb{R}}\left\{\nu + \frac{1}{\alpha}\,\mathbb{E}_{X\sim F}\big[(X-\nu)^{+}\big]\right\}.

Acerbi & Tasche (2002) showed that when $F$ is a continuous distribution, $C_{\alpha}(F) = \mathbb{E}_{X\sim F}[X \mid X\geq F^{-1}(1 - \alpha)]$.
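The two characterizations can be checked against each other numerically. A minimal sketch, under the convention in which the CVaR at level $\alpha$ averages the worst $\alpha$-fraction of losses (penalty coefficient $1/\alpha$); the uniform losses and grid size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 20_000)   # losses drawn from Uniform[0, 1]
alpha = 0.1                          # average of the worst 10% of losses

# Variational form: inf_nu { nu + (1/alpha) * E[(X - nu)^+] }, on a nu-grid.
nus = np.linspace(0.0, 1.0, 201)
vals = nus + np.mean(np.clip(x[None, :] - nus[:, None], 0.0, None), axis=1) / alpha
cvar_variational = vals.min()

# Tail-average form: E[X | X >= F^{-1}(1 - alpha)].
cvar_tail = np.sort(x)[-int(alpha * len(x)):].mean()
```

Both estimates are close to the population value $0.95$ for Uniform$[0,1]$ at $\alpha = 0.1$.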

Spectral risk measure (SRM) SRM is a generalization of CVaR that adopts a non-constant weighting function (Acerbi, 2002). The SRM of $F$ is defined as

S_{\phi}(F) \triangleq \int_{0}^{1}\phi(y)\,F^{-1}(y)\,dy,

where $\phi :[0,1]\to [0,\infty)$ is a weighting function. $\phi$ is said to be admissible if it is increasing and satisfies $\int_0^1\phi (y)\,dy = 1$. Acerbi (2002) showed that $S_{\phi}(F)$ is a coherent risk measure if $\phi$ is admissible. SRM can be viewed as a weighted average of the quantiles $F^{-1}$, with weights specified by $\phi(y)$. In fact, $S_{\phi}(F)$ specializes to $C_\alpha (F)$ for $\phi (y) = \frac{1}{\alpha}\mathbb{I}\{y\geq 1 - \alpha\}$.
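As a quick numerical illustration of the quantile-average view, a sketch with an assumed top-$\alpha$ weighting $\phi(y) = \frac{1}{\alpha}\mathbb{I}\{y \ge 1-\alpha\}$ (normalized so that $\int_0^1 \phi = 1$), which reproduces the average of the worst $\alpha$-fraction of losses:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 100_000))   # sorted losses = empirical quantiles
alpha = 0.2

u = (np.arange(len(x)) + 0.5) / len(x)        # quantile levels for F^{-1}(u)
phi = (u >= 1 - alpha) / alpha                # increasing, integrates to 1
srm = np.mean(phi * x)                        # int_0^1 phi(u) F^{-1}(u) du

k = round(alpha * len(x))                     # number of tail samples
tail = x[-k:].mean()                          # mean of the worst alpha-fraction
```

Here `srm` and `tail` coincide up to floating-point error, and both are close to $0.9$ for Uniform$[0,1]$ at $\alpha = 0.2$.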

Distortion risk measure (DRM) DRM originated in insurance problems and was later applied to investment risks (Wang, 1996; 2004). For a distribution $F \in \mathcal{D}([0,\infty))$, the DRM $\rho_g(F)$ is defined as

\rho_{g}(F) \triangleq \int_{0}^{\infty} g(1 - F(x))\,dx,

where $g:[0,1] \to [0,1]$ is a continuous increasing function with $g(0) = 0$ and $g(1) = 1$, referred to as the distortion function. Similar to SRM, DRM recovers CVaR by setting $g(y) = \min \left(\frac{y}{\alpha}, 1\right)$.

Entropic risk measure (ERM) ERM is a well-known risk measure in risk-sensitive decision-making, including mathematical finance (Föllmer & Schied, 2016) and Markov decision processes (Bäuerle & Rieder, 2014). The ERM value of $F$

with coefficient $\beta \neq 0$ is defined as

U_{\beta}(F) \triangleq \frac{1}{\beta}\log\big(\mathbb{E}_{X\sim F}[\exp(\beta X)]\big) = \frac{1}{\beta}\log\left(\int_{\mathbb{R}}\exp(\beta x)\,dF(x)\right).

Certainty equivalent The certainty equivalent generalizes ERM by replacing the exponential utility with a more general function. Let $u$ be a continuous and strictly increasing function, so that its inverse $u^{-1}$ exists; the certainty equivalent $C_u(F)$ of $F$ associated with $u$ is given by

C_{u}(F) \triangleq u^{-1}\big(\mathbb{E}_{X\sim F}[u(X)]\big) = u^{-1}\left(\int_{\mathbb{R}} u(x)\,dF(x)\right).

The certainty equivalent $C_u(F)$ reduces to ERM when $u(x) = \exp(\beta x)$.
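This reduction is immediate to verify numerically (a minimal sketch; the Gaussian sample and coefficient are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10_000)
beta = 0.5

# ERM computed directly ...
erm = np.log(np.mean(np.exp(beta * x))) / beta

# ... and as a certainty equivalent with exponential utility u(x) = exp(beta x).
u = lambda t: np.exp(beta * t)
u_inv = lambda s: np.log(s) / beta
ce = u_inv(np.mean(u(x)))

assert abs(erm - ce) < 1e-12   # identical expressions by construction
```

For $X \sim \mathcal{N}(0,1)$ the population ERM is $\beta/2$, which the sample estimate approaches.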

Rank dependent expected utility (RDEU) RDEU value (Quiggin, 2012) of $F \in \mathcal{D}([a,b])$ is defined as

V(F) \triangleq \int_{a}^{b} v(x)\,d\,w(F(x)),

where $w:[0,1] \to [0,1]$ is an increasing weight function with $w(0) = 0$ and $w(1) = 1$, and $v:\mathbb{R} \to \mathbb{R}$ is a (possibly unbounded) increasing differentiable function with $v(0) = 0$.

C. Proof of Theorems

For CDFs $F,G\in \mathcal{D}$, the Wasserstein distance between them can be represented via their inverse distribution functions (IDFs):

W_{1}(F, G) = \int_{-\infty}^{\infty} |F(x) - G(x)|\,dx = \int_{0}^{1} \big|F^{-1}(y) - G^{-1}(y)\big|\,dy.

With slight abuse of notation, we write $W_{1}(F,G) = \|F^{-1} - G^{-1}\|_{1}$. In the following, we prove the theorems for the more general formulations

\max_{G\in\mathcal{D}([a,b])} \mathbf{T}(G) \quad \text{s.t.}\quad \|G-F\|_{1} \leq c \tag{11}

and

\max_{G\in\mathcal{D}([a,b])} \mathbf{T}(G) \quad \text{s.t.}\quad \|G-F\|_{\infty} \leq c \tag{12}

Observe that $\|G - F\|_1 = \|G^{-1} - F^{-1}\|_1$; thus we can recast Formulation 11 as

\max_{G\in\mathcal{D}([a,b])} \mathbf{T}(G^{-1}) \quad \text{s.t.}\quad \|G^{-1}-F^{-1}\|_{1} \leq c \tag{13}
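The identity $\|G - F\|_1 = \|G^{-1} - F^{-1}\|_1$ underlying this recast can be checked numerically for empirical CDFs; a sketch (for two EDFs with the same number of atoms, the quantile integral is simply the mean of sorted differences):

```python
import numpy as np

def w1_quantile(x, y):
    """int_0^1 |F^{-1} - G^{-1}| for two equal-size empirical distributions."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def w1_cdf(x, y):
    """int |F - G| dx, integrating the step functions over the merged grid."""
    z = np.sort(np.concatenate([x, y]))
    Fz = np.searchsorted(np.sort(x), z, side="right") / len(x)
    Gz = np.searchsorted(np.sort(y), z, side="right") / len(y)
    return np.sum(np.abs(Fz - Gz)[:-1] * np.diff(z))
```

Both functions return the same value for any pair of equal-size samples, as the identity asserts.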

Proposition C.1. For risk measures ERM and CE defined in Section 2, the optimal solution to Formulation 11 is given by

\overline{F^{1}} = \mathbf{P}_{c}^{1}F, \qquad \underline{F}^{1} = \mathbf{N}_{c}^{1}F, \tag{14}

where $\mathbf{P}_{c}^{1}/\mathbf{N}_{c}^{1}: \mathcal{D}([a, b]) \to \mathcal{D}([a, b])$ is the positive/negative operator for CDFs for the Wasserstein distance, which is defined as follows.

Define $S^{+}(F,x) \triangleq \int_{x}^{b} (F(z) - F(x))\,dz$. Geometrically, it is the area sandwiched between $F$ and the constant level $F(x)$ to the right of $x$ (see Figure 4(a)). Notice that $S^{+}$ may be discontinuous w.r.t. $x$ since $F$ can be discontinuous ($S^{+}(x) < S^{+}(x^{-})$). For $c > 0$, define $g^{+}(F,c) \triangleq \min\{x \geq a : S^{+}(F,x) \leq c\} \in [a,b)$. For simplicity, we drop $F$ from the notation when it is clear from the context. Given $F \in \mathcal{D}([a,b])$ as input, $\mathbf{P}_c^1$ outputs the CDF

\big(\mathbf{P}_{c}^{1}F\big)(x) \triangleq \begin{cases} F(g^{+}(c)) - \dfrac{c - S^{+}(g^{+}(c))}{b - g^{+}(c)}, & x\in[g^{+}(c),b),\\ F(x), & \text{otherwise}. \end{cases}

Analogously, define $S^{-}(F,x) \triangleq \int_{x}^{b} (1 - F(z))\,dz$ and $g^{-}(F,c) \triangleq \min\{x \geq a : S^{-}(F,x) \leq c\} \in [a,b)$; we again omit $F$ for simplicity. Note that $S^{-}$ is continuous w.r.t. $x$, hence $g^{-}(c)$ solves $S^{-}(g^{-}(c)) = c$. For $F \in \mathcal{D}([a,b])$, $\mathbf{N}_c^1$ outputs the CDF

\big(\mathbf{N}_{c}^{1}F\big)(x) \triangleq \begin{cases} 1, & x\in[g^{-}(c),b),\\ F(x), & \text{otherwise}. \end{cases}

Proposition C.2. For risk measures CVaR, SRM and DRM defined in Section 2, the optimal solution to Formulation 13 is given by

\big(\overline{F^{1}}\big)^{-1} = \mathbf{P}_{c}^{1}F^{-1}, \qquad \big(\underline{F}^{1}\big)^{-1} = \mathbf{N}_{c}^{1}F^{-1}, \tag{15}

where we overload notation and denote by $\mathbf{P}_c^1 / \mathbf{N}_c^1 : (\mathcal{D}([a, b]))^{-1} \to (\mathcal{D}([a, b]))^{-1}$ the positive/negative operator for IDFs for the Wasserstein distance.

We overload notation and define $S^{+}(F^{-1},y) \triangleq \int_{y}^{1}\big(b - F^{-1}(z)\big)\,dz$. For $c > 0$, define $g^{+}(F^{-1},c) \triangleq \max\{y : S^{+}(F^{-1},y) \geq c\} \in (0,1)$. For simplicity, we drop $F$ from the notation when it is clear from the context. Given $F^{-1} \in (\mathcal{D}([a,b]))^{-1}$ as input, $\mathbf{P}_c^1$ outputs the IDF

\big(\mathbf{P}_{c}^{1}F^{-1}\big)(y) \triangleq \begin{cases} b, & y\in(g^{+}(c),1],\\ F^{-1}(y), & \text{otherwise}. \end{cases}

Analogously, define $S^{-}(F^{-1},y) \triangleq \int_y^1 \big(F^{-1}(z) - F^{-1}(y)\big)\,dz$ and $g^{-}(F^{-1},c) \triangleq \max\{y : S^{-}(F^{-1},y) \geq c\} \in (0,1)$ for $c > 0$. Notice that $S^{-}$ may be discontinuous w.r.t. $y$ ($S^{-}(y^{+}) < S^{-}(y)$). Given $F^{-1} \in (\mathcal{D}([a,b]))^{-1}$ as input, $\mathbf{N}_c^1$ outputs the IDF

\big(\mathbf{N}_{c}^{1}F^{-1}\big)(y) \triangleq \begin{cases} F^{-1}(g^{-}(c)) + \dfrac{S^{-}(g^{-}(c)) - c}{1 - g^{-}(c)}, & y\in(g^{-}(c),1],\\ F^{-1}(y), & \text{otherwise}. \end{cases}

Proposition C.3. For any risk measure, the optimal solution to Formulation 12 is given by

\overline{F^{\infty}} = \mathbf{P}_{c}^{\infty}F, \qquad \underline{F}^{\infty} = \mathbf{N}_{c}^{\infty}F, \tag{16}

where $\mathbf{P}_c^\infty /\mathbf{N}_c^\infty :\mathcal{D}([a,b])\to \mathcal{D}([a,b])$ is the positive/negative operator with coefficient $c > 0$ for the supremum distance, which is defined as follows

\big(\mathbf{P}_{c}^{\infty}F\big)(x) \triangleq \max\big\{F(x) - c\,\mathbb{I}\{x\in[a,b)\},\; 0\big\},

\big(\mathbf{N}_{c}^{\infty}F\big)(x) \triangleq \min\big\{F(x) + c\,\mathbb{I}\{x\in[a,b)\},\; 1\big\}.

Figure 4 illustrates how the operators defined in Propositions C.1-C.3 transform a typical continuous CDF $F \in \mathcal{D}([a, b])$.
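For the supremum distance, the operators are simple clips of the EDF, which makes the transformed distributions easy to compute; a sketch evaluating all three step functions at the sorted sample points (the sample values are illustrative):

```python
import numpy as np

def sup_operators(samples, c):
    """EDF F, P_c^inf F = max{F - c, 0}, and N_c^inf F = min{F + c, 1},
    evaluated at the sorted sample points; the clipped-away mass implicitly
    sits at the known support endpoints b (for P) and a (for N)."""
    x = np.sort(np.asarray(samples, dtype=float))
    F = np.arange(1, len(x) + 1) / len(x)
    return x, F, np.maximum(F - c, 0.0), np.minimum(F + c, 1.0)

x, F, P, N = sup_operators([0.3, 0.7, 0.1, 0.9], c=0.2)
```

By Proposition C.3, evaluating $\mathbf{T}$ at these two transformed CDFs yields the optimal values of Formulation 12 and its minimization counterpart.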

C.1. Proof of Proposition C.1

We only provide the proof for CE because ERM is a special case of CE by choosing $u(x) = \exp (\beta x)$ . For simplicity, we write the optimization problem as

\max_{G\in\mathcal{D}([a,b])} C_{u}(G) \quad \text{s.t.}\quad \|G-F\|_{1} \leq c.

Proof. We consider the maximization problem first. The objective function can be written as

C_{u}(G) = u^{-1}\left(\int_{a}^{b} u(x)\,dG(x)\right) = u^{-1}\left(u(b) - \int_{a}^{b} G(x)\,u'(x)\,dx\right).

Since $u^{-1}$ is monotonically increasing, we can reformulate the original optimization problem as

\min_{G} \int_{a}^{b} G(x)\,u'(x)\,dx \quad \text{s.t.}\quad G \in B_{1}(F,c).

We claim that the optimizer satisfies $G^{*}(x) \leq F(x)$ for all $x \in [a, b]$ (intuitively, since the ball constraint $G \in B_1(F, c)$ is symmetric about $F$, any budget spent above $F$ is wasted). Suppose for contradiction that $G^{*}(y) > F(y)$ for $y$ in a union of disjoint intervals $\cup_i I_i = \cup_i (a_i, b_i)$, and $G^{*}(y) \leq F(y)$ otherwise. We can choose

H(y) = \begin{cases} \max\{2F(y) - G^{*}(y),\, F(a_i)\}, & y\in I_i,\\ G^{*}(y), & \text{otherwise}, \end{cases}

which satisfies the ball constraint. However, $C_u(H) > C_u(G^*)$ since $\int_{I_i} H(x) u'(x)\,dx \leq \int_{I_i} F(x) u'(x)\,dx < \int_{I_i} G^*(x) u'(x)\,dx$, which contradicts the optimality of $G^{*}$. Hence $G^*(x) \leq F(x)$ for all $x \in [a, b]$. Define

\tilde{G}(x) \triangleq \big(\mathbf{P}_{c}^{1}F\big)(x) = \begin{cases} F(g^{+}(c)) - \dfrac{c - S^{+}(g^{+}(c))}{b - g^{+}(c)}, & x\in[g^{+}(c),b),\\ F(x), & \text{otherwise}. \end{cases}

Notice that the minimization problem has a linear objective and a convex ball constraint; thus an optimal solution lies on the boundary of the ball. It suffices to consider the following optimization problem

\min_{G} \int_{a}^{b} G(x)\,u'(x)\,dx \quad \text{s.t.}\quad \|F-G\|_{1} = c,\; G \succeq F.

It is easy to check that $\tilde{G}$ satisfies the constraints. It remains to show that $\int_{a}^{b}\tilde{G}(x)u'(x)\,dx\leq \int_{a}^{b}G(x)u'(x)\,dx$ for any feasible $G$. Consider a feasible $G\neq \tilde{G}$. It is clear that $G(b^{-})\geq \tilde{G}(g^{+}(c))$; otherwise $\|F - G\|_1 = \int_a^b (F(x) - G(x))\,dx > \int_a^b (F(x) - \tilde{G}(x))\,dx = c$, contradicting $G\in B_1(F,c)$. Moreover, if $G(b^{-}) = \tilde{G}(g^{+}(c))$ then $G = \tilde{G}$; hence $G(b^{-}) > \tilde{G}(g^{+}(c))$. We also have $G(g^{+}(c)) < \tilde{G}(g^{+}(c))$, since otherwise $\|F - G\|_1 < \|F - \tilde{G}\|_1 = c$. Since $G$ is monotonically increasing and right continuous, there exists $g'(c)\in (g^{+}(c),b)$ such that $G(x) < \tilde{G}(x) = \tilde{G}(g^{+}(c))$ for $x\in [g^{+}(c),g'(c))$ and $G(x) > \tilde{G}(x) = \tilde{G}(g^{+}(c))$ for $x\in (g'(c),b)$. Moreover, $G(x)\leq \tilde{G}(x) = F(x)$ for $x\in [a,g^{+}(c))$. It follows that

$$\begin{aligned} \int_a^b G(x)u'(x)\,dx - \int_a^b \tilde{G}(x)u'(x)\,dx &= \int_a^{g'(c)} \big(G(x) - \tilde{G}(x)\big)u'(x)\,dx + \int_{g'(c)}^b \big(G(x) - \tilde{G}(x)\big)u'(x)\,dx \\ &\geq u'(g'(c))\int_{g'(c)}^b \big(G(x) - \tilde{G}(x)\big)\,dx - u'(g'(c))\int_a^{g'(c)} \big(\tilde{G}(x) - G(x)\big)\,dx \\ &= 0. \end{aligned}$$

The last equality follows from $\|F - G\|_1 - \|F - \tilde{G}\|_1 = \int_a^b \big(F(x) - G(x)\big)dx - \int_a^b \big(F(x) - \tilde{G}(x)\big)dx = \int_a^b \big(\tilde{G}(x) - G(x)\big)dx = 0.$

Similarly, we can prove that the optimal solution to the maximization problem is given by

$$\left(\mathbf{N}_c^1 F^{-1}\right)(y) \triangleq \begin{cases} F^{-1}(g^-(c)) + \frac{S^-(g^-(c)) - c}{1 - g^-(c)}, & y \in (g^-(c), 1], \\ F^{-1}(y), & \text{otherwise}. \end{cases}$$

C.2. Proof of Proposition C.2

We only provide proof for SRM and DRM because CVaR is a special case of SRM or DRM.

C.2.1. PROOF FOR SRM

Proof. The objective function is given by

$$M_\phi(G) = \int_0^1 \phi(y) G^{-1}(y)\,dy,$$

Table 4: $\mathbf{v}\left( F\right)$

| RM | $v(F)$ |
| --- | --- |
| CVaR | $\frac{1}{\alpha}\big\|\mathbb{I}\{F(\cdot) \geq 1-\alpha\}\big\|_q$ |
| SRM | $\|\phi(F)\|_q$ |
| DRM | $\|g'(1-F)\|_q$ |
| ERM | $\|\exp(\beta\,\cdot)\|_q \big/ \int_a^b \exp(\beta x)\,dF(x)$ |
| CE | $\|u'\|_q\,(u^{-1})'\big(\int_a^b u(x)\,dF(x)\big)$ |
| RDEU | $\|w'(F)u'\|_q$ |

which is linear in $G^{-1}$. Meanwhile, the constraint $\|G^{-1} - F^{-1}\|_1 \leq c$ is also a convex ball constraint w.r.t. $G^{-1}$. Furthermore, $\phi(y)$ is an increasing function. Using analogous arguments to the proof of Theorem 4.3 completes the proof.

C.2.2. PROOF FOR DRM

Proof. By a change of variable $y = F(x)$ , the DRM can be represented as

$$\rho_g(G) = \int_0^1 g(1-y)\,dG^{-1}(y) = g(1-y)G^{-1}(y)\Big|_0^1 - \int_0^1 G^{-1}(y)\,dg(1-y) = -a + \int_0^1 G^{-1}(y)\,g'(1-y)\,dy.$$

Again, the objective function is linear in $G^{-1}$ , and the constraint is also a convex ball constraint. Besides, $g'(1 - y)$ is increasing in $y$ since $g$ is concave. Using analogous arguments to the proof of Theorem 4.3 completes the proof.

C.3. Proof of Proposition C.3

Consider an arbitrary $F \in \mathcal{D}([a,b])$ and $c > 0$. It is easy to verify that $\mathbf{P}_c^\infty F \in B_\infty(F,c)$. For $x \in [a,b)$, we have

$$\left(\mathbf{P}_c^\infty F\right)(x) = \max\{F(x) - c, 0\} \leq G(x), \quad \forall G \in B_\infty(F, c).$$

Besides, $\left(\mathbf{P}_c^\infty F\right)(b) = G(b) = 1$ for any $G \in B_\infty(F, c)$. Therefore $G \preceq \mathbf{P}_c^\infty F$ for any $G \in B_\infty(F, c)$. The result follows from the monotonicity of $\mathbf{T}$.
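The two supremum-distance transformations are simple pointwise clips of the CDF. A minimal sketch (our illustration, not the paper's code) evaluates both on a discretized CDF and checks the squeeze used above:

```python
import numpy as np

def sup_ball_extremes(cdf, c):
    """Pointwise extreme CDFs of the sup-norm ball B_inf(F, c).

    `cdf` holds the values of F on a grid over [a, b); at b itself every
    CDF in the ball equals 1 regardless of c.
    """
    lower = np.maximum(cdf - c, 0.0)  # P_c^inf F: smallest CDF in the ball
    upper = np.minimum(cdf + c, 1.0)  # N_c^inf F: largest CDF in the ball
    return lower, upper

# Any G in the ball is squeezed between the two transforms.
F = np.linspace(0.0, 1.0, 101)                    # CDF of Uniform[0, 1] on a grid
lower, upper = sup_ball_extremes(F, 0.2)
G = np.clip(F + 0.1 * np.sin(10 * F), 0.0, 1.0)   # a perturbed CDF inside the ball
assert np.max(np.abs(G - F)) <= 0.2
assert np.all(lower <= G) and np.all(G <= upper)
```

The clipped curves are exactly the CDFs whose risk-measure values bound every member of the ball, which is what Proposition C.3 exploits.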

D. Derivations of Results in Section 5

D.1. Identification of $\nu$

We list the functional $v$ for different risk measures in Table 4. The functional $v$ is crucial for computing the LLC and for showing the tightness of our method. We provide detailed derivations of $v$ in the following. Since CVaR (resp. ERM) is a special case of SRM (resp. CE), we omit the derivations for CVaR and ERM. Recall that $v((1-t)F + tG, p)$ is a functional satisfying, for any $F, G$,

$$\psi'(t; F, G) \leq v\big((1-t)F + tG,\, p\big)\,\|F - G\|_p.$$

D.1.1. SRM

Consider $M_{\phi}$ in the form of

$$M_\phi(F) = \int_a^b \phi(F(x))\,x\,dF(x),$$

where $\phi$ is increasing and integrates to 1. $\psi$ is continuously differentiable with derivative

$$\begin{aligned} \psi'(t; F, G) &= \frac{d}{dt}\int_a^b \phi\big(((1-t)F + tG)(x)\big)\,x\,d((1-t)F + tG)(x) \\ &= \frac{d}{dt}\left[\int_a^b \phi\big(((1-t)F + tG)(x)\big)\,x\,dF(x) + t\int_a^b \phi\big(((1-t)F + tG)(x)\big)\,x\,d(G - F)(x)\right] \\ &= \underbrace{\int_a^b \phi'\big(((1-t)F + tG)(x)\big)(G(x) - F(x))\,x\,dF(x)}_{(a)} + \underbrace{\int_a^b \phi\big(((1-t)F + tG)(x)\big)\,x\,d(G - F)(x)}_{(b)} \\ &\quad + \underbrace{t\int_a^b \phi'\big(((1-t)F + tG)(x)\big)(G(x) - F(x))\,x\,d(G - F)(x)}_{(c)}. \end{aligned}$$

Since

$$\begin{aligned} (b) &= \phi\big(((1-t)F + tG)(x)\big)\,x\,(G - F)(x)\Big|_a^b - \int_a^b (G - F)(x)\,d\big[\phi\big(((1-t)F + tG)(x)\big)\,x\big] \\ &= -\int_a^b (G - F)(x)\,d\big[\phi\big(((1-t)F + tG)(x)\big)\,x\big] \end{aligned}$$

and

$$(c) = \int_a^b (G - F)(x)\,x\,d\phi\big(((1-t)F + tG)(x)\big) - \int_a^b \phi'\big(((1-t)F + tG)(x)\big)(G(x) - F(x))\,x\,dF(x),$$

We have

$$\psi'(t) = (a) + (b) + (c) = -\int_a^b (G - F)(x)\,\phi\big(((1-t)F + tG)(x)\big)\,dx = \langle G - F,\, -\phi((1-t)F + tG)\rangle \leq \|G - F\|_p\,\|\phi((1-t)F + tG)\|_q.$$

Hence we can choose $v(F, p) = \|\phi(F)\|_q$.

D.1.2. DRM

A distortion risk measure associated with distortion function $g$ for a distribution $F$ is

$$\rho_g(F) = \int_a^b g(1 - F(x))\,dx,$$

where $g:[0,1] \to [0,1]$ is a non-decreasing function with $g(0) = 0$ and $g(1) = 1$ . Thus $g'$ is non-negative. $\psi$ is continuously differentiable with derivative

$$\begin{aligned} \psi'(t; F, G) &= \frac{d}{dt}\int_a^b g(1 - (1-t)F(x) - tG(x))\,dx = \int_a^b g'(1 - (1-t)F(x) - tG(x))(F(x) - G(x))\,dx \\ &= \langle F - G,\, g'(1 - (1-t)F - tG)\rangle \leq \|F - G\|_p\,\|g'(1 - (1-t)F - tG)\|_q. \end{aligned}$$

Hence we can choose $v(F, p) = \|g'(1 - F)\|_q$.

D.1.3. CE

For a CE $C_u$ , we define $E_{u}(F) \triangleq \int u(x)dF(x)$ . Notice that $E_{u}$ is a linear functional, i.e.,

$$E_u((1-t)F + tG) = (1-t)E_u(F) + tE_u(G)$$

for any $F, G \in \mathcal{D}([a, b])$. $\psi$ for $E_u$ is continuously differentiable with derivative

$$\begin{aligned} \psi'(t; F, G) &= \frac{d}{dt} E_u((1-t)F + tG) = \frac{d}{dt}\big[(1-t)E_u(F) + tE_u(G)\big] = E_u(G) - E_u(F) \\ &= \int_a^b u(x)\,dG(x) - \int_a^b u(x)\,dF(x) \\ &= u(x)G(x)\Big|_a^b - \int_a^b G(x)\,du(x) - u(x)F(x)\Big|_a^b + \int_a^b F(x)\,du(x) \\ &= \int_a^b \big(F(x) - G(x)\big)\,du(x) = \langle F - G,\, u'\rangle, \end{aligned}$$

where the last equality follows from that $F(b) = G(b) = 1$ and $F(a) = G(a) = 0$ . The certainty equivalent of a distribution $F$ with utility function $u$ is defined as $C_u(F) = u^{-1}(E_u(F))$ . It follows that

$$\psi'(t; F, G) = \left(u^{-1}\right)'\big(E_u((1-t)F + tG)\big)\cdot\langle F - G,\, u'\rangle \leq \left(u^{-1}\right)'\big(E_u((1-t)F + tG)\big)\cdot\|F - G\|_p\,\|u'\|_q.$$

Hence we can choose $v(F, p) = \left(u^{-1}\right)'(E_u(F))\,\|u'\|_q$.

D.1.4. RDEU

Let $w:[0,1] \to [0,1]$ be an increasing weight function such that $w(0) = 0$ and $w(1) = 1$ . Let $u:\mathbb{R} \to \mathbb{R}$ be an (unbounded) increasing differentiable function with $u(0) = 0$ . The RDEU value of $F \in \mathcal{D}([a,b])$ is given by

$$V(F) = \int_a^b u(x)\,dw(F(x)) = u(x)w(F(x))\Big|_a^b - \int_a^b w(F(x))\,du(x) = u(b) - \int_a^b w(F(x))u'(x)\,dx.$$

We have

$$\begin{aligned} \psi'(t; F, G) &= \frac{d}{dt}\left[u(b) - \int_a^b w((1-t)F(x) + tG(x))u'(x)\,dx\right] = -\int_a^b w'((1-t)F(x) + tG(x))(G(x) - F(x))u'(x)\,dx \\ &= \langle F - G,\, w'((1-t)F + tG)u'\rangle \leq \|F - G\|_p\,\|w'((1-t)F + tG)u'\|_q. \end{aligned}$$

Hence we can choose $v(F, p) = \|w'(F)u'\|_q$.

D.2. Derivation of the LLC

In Section 5.1, we have shown that

$$\begin{aligned} L_p(\mathbf{T}; F_n, c_n^p) &= \sup_{F,G \in B_p(F_n, c_n^p)} \frac{\mathbf{T}(F) - \mathbf{T}(G)}{\|F - G\|_p} = \sup_{F,G \in B_p(F_n, c_n^p)} \frac{\psi(1; F, G) - \psi(0; F, G)}{\|F - G\|_p} \\ &\leq \sup_{F,G \in B_p(F_n, c_n^p),\, t \in [0,1]} \frac{\psi'(t; F, G)}{\|F - G\|_p} \leq \sup_{F,G \in B_p(F_n, c_n^p),\, t \in [0,1]} v\big((1-t)F + tG,\, p\big) = \sup_{F \in B_p(F_n, c_n^p)} v(F, p). \end{aligned}$$

We have derived the $v$ function in the previous subsection, which enables us to obtain an upper bound on $L_p(\mathbf{T}; F, c)$. We only provide the derivations for DRM, CE, and RDEU here because SRM is presented as an illustrative example in Section 5.1. We first give a useful fact for dealing with the Wasserstein distance.

Fact 3. Fix real numbers $a < b$. For any $G \in \mathcal{D}([a, b])$ and any $c > 0$, there exists a continuous and strictly increasing distribution $F \in \mathcal{D}([a, b])$ such that $\|F - G\|_1 \leq c$.

D.2.1. DRM

For DRM, $v(F,p) = \|g'(1 - F)\|_q$. Since $g$ is concave, $g'$ is monotonically decreasing.

Case $p = 1$ . By Fact 3, there exists a continuous CDF $F \in B_{1}(F_{n}, r_{n}^{1})$ . Such $F$ attains all possible value in [0, 1], hence

$$\max_{F \in B_1(F_n, c_n^1)} \|g'(1 - F)\|_\infty = \max_{F \in B_1(F_n, c_n^1)} \max_{x \in \mathbb{R}} g'(1 - F(x)) = \max_{y \in [0,1]} g'(y) = \|g'\|_\infty.$$

Case $p = \infty$. Since

$$\max_{F \in B_\infty(F_n, c_n^\infty)} \|g'(1 - F)\|_1 = \max_{F \in B_\infty(F_n, c_n^\infty)} \int_a^b g'(1 - F(x))\,dx,$$

it follows that

$$\max_{F \in B_\infty(F_n, c_n^\infty)} \int_a^b g'(1 - F(x))\,dx = \left\|g'\big(1 - \mathrm{N}_{c_n^\infty}^\infty F_n\big)\right\|_1.$$

D.2.2. CE

For CE, $v(F,p) = \left(u^{-1}\right)'(E_u(F))\,\|u'\|_q$. Since $u$ is convex, $\left(u^{-1}\right)'$ is a decreasing function. It follows that

$$\sup_{F \in B_p(F_n, c_n^p)} \left(u^{-1}\right)'(E_u(F))\,\|u'\|_q = \left(u^{-1}\right)'\big(E_u(\mathrm{N}_{c_n^p}^p F_n)\big)\,\|u'\|_q.$$

D.2.3. RDEU

For RDEU, $v(F,p) = \|w'(F)u'\|_q$. Recall that $w:[0,1] \to [0,1]$ is an increasing weight function with $w(0) = 0$ and $w(1) = 1$, and $u:\mathbb{R} \to \mathbb{R}$ is an (unbounded) increasing differentiable function with $u(0) = 0$. We further assume that $w$ is convex so that $w'$ is an increasing function.

Case $p = 1$ . By Fact 3, there exists a continuous CDF $F \in B_{1}(F_{n}, r_{n}^{1})$ . Such $F$ attains all possible value in [0, 1], hence

$$\max_{F \in B_1(F_n, c_n^1)} \|w'(F)u'\|_\infty = \max_{F \in B_1(F_n, c_n^1)} \max_{x \in \mathbb{R}} w'(F(x))\,u'(x),$$

which might not admit a closed form in general. Meanwhile, the GLC (which holds without monotonicity) is

$$L_1 = \max_{F \in \mathcal{D}([a,b])} \|w'(F)u'\|_\infty = \|w'\|_\infty\,\|u'\|_\infty.$$

Case $p = \infty$ . The following holds

$$\max_{F \in B_\infty(F_n, c_n^\infty)} \|w'(F)u'\|_1 = \max_{F \in B_\infty(F_n, c_n^\infty)} \int_a^b w'(F(x))u'(x)\,dx = \left\|w'\big(\mathrm{N}_{c_n^\infty}^\infty F_n\big)u'\right\|_1.$$

The global Lipschitz constant (which holds without monotonicity) is

$$L_\infty = \max_{F \in \mathcal{D}([a,b])} \|w'(F)u'\|_1 = \|w'\|_\infty\,\|u'\|_1.$$

The last equality uses that, for any $d \in [0,1]$, $F(x) = d$ for all $x \in (a,b)$ when $F = d\psi_a + (1-d)\psi_b$ (a shifted Bernoulli distribution).

D.3. Improved Confidence Bound

In Section 5.2, we established that

$$\begin{aligned} \mathbf{T}\big(\mathrm{P}_{c_n^p}^p F_n\big) - \mathbf{T}(F_n) &= \psi\big(1; F_n, \mathrm{P}_{c_n^p}^p F_n\big) - \psi\big(0; F_n, \mathrm{P}_{c_n^p}^p F_n\big) \\ &\leq \max_{t \in [0,1]} \psi'\big(t; F_n, \mathrm{P}_{c_n^p}^p F_n\big) \leq \max_{t \in [0,1]} v\big((1-t)F_n + t\,\mathrm{P}_{c_n^p}^p F_n,\, p\big)\,c_n^p \end{aligned}$$

as well as

$$\mathbf{T}(F_n) - \mathbf{T}\big(\mathrm{N}_{c_n^p}^p F_n\big) \leq \max_{t \in [0,1]} v\big((1-t)\mathrm{N}_{c_n^p}^p F_n + tF_n,\, p\big)\,c_n^p.$$

The above inequalities lead to the following bounds.

D.3.1. SUPREMUM DISTANCE

Proposition D.1 (CE). For $F \in \mathcal{D}([a, b])$, it holds that (assume $u$ convex)

$$C_u\big(\mathrm{P}_c^\infty F\big) - C_u(F) \leq \|u'\|_1\,(u^{-1})'\left(\int_a^b u(x)\,dF(x)\right)\cdot c \leq \|u'\|_1\,(u^{-1})'\left(\int_a^b u(x)\,d\mathrm{N}_c^\infty F(x)\right)\cdot c,$$

$$C_u(F) - C_u\big(\mathrm{N}_c^\infty F\big) \leq \|u'\|_1\,(u^{-1})'\left(\int_a^b u(x)\,d\mathrm{N}_c^\infty F(x)\right)\cdot c.$$

Corollary D.2 (ERM). For $F \in \mathcal{D}([a, b])$, it holds that (assume $\beta > 0$)

$$U_\beta\big(\mathrm{P}_c^\infty F\big) - U_\beta(F) \leq \frac{\exp(\beta b) - \exp(\beta a)}{\beta \int_a^b \exp(\beta x)\,dF(x)}\cdot c \leq \frac{L_\infty(U_\beta)\,c}{\int_a^b \exp(\beta x)\,d\mathrm{N}_c^\infty F(x)},$$

$$U_\beta(F) - U_\beta\big(\mathrm{N}_c^\infty F\big) \leq \frac{L_\infty(U_\beta)\,c}{\int_a^b \exp(\beta x)\,d\mathrm{N}_c^\infty F(x)}.$$

Proposition D.3 (SRM). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $\phi$ increasing)

$$M_\phi\big(\mathrm{P}_c^\infty F\big) - M_\phi(F) \leq \int_a^b \phi(F(x))\,dx \cdot c \leq \int_a^b \phi\big(\mathrm{N}_c^\infty F(x)\big)\,dx \cdot c = L_\infty(M_\phi; F, c)\,c,$$

$$M_\phi(F) - M_\phi\big(\mathrm{N}_c^\infty F\big) \leq \int_a^b \phi\big(\mathrm{N}_c^\infty F(x)\big)\,dx \cdot c = L_\infty(M_\phi; F, c)\,c.$$

Proposition D.4 (DRM). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $g$ concave)

ρg(Pc∞F)βˆ’Οg(F)β‰€βˆ«abgβ€²(1βˆ’F(x))dxβ‹…cβ‰€βˆ«abgβ€²(1βˆ’Nc∞F(x))dxβ‹…c=L∞(ρg;F,c)c, \rho_ {g} (\mathrm {P} _ {c} ^ {\infty} F) - \rho_ {g} (F) \leq \int_ {a} ^ {b} g ^ {\prime} (1 - F (x)) d x \cdot c \leq \int_ {a} ^ {b} g ^ {\prime} (1 - \mathrm {N} _ {c} ^ {\infty} F (x)) d x \cdot c = L _ {\infty} (\rho_ {g}; F, c) c,

ρg(F)βˆ’Οg(Nc∞F)β‰€βˆ«abgβ€²(1βˆ’Nc∞F(x))dxβ‹…c=L∞(ρg;F,c)c. \rho_ {g} (F) - \rho_ {g} (\mathrm {N} _ {c} ^ {\infty} F) \leq \int_ {a} ^ {b} g ^ {\prime} (1 - \mathrm {N} _ {c} ^ {\infty} F (x)) d x \cdot c = L _ {\infty} (\rho_ {g}; F, c) c.

Corollary D.5 (CVaR). For $F \in \mathcal{D}([a, b])$ , it holds that

$$C_\alpha\big(\mathrm{P}_c^\infty F\big) - C_\alpha(F) \leq \frac{b - F^{-1}(1-\alpha)}{\alpha}\,c \leq \frac{b - F^{-1}(1-\alpha-c)}{\alpha}\,c = L_\infty(C_\alpha; F, c)\,c,$$

$$C_\alpha(F) - C_\alpha\big(\mathrm{N}_c^\infty F\big) \leq \frac{b - F^{-1}(1-\alpha-c)}{\alpha}\,c = L_\infty(C_\alpha; F, c)\,c.$$
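These CVaR bounds can be checked numerically. The sketch below is our illustration (quantile-grid integration is an approximation, not the paper's code); it uses the fact that the quantile function of $\mathrm{N}_c^\infty F$ is $y \mapsto F^{-1}(y - c)$ for $y > c$:

```python
import numpy as np

def emp_quantile(x_sorted, y):
    """Left-continuous empirical quantile F_n^{-1}(y) for y in (0, 1]."""
    n = len(x_sorted)
    idx = np.clip(np.ceil(np.asarray(y) * n).astype(int) - 1, 0, n - 1)
    return x_sorted[idx]

def cvar_shifted(x_sorted, alpha, shift):
    """CVaR_alpha of N_shift^inf F_n via its quantile y -> F_n^{-1}(y - shift),
    approximated on a uniform grid over (1 - alpha, 1)."""
    ys = np.linspace(1 - alpha, 1, 4001, endpoint=False)
    return emp_quantile(x_sorted, ys - shift).mean()

x = np.linspace(0.0, 1.0, 101)            # deterministic "samples" on [0, 1]
alpha, c, b = 0.2, 0.05, 1.0
gap = cvar_shifted(x, alpha, 0.0) - cvar_shifted(x, alpha, c)
bound = (b - emp_quantile(x, 1 - alpha - c)) / alpha * c
assert 0.0 <= gap <= bound + 1e-3         # Corollary D.5, up to grid error
```

For these uniform samples the gap is roughly $c$ while the bound is $(b - F^{-1}(1-\alpha-c))\,c/\alpha \approx 0.0625$, so the inequality holds with room to spare.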

Proposition D.6 (RDEU). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $w$ convex)

$$V\big(\mathrm{P}_c^\infty F\big) - V(F) \leq \int_a^b w'(F(x))u'(x)\,dx \cdot c \leq \int_a^b w'\big(\mathrm{N}_c^\infty F(x)\big)u'(x)\,dx \cdot c = L_\infty(V; F, c)\,c,$$

$$V(F) - V\big(\mathrm{N}_c^\infty F\big) \leq \int_a^b w'\big(\mathrm{N}_c^\infty F(x)\big)u'(x)\,dx \cdot c.$$

D.3.2. WASSERSTEIN DISTANCE

Proposition D.7 (SRM). For $F \in \mathcal{D}([a, b])$ , it holds that

$$M_\phi\big(\mathrm{P}_c^1 F\big) - M_\phi(F) = \int_{g^+(c)}^1 \big(\mathrm{P}_c^1 F^{-1}(y) - F^{-1}(y)\big)\phi(y)\,dy \leq \phi(1)\,c = L_1(M_\phi; F, c)\,c,$$

$$M_\phi(F) - M_\phi\big(\mathrm{N}_c^1 F\big) = \int_{g^-(c)}^1 \big(F^{-1}(y) - \mathrm{N}_c^1 F^{-1}(y)\big)\phi(y)\,dy \leq \phi(1)\,c.$$

Proposition D.8 (DRM). For $F \in \mathcal{D}([a, b])$ , it holds that

ρg(Pc1F)βˆ’Οg(F)=∫g+(c)1(Pc1Fβˆ’1(y)βˆ’Fβˆ’1(y))gβ€²(1βˆ’y)dy≀gβ€²(0)c=L1(ρg;F,c)c \rho_ {g} (\mathrm {P} _ {c} ^ {1} F) - \rho_ {g} (F) = \int_ {g ^ {+} (c)} ^ {1} (\mathrm {P} _ {c} ^ {1} F ^ {- 1} (y) - F ^ {- 1} (y)) g ^ {\prime} (1 - y) d y \leq g ^ {\prime} (0) c = L _ {1} (\rho_ {g}; F, c) c

ρg(F)βˆ’Οg(Nc1F)=∫gβˆ’(c)1(Fβˆ’1(y)βˆ’Nc1Fβˆ’1(y))gβ€²(1βˆ’y)dy≀gβ€²(0)c. \rho_ {g} (F) - \rho_ {g} (\mathrm {N} _ {c} ^ {1} F) = \int_ {g ^ {-} (c)} ^ {1} (F ^ {- 1} (y) - \mathrm {N} _ {c} ^ {1} F ^ {- 1} (y)) g ^ {\prime} (1 - y) d y \leq g ^ {\prime} (0) c.

Corollary D.9 (CVaR). For $F \in \mathcal{D}([a,b])$ , it holds that

$$C_\alpha\big(\mathrm{P}_c^1 F\big) - C_\alpha(F) = \frac{c}{\alpha} = L_1(C_\alpha; F, c)\,c,$$

$$C_\alpha(F) - C_\alpha\big(\mathrm{N}_c^1 F\big) = \frac{c}{\alpha} = L_1(C_\alpha; F, c)\,c.$$
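To see where the $c/\alpha$ identity comes from, recall that CVaR is the DRM with the concave distortion $g(y) = \min\{y/\alpha, 1\}$, for which $g'(1-y) = \frac{1}{\alpha}\mathbb{I}\{y > 1-\alpha\}$. Assuming $c$ is small enough that the transformed quantile function differs from $F^{-1}$ only above level $1-\alpha$ (i.e. $g^+(c) \geq 1-\alpha$), the integral representation in Proposition D.8 reduces to

$$C_\alpha\big(\mathrm{P}_c^1 F\big) - C_\alpha(F) = \frac{1}{\alpha}\int_{g^+(c)}^1 \big(\mathrm{P}_c^1 F^{-1}(y) - F^{-1}(y)\big)\,dy = \frac{c}{\alpha},$$

since the quantile difference integrates to exactly the Wasserstein budget $c$; the $\mathrm{N}_c^1$ direction is symmetric.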

Proposition D.10 (CE). For $F \in \mathcal{D}([a, b])$ , it holds that

$$C_u\big(\mathrm{P}_c^1 F\big) - C_u(F) \leq \|u'\|_\infty\,(u^{-1})'\left(\int_a^b u(x)\,dF(x)\right)\cdot c \leq \|u'\|_\infty\,(u^{-1})'\left(\int_a^b u(x)\,d\mathrm{N}_c^1 F(x)\right)\cdot c,$$

$$C_u(F) - C_u\big(\mathrm{N}_c^1 F\big) \leq \|u'\|_\infty\,(u^{-1})'\left(\int_a^b u(x)\,d\mathrm{N}_c^1 F(x)\right)\cdot c.$$

Corollary D.11 (ERM). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $\beta > 0$ )

$$U_\beta\big(\mathrm{P}_c^1 F\big) - U_\beta(F) \leq \frac{\exp(\beta b)}{\int_a^b \exp(\beta x)\,dF(x)}\cdot c \leq \frac{L_1(U_\beta)\,c}{\int_a^b \exp(\beta x)\,d\mathrm{N}_c^1 F(x)},$$

$$U_\beta(F) - U_\beta\big(\mathrm{N}_c^1 F\big) \leq \frac{L_1(U_\beta)\,c}{\int_a^b \exp(\beta x)\,d\mathrm{N}_c^1 F(x)}.$$

E. Proof of Proposition 6.1

Proof. Observe that the regret decomposes as $\mathrm{Regret}(\mathrm{LCB}, \nu, T) = \sum_{i=1}^K \Delta_i \mathbb{E}[N_i(T)]$. We will bound $\mathbb{E}[N_i(T)]$ for each suboptimal arm $i \neq 1$. Recall that $\underline{F}_{i,t} = \mathrm{N}_{c_i(t)}^\infty \hat{F}_{i,t}$. Denote by $\hat{F}_i^s$ the empirical CDF of arm $i$ after observing $s$ samples, so that $\hat{F}_{i,t} = \hat{F}_i^{N_i(t)}$. Without loss of generality, we assume the first arm is the optimal arm, i.e.,

$$C_\alpha(F_1) \leq C_\alpha(F_i), \quad \forall i \in [K].$$

Define the good event for arm $i$ as

$$G_i = \Big\{C_\alpha(F_1) > \max_{t \in [T]} C_\alpha(\underline{F}_{1,t})\Big\} \cap \Big\{C_\alpha\Big(\mathrm{N}_{\sqrt{\log(2/\delta)/(2u_i)}}^\infty \hat{F}_i^{u_i}\Big) > C_\alpha(F_1)\Big\}.$$

We claim that if $G_i$ occurs, then $N_i(T) \leq u_i$. We prove the claim by contradiction. Suppose $N_i(T) > u_i$; then there exists some round $t \in [T]$ such that $N_i(t) = u_i$ and $I_t = i$. It follows that

$$C_\alpha(\underline{F}_{i,t}) = C_\alpha\big(\mathrm{N}_{c_i(t)}^\infty \hat{F}_{i,t}\big) = C_\alpha\Big(\mathrm{N}_{\sqrt{\log(2/\delta)/(2u_i)}}^\infty \hat{F}_i^{u_i}\Big) > C_\alpha(F_1) > C_\alpha(\underline{F}_{1,t}),$$

where the inequalities come from the definition of $G_{i}$ . Hence $I_{t} = \arg \min_{i\in [K]}C_{\alpha}(\underline{F}_{i,t})\neq i$ , which leads to a contradiction. Using the tower property,

$$\mathbb{E}[N_i(T)] = \mathbb{E}[N_i(T) \mid G_i]\,\mathbb{P}(G_i) + \mathbb{E}[N_i(T) \mid G_i^c]\,\mathbb{P}(G_i^c) \leq u_i + T\,\mathbb{P}(G_i^c).$$

Next, we show that $\mathbb{P}(G_i^c)$ is small. By using union bound, we have

$$\mathbb{P}(G_i^c) \leq \mathbb{P}\Big(C_\alpha(F_1) \leq \max_{t \in [T]} C_\alpha(\underline{F}_{1,t})\Big) + \mathbb{P}\Big(C_\alpha\Big(\mathrm{N}_{\sqrt{\log(2/\delta)/(2u_i)}}^\infty \hat{F}_i^{u_i}\Big) \leq C_\alpha(F_1)\Big).$$

By Theorem 4.1, for any $t \in [T]$, if $F_1 \in B(\hat{F}_{1,t}, c_1(t))$ then $\underline{F}_{1,t} = \mathrm{N}_{c_1(t)}^\infty \hat{F}_{1,t} \succeq F_1$, and $C_\alpha(\underline{F}_{1,t}) \leq C_\alpha(F_1)$. Hence the first term on the r.h.s. can be bounded as

$$\begin{aligned} \mathbb{P}\Big(C_\alpha(F_1) \leq \max_{t \in [T]} C_\alpha(\underline{F}_{1,t})\Big) &= \mathbb{P}\big(\exists t \in [T]: C_\alpha(F_1) \leq C_\alpha(\underline{F}_{1,t})\big) \\ &\leq \mathbb{P}\bigg(\exists t \in [T]: \big\|F_1 - \hat{F}_{1,t}\big\|_\infty \geq \sqrt{\tfrac{\log(2/\delta)}{2N_1(t)}}\bigg) \\ &\leq \mathbb{P}\bigg(\bigcup_{s \in [T]}\Big\{\big\|F_1 - \hat{F}_1^s\big\|_\infty \geq \sqrt{\tfrac{\log(2/\delta)}{2s}}\Big\}\bigg) \leq T\delta, \end{aligned}$$

where the last inequality follows from a union bound and the DKW inequality. Define $c_i \triangleq \sqrt{\frac{\log(2/\delta)}{2u_i}}$. By Corollary D.5, we have $C_\alpha(F) - C_\alpha(\mathrm{N}_c^\infty F) \leq \frac{b - F^{-1}(1-\alpha-c)}{\alpha}\,c$. If the event $\big\{\|\hat{F}_i^{u_i} - F_i\|_\infty < c_i\big\}$ occurs, then

$$C_\alpha(\hat{F}_i^{u_i}) - C_\alpha\big(\mathrm{N}_{c_i}^\infty \hat{F}_i^{u_i}\big) \leq \frac{b - (\hat{F}_i^{u_i})^{-1}(1-\alpha-c_i)}{\alpha}\,c_i$$

and

$$C_\alpha(F_i) - C_\alpha(\hat{F}_i^{u_i}) \leq C_\alpha\big(\mathrm{P}_{c_i}^\infty \hat{F}_i^{u_i}\big) - C_\alpha(\hat{F}_i^{u_i}) \leq \frac{b - (\mathrm{P}_{c_i}^\infty \hat{F}_i^{u_i})^{-1}(1-\alpha-c_i)}{\alpha}\,c_i.$$

Combining these inequalities,

$$\begin{aligned} C_\alpha(F_i) - C_\alpha\big(\mathrm{N}_{c_i}^\infty \hat{F}_i^{u_i}\big) &\leq \left(\frac{b - (\hat{F}_i^{u_i})^{-1}(1-\alpha-c_i)}{\alpha} + \frac{b - (\mathrm{P}_{c_i}^\infty \hat{F}_i^{u_i})^{-1}(1-\alpha-c_i)}{\alpha}\right) c_i \\ &\leq 2\,\frac{b - (\hat{F}_i^{u_i})^{-1}(1-\alpha-c_i)}{\alpha}\,c_i \leq 2\,\frac{b - F_i^{-1}(1-\alpha-2c_i)}{\alpha}\,c_i. \end{aligned}$$

We choose $u_{i}$ such that $\Delta_{i} = C_{\alpha}(F_{i}) - C_{\alpha}(F_{1}) \geq 2\frac{b - F_{i}^{-1}(1 - \alpha - 2c_{i})}{\alpha} c_{i}$ , then the second term on the r.h.s. can be bounded as

$$\begin{aligned} \mathbb{P}\big(C_\alpha(\mathrm{N}_{c_i}^\infty \hat{F}_i^{u_i}) \leq C_\alpha(F_1)\big) &= \mathbb{P}\big(C_\alpha(F_i) - C_\alpha(\mathrm{N}_{c_i}^\infty \hat{F}_i^{u_i}) \geq \Delta_i\big) \\ &\leq \mathbb{P}\Big(C_\alpha(F_i) - C_\alpha(\mathrm{N}_{c_i}^\infty \hat{F}_i^{u_i}) \geq 2\,\tfrac{b - F_i^{-1}(1-\alpha-2c_i)}{\alpha}\,c_i\Big) \\ &\leq \mathbb{P}\big(\|\hat{F}_i^{u_i} - F_i\|_\infty \geq c_i\big) \leq \delta. \end{aligned}$$

Hence, the probability of $G_{i}^{c}$ is bounded as $\mathbb{P}(G_i^c)\leq (T + 1)\delta$ . It follows that

$$\mathbb{E}[N_i(T)] \leq u_i + T(T+1)\delta.$$

Define $h_i(c) \triangleq 2\frac{b - F_i^{-1}(1 - \alpha - 2c)}{\alpha} c$ . Let $c_i^* \triangleq h_i^{-1}(\Delta_i)$ be the solution to the equation

$$h_i(c) = 2\,\frac{b - F_i^{-1}(1-\alpha-2c)}{\alpha}\,c = \Delta_i.$$

Note that $c_i^*$ is a distribution-dependent constant. We let $u_i := \left\lceil \frac{\log(2 / \delta)}{2(c_i^*)^2} \right\rceil$ and let $\delta = \frac{1}{T^2}$ :

$$\mathbb{E}[N_i(T)] \leq \left\lceil \frac{\log(2T^2)}{2(c_i^*)^2}\right\rceil + 2 \leq \frac{\log(\sqrt{2}\,T)}{(c_i^*)^2} + 3.$$

Substituting it into the regret decomposition, we get

$$\sum_{i=1}^K \Delta_i\,\mathbb{E}[N_i(T)] \leq \log(\sqrt{2}\,T)\sum_{i=2}^K \frac{\Delta_i}{(c_i^*)^2} + 3\sum_{i=1}^K \Delta_i = \frac{4\log(\sqrt{2}\,T)}{\alpha^2}\sum_{i=2}^K \frac{\big(b - F_i^{-1}(1-\alpha-2c_i^*)\big)^2}{\Delta_i} + 3\sum_{i=1}^K \Delta_i.$$
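The arithmetic in the last two displays rests on $\lceil x\rceil \leq x + 1$, $\log(2T^2)/2 = \log(\sqrt{2}\,T)$, and $T(T+1)\delta = (T+1)/T \leq 2$ for $\delta = 1/T^2$. A throwaway numeric sanity check of this arithmetic (ours, not part of the proof):

```python
import math

for T in (10, 100, 10_000):
    for c_star in (0.05, 0.1, 0.3):       # stand-in values of c_i^*
        delta = 1.0 / T**2
        u_i = math.ceil(math.log(2 / delta) / (2 * c_star**2))
        lhs = u_i + T * (T + 1) * delta   # from E[N_i(T)] <= u_i + T(T+1)*delta
        rhs = math.log(math.sqrt(2) * T) / c_star**2 + 3
        assert lhs <= rhs + 1e-9
```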

F. Algorithms

We present several algorithms that output the confidence bounds for a given risk measure, taking $n$ i.i.d. samples and a confidence radius as input. Algorithm 2 and Algorithm 3 compute the UCB and LCB of a risk measure via the Wasserstein distance, respectively, while Algorithm 4 and Algorithm 5 compute the UCB and LCB via the supremum distance.

Algorithm 2 Wasserstein upper confidence bound
1: Input: $b$, samples $\mathbf{X} = X_1, X_2, \dots, X_n$, risk measure $\mathbf{T}$, $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$
3: Initialize $S_1 = \frac{b - X_{(n)}}{n}$
4: for $i = 1:n$ do
5: if $S_i \leq c$ then
6: $S_{i+1} = S_i + \frac{1}{n}(b - X_{(n-i)})$
7: else
8: $n' = n + 1 - i$
9: break
10: end if
11: end for
12: $p_{n'} = \frac{S_i - c}{b - X_{(n')}}$, $p_b = \frac{i}{n} - p_{n'}$
13: $\overline{F_n^1} = \frac{1}{n}\sum_{i=1}^{n'-1}\mathbb{I}\{X_{(i)} \leq \cdot\} + p_{n'}\mathbb{I}\{X_{(n')} \leq \cdot\} + p_b\mathbb{I}\{b \leq \cdot\}$
14: Output: $\mathbf{T}\big(\overline{F_n^1}\big)$
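A runnable sketch of Algorithm 2, specialized to $\mathbf{T} = \mathrm{CVaR}_\alpha$ for concreteness (our illustration; the atom/weight bookkeeping below is an equivalent rephrasing of lines 3-13 above): mass is moved from the largest samples to $b$ until the Wasserstein budget $c$ is spent, and the risk measure is evaluated on the transformed empirical distribution.

```python
import numpy as np

def cvar(atoms, weights, alpha):
    """Upper-tail CVaR_alpha of a discrete distribution."""
    order = np.argsort(atoms)[::-1]          # largest atoms first
    tail, total = alpha, 0.0
    for x, w in zip(np.asarray(atoms)[order], np.asarray(weights)[order]):
        take = min(w, tail)
        total += take * x
        tail -= take
        if tail <= 0:
            break
    return total / alpha

def wasserstein_ucb_cvar(samples, b, c, alpha):
    """Algorithm 2 sketch: evaluate CVaR on the transformed empirical CDF."""
    x = np.sort(np.asarray(samples, dtype=float))
    w = np.full(len(x), 1.0 / len(x))
    budget, moved = c, 0.0                   # moved = probability mass sent to b
    for k in range(len(x) - 1, -1, -1):      # push the largest samples first
        cost = w[k] * (b - x[k])             # W1 cost of moving this atom to b
        if cost <= budget:
            budget -= cost
            moved += w[k]
            w[k] = 0.0
        else:                                # partial move of the boundary atom
            frac = budget / (b - x[k]) if b > x[k] else 0.0
            moved += frac
            w[k] -= frac
            break
    return cvar(np.append(x, b), np.append(w, moved), alpha)

# Consistent with Corollary D.9: the UCB exceeds the empirical CVaR by c / alpha.
samples = [0.0, 0.0, 0.0, 0.0]
assert wasserstein_ucb_cvar(samples, 1.0, 0.0, 0.5) == 0.0
assert abs(wasserstein_ucb_cvar(samples, 1.0, 0.1, 0.5) - 0.2) < 1e-9
```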

Algorithm 3 Wasserstein lower confidence bound
1: Input: $a$ , samples $X = X_{1}, X_{2}, \dots, X_{n}$ , risk measure $T$ , $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$
3: Initialize $S_{1} = \frac{X_{(n)} - X_{(n-1)}}{n}$
4: for $i = 1:n-1$ do
5: if $S_{i} \leq c$ then
6: $S_{i+1} = S_{i} + \frac{i+1}{n}(X_{(n-i)} - X_{(n-1-i)})$
7: else
8: $n' = n+1-i$
9: break
10: end if
11: end for
12: $b^{-} = X_{(n'-1)} + \frac{n(S_{i}-c)}{i}$
13: $\underline{F_{n}^{1}} = \frac{1}{n}\sum_{j=1}^{n'-1}\mathbb{I}\{X_{(j)} \leq \cdot\} + \frac{i}{n}\mathbb{I}\{b^{-} \leq \cdot\}$
14: Output: $T\left(\underline{F_{n}^{1}}\right)$
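Similarly, a sketch of Algorithm 3's transformation (names are our own; we assume the budget $c$ is exhausted before all atoms merge, as the pseudocode implicitly does):

```python
import numpy as np

def wasserstein_lcb_dist(samples, c):
    """Merge the largest order statistics downward until the Wasserstein-1
    budget c is exhausted; returns (atoms, weights)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    # cost[k-1] = S_k: W1 cost of merging the top-k atoms down to X_(n-k)
    steps = (np.arange(1, n) / n) * np.diff(x)[::-1]
    cost = np.cumsum(steps)
    assert cost[-1] > c, "budget too large: all atoms would merge"
    i = int(np.searchsorted(cost, c, side="right")) + 1  # break index: S_i > c
    S_i = cost[i - 1]
    # the merged atom sits between X_(n'-1) and X_(n'), where n' = n + 1 - i
    b_minus = x[n - i - 1] + n * (S_i - c) / i
    atoms = np.append(x[: n - i], b_minus)
    weights = np.append(np.full(n - i, 1.0 / n), i / n)
    return atoms, weights
```

Mirroring the UCB case, the mean of the returned distribution sits exactly $c$ below the empirical mean, since all movement is downward.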

F.1. Time Complexity

We start with Algorithm 2. The sorting of $n$ samples incurs $\mathcal{O}(n\log n)$. The for-loop costs $\mathcal{O}(n)$ since the cost in each iteration is $\mathcal{O}(1)$. Therefore the total time complexity is $\mathcal{O}(n\log n + n) = \mathcal{O}(n\log n)$. The same argument shows that the time complexity of Algorithm 3-Algorithm 5 is also $\mathcal{O}(n\log n)$.

F.2. Space Complexity

Consider Algorithm 2. The space complexity of storing the samples is $\mathcal{O}(n)$. In addition, storing $S_{i}$, $p_{n'}$, and $p_{b}$ costs $\mathcal{O}(n)$. The total space complexity is $\mathcal{O}(n)$. It is easy to check that the space complexity of Algorithm 3-Algorithm 5 is also $\mathcal{O}(n)$.

G. Experiments

G.1. Confidence Bounds

We consider five beta distributions with different parameters; the specific parameters $(A,B)$ are shown above each figure. Unless otherwise specified, we always use $N = 10^{5}$ samples, $\alpha = 0.05$, $\beta = 1$, and $\delta = 0.05$. For convenience, we use $c_{n}^{1} = (b - a)c_{n}^{\infty}$. We plot the CIs for ERM and CVaR for varying sample size and varying risk parameter in Figures 6-13.
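A standard way to set the supremum-distance radius $c_n^\infty$ is via the Dvoretzky-Kiefer-Wolfowitz inequality, $c_n^\infty = \sqrt{\log(2/\delta)/(2n)}$, with $c_n^1 = (b-a)c_n^\infty$ as above. The DKW constant here is an assumption for illustration; the paper's exact radii may differ:

```python
import math

def dkw_radii(n, delta, a=0.0, b=1.0):
    """Supremum- and Wasserstein-distance confidence radii at level 1 - delta,
    assuming the DKW bound (an assumption; constants may differ in the paper)."""
    c_sup = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    c_w1 = (b - a) * c_sup
    return c_sup, c_w1
```

With $N = 10^5$ and $\delta = 0.05$ as in the experiments, this gives a radius of roughly $4 \times 10^{-3}$ on $[0, 1]$.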

Algorithm 4 Supremum upper confidence bound
1: Input: $b$, samples $\mathbf{X} = X_{1}, X_{2}, \dots, X_{n}$, risk measure $\mathbf{T}$, $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$
3: Initialize $i = 1$, $l = 1$
4: while $\frac{i}{n} \leq c$ do
5: $i = i + 1$
6: $l = i$
7: end while
8: $\overline{F_{n}^{\infty}} = \left(\frac{l}{n} - c\right)\mathbb{I}\{X_{(l)} \leq \cdot\} + \frac{1}{n}\sum_{i=l+1}^{n}\mathbb{I}\{X_{(i)} \leq \cdot\} + c\,\mathbb{I}\{b \leq \cdot\}$
9: Output: $\mathbf{T}\left(\overline{F_{n}^{\infty}}\right)$
Algorithm 5 Supremum lower confidence bound
1: Input: $a$, samples $\mathbf{X} = X_{1}, X_{2}, \dots, X_{n}$, risk measure $\mathbf{T}$, $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$
3: Initialize $i = n$
4: while $\frac{i}{n} + c \geq 1$ do
5: $i = i - 1$
6: $l = i$
7: end while
8: $\underline{F_{n}^{\infty}} = c\,\mathbb{I}\{a \leq \cdot\} + \frac{1}{n}\sum_{i=1}^{l}\mathbb{I}\{X_{(i)} \leq \cdot\} + \left(1 - \frac{l}{n} - c\right)\mathbb{I}\{X_{(l+1)} \leq \cdot\}$
9: Output: $\mathbf{T}\left(\underline{F_{n}^{\infty}}\right)$
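The supremum-distance transformations admit a closed form without any loop: in Algorithm 4, $l$ is the smallest integer with $l/n > c$, and in Algorithm 5 the largest with $l/n + c < 1$. A sketch of both (names are our own; we assume $0 < c < 1$):

```python
import math
import numpy as np

def sup_ucb_dist(samples, b, c):
    """Shift mass c from the lowest order statistics to the bound b
    (as in Algorithm 4); returns (atoms, weights)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    l = math.floor(n * c) + 1                 # smallest l with l/n > c
    atoms = np.concatenate([[x[l - 1]], x[l:], [b]])
    weights = np.concatenate([[l / n - c], np.full(n - l, 1.0 / n), [c]])
    return atoms, weights

def sup_lcb_dist(samples, a, c):
    """Shift mass c from the highest order statistics to the bound a
    (as in Algorithm 5); returns (atoms, weights)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    l = math.ceil(n * (1.0 - c)) - 1          # largest l with l/n + c < 1
    atoms = np.concatenate([[a], x[:l], [x[l]]])
    weights = np.concatenate([[c], np.full(l, 1.0 / n), [1.0 - l / n - c]])
    return atoms, weights
```

Both functions return a valid probability distribution (weights summing to one), with the UCB distribution stochastically dominating the LCB one.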

G.2. CVaR Bandit

We adopt the same bandit instances as Tamkin et al. (2019); the parameters of the arm distributions are given in Table 1 of Tamkin et al. (2019). The left, middle, and right panels of Figure 14 plot the results for the easy bandit instance with $\alpha = 0.25$, the hard bandit instance with $\alpha = 0.25$, and the hard bandit instance with $\alpha = 0.05$, respectively. As expected, CVaR-UCB consistently outperforms LLC-UCB and GLC-UCB.


Figure 6: CVaR UCB with varying sample size


Figure 7: CVaR LCB with varying sample size


Figure 8: CVaR UCB with varying $\alpha$


Figure 9: CVaR LCB with varying $\alpha$


Figure 10: ERM CI with varying sample size


Figure 11: ERM CI with varying $\beta$


Figure 12: ERM CI with varying sample size


Figure 13: ERM CI with varying $\beta$


Figure 14: CVaR bandit