
A Closer Look at Generalized BH Algorithm for Out-of-Distribution Detection

Xinsong Ma1 Jie Wu1 Weiwei Liu1

Abstract

Out-of-distribution (OOD) detection is a crucial task in reliable and safety-critical applications. Previous studies primarily focus on developing score functions while neglecting the design of decision rules based on these scores. A recent work (Ma et al., 2024) is the first to highlight this issue and proposes the generalized BH (g-BH) algorithm to address it. The g-BH algorithm relies on empirical p-values, with the calibrated set playing a central role in their computation. However, the impact of the calibrated set on the performance of the g-BH algorithm has not been thoroughly investigated. This paper aims to uncover the underlying mechanism connecting them. Theoretically, we demonstrate that the conditional expectation of the true positive rate (TPR) given the calibrated set for the g-BH algorithm follows a beta distribution, whose parameters depend on the prescribed level and the size of the calibrated set. This indicates that a small calibrated set tends to degrade the performance of the g-BH algorithm. To address this limitation, we propose a novel ensemble g-BH (eg-BH) algorithm which integrates various empirical p-values for making decisions. Finally, extensive experimental results validate our theoretical findings and demonstrate the superiority of our method over the g-BH algorithm on small calibrated sets.

1. Introduction

Out-of-Distribution (OOD) detection is a critical task in machine learning and computer vision (Hendrycks & Gimpel, 2017; Liu et al., 2020). It addresses the challenge of determining whether a given input sample belongs to the same

$^{1}$ School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China. Correspondence to: Weiwei Liu liuweiwei863@gmail.com.

Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

distribution as the training data (Hendrycks et al., 2022; Djurisic et al., 2023). In real-world applications, models trained on in-distribution (ID) data from a specific domain often encounter OOD data from unseen distributions during deployment. This discrepancy between the training and deployment leads to poor model performance (Liang et al., 2018; Sastry & Oore, 2020; Kaur et al., 2022). The importance of OOD detection has grown with the increasing reliance on deep learning models in safety-critical applications, such as autonomous driving (Li et al., 2022) and medical diagnosis (Frolova et al., 2022).

Numerous studies have proposed various methods to address the OOD detection problem (Liu et al., 2020; 2023; Regmi et al., 2024; Lu et al., 2024). These methods mainly focus on designing the score functions which enables to learn critical discriminative information in training data. A recent work (Ma et al., 2024) is the first to point out that existing OOD detection methods neglect the systematic study of the decision rule based on the score functions, and propose a novel generalized BH (g-BH) algorithm to tackle this problem. The g-BH algorithm establishes a connection between score functions and multiple hypothesis testing framework through the empirical p-values. The calibrated set is crucial for computing empirical p-values and thus has a profound impact on the detection performance of the g-BH algorithm. However, to the best of our knowledge, the impact of the calibrated set on the performance of the g-BH algorithm remains unexplored. This paper aims to address the above issue.

Intuitively, a larger calibration set enhances the performance of the g-BH algorithm. Our experimental results in Figure 1 confirm this conjecture: as the size of calibrated set increases, both the TPR and F1-score monotonically increase. Theoretically, we demonstrate that the TPR expectation conditional on calibrated set for the g-BH algorithm follows a beta distribution, with its shape parameters determined by the prescribed significance level and the size of calibrated set. This shows that a smaller calibrated set tends to degrade the detection performance of the g-BH algorithm. To address the limitation of the g-BH algorithm on small calibrated set, we propose a novel ensemble g-BH (eg-BH) algorithm which integrates multiple empirical p-values for decision-making. Moreover, we extend the theoretical results on the g-BH algorithm from (Ma et al., 2024) and


Figure 1. OOD detection performance of the g-BH algorithm with varying sizes of the calibrated set: (a) SVHN, (b) Place365, (c) TinyImageNet. The score function is the Energy (Liu et al., 2020). The x-axis corresponds to the size of the calibrated set, and the y-axis represents the values of the metrics.

demonstrate that the eg-BH algorithm controls the false discovery rate (FDR) for p-values without clear structural dependence.

Finally, we conduct extensive experiments to verify the theoretical results on the conditional expectation of TPR and the effectiveness of the eg-BH algorithm. Experimental results demonstrate the superiority of our method over the g-BH algorithm on small calibrated set.

We summarize our core contributions as follows:

  • We empirically find that a larger calibrated set improves the performance of the g-BH algorithm, whereas a smaller calibrated set adversely affects its performance.
  • We theoretically demonstrate that the TPR expectation conditional on the calibrated set follows a beta distribution, with its shape parameters determined by the prescribed significance level and the size of the calibrated set, which supports our findings.
  • To address the limitation of the g-BH algorithm on the small calibrated set, we propose a novel eg-BH algorithm that integrates multiple empirical p-values for decision-making. Besides, our theoretical results provide a statistical guarantee for the integrated p-values in our eg-BH algorithm.
  • Extensive experimental results demonstrate the superiority of our method over the g-BH algorithm on small calibrated set.

2. Background

We denote by $\mathcal{X} \subseteq \mathbb{R}^d$ the feature space and by $\mathcal{Y} = \{1, 2, 3, \ldots, K\}$ the label space, with unknown joint distribution $\mathcal{P}$ over $\mathcal{X} \times \mathcal{Y}$; the marginal distribution on $\mathcal{X}$ is denoted $\mathcal{D}_x$.

During the prediction phase, it is typically assumed that the testing data are drawn from the same distribution $\mathcal{D}_x$ as the training data. However, in practical applications, test inputs

may originate from unseen distributions, whose corresponding label space may be disjoint from $\mathcal{Y}$. These OOD samples should be identified and excluded from prediction.

The objective of OOD detection is to identify OOD examples in the testing set. In prior work, the OOD detection task is formulated as a binary decision problem:

\phi(x) = \begin{cases} \mathrm{ID}, & \text{if } s(x) \geq s^{*} \\ \mathrm{OOD}, & \text{if } s(x) < s^{*} \end{cases} \tag{1}

where $s(\cdot)$ is the score function and the threshold $s^*$ is empirically selected before testing so that the true positive rate (TPR) on an ID validation set is $95\%$ (Sun et al., 2022; Wei et al., 2022).

Prior work on OOD detection has primarily focused on designing powerful score functions to capture discriminative information in ID data (Hendrycks & Gimpel, 2017; Liu et al., 2020; Djurisic et al., 2023; Liu et al., 2023). However, Ma et al. (2024) highlight that these studies lack a systematic study of the decision rule built on the score functions. Moreover, the decision rule in Eq. (1) is empirical and lacks a theoretical guarantee for its outputs. Different from previous studies, Ma et al. (2024) study the OOD detection problem from the perspective of multiple hypothesis testing and propose the g-BH algorithm to tackle it.
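As a concrete illustration of the decision rule in Eq. (1), the sketch below chooses $s^*$ as the score quantile that yields a 95% TPR on an ID validation set. The score values and the `threshold_detector` helper are hypothetical, not part of the original work.

```python
import numpy as np

def threshold_detector(scores, cal_scores, tpr_target=0.95):
    # Classical rule of Eq. (1): pick s* as the (1 - tpr_target) quantile of
    # the ID validation scores, so tpr_target of ID samples score at or above it.
    s_star = np.quantile(cal_scores, 1.0 - tpr_target)
    return np.where(np.asarray(scores) >= s_star, "ID", "OOD")

rng = np.random.default_rng(0)
id_val = rng.normal(5.0, 1.0, size=10_000)     # hypothetical ID validation scores
print(threshold_detector([6.0, 0.5], id_val))  # high score -> ID, low score -> OOD
```

As the paper notes, this rule gives no guarantee on the error rate of the resulting declarations, which motivates the multiple-testing view below.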

3. Multiple Hypothesis Testing Framework for OOD Detection

We first introduce the hypothesis testing framework for OOD detection in Ma et al. (2024). For mathematical convenience, we follow the notation of Ma et al. (2024). Given a testing set $\mathcal{T}^{test} = \{X_1^{test}, X_2^{test}, \ldots, X_n^{test}\}$, for $i = 1, \dots, n$, the OOD detection task is formulated as the following multiple hypothesis testing problem:

\begin{array}{l} H_{1;0}: X_{1}^{test} \sim \mathcal{D}_{x}, \quad H_{1;1}: X_{1}^{test} \nsim \mathcal{D}_{x} \\ \cdots \tag{2} \\ H_{n;0}: X_{n}^{test} \sim \mathcal{D}_{x}, \quad H_{n;1}: X_{n}^{test} \nsim \mathcal{D}_{x} \end{array}

where $H_{i;0}$ and $H_{i;1}$ are called null hypothesis and alternative hypothesis, respectively. Then, if $H_{i;0}$ is rejected, we declare that $X_{i}^{test}$ is OOD.

In statistics, the decision to accept or reject the null hypothesis is made based on the concept of the $p$ -value, which is generally defined as follows:

Definition 3.1. [p-value (Casella & Berger, 2002)] Given a sample $\widetilde{X}$, a statistic $p(\widetilde{X})$ is called a p-value corresponding to the null hypothesis $H_0$ if $p(\widetilde{X})$ satisfies

\mathbb{P}[p(\widetilde{X}) \leq t \mid H_{0}] \leq t \tag{3}

for every $0 \leq t \leq 1$ .

If the statistic $p(\widetilde{X})$ follows the uniform distribution on $(0, 1)$ under the null hypothesis, it is a valid p-value. A small p-value typically provides strong evidence against the null hypothesis. It is noteworthy that the p-value has a clear statistical interpretation. For example, if the p-value of an OOD testing example $X_{i}^{test}$ is 0.01, this implies that, for any subsequent ID testing example $X_{j}^{test}$, the probability that $X_{j}^{test}$ appears more similar to OOD data than $X_{i}^{test}$ is only 0.01. In other words, it is highly unlikely for an ID example to look as OOD-like as $X_{i}^{test}$, which provides strong evidence that $X_{i}^{test}$ is OOD.

Remark 3.2. In statistics, the following terminology characterizes the distribution of null p-values: if $\mathbb{P}[p(\widetilde{X})\leq t|H_0] = t$, the p-value $p(\widetilde{X})$ is called exact or uniform; if $\mathbb{P}[p(\widetilde{X})\leq t|H_0] < t$, $p(\widetilde{X})$ is called conservative. Compared to an exact p-value, a conservative one tends to understate the evidence against the null hypothesis.
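The distinction in Remark 3.2 can be checked numerically. The sketch below builds a p-value from a hypothetical discrete null statistic; it satisfies Definition 3.1 but is conservative at levels that fall between its attainable values.

```python
import numpy as np

# Hypothetical discrete null statistic, uniform on {0, ..., 9} under H0.
rng = np.random.default_rng(1)
k = rng.integers(0, 10, size=200_000)
p = (10 - k) / 10          # right-tail probability P(K >= k) under this null law
t = 0.25
# p attains only the values 0.1, 0.2, ..., 1.0, so at t = 0.25 we get
# P[p <= t | H0] = P(k >= 8) = 0.2 < 0.25: valid but conservative.
print(round(np.mean(p <= t), 3))
```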

Based on the work of Benjamini & Hochberg (1995), Ma et al. (2024) propose the g-BH algorithm to tackle the OOD detection problem. Define two function classes:

\begin{array}{l} \mathcal{F}_{1} = \{ f(x) : f_{+}(0) = 0,\ f'(x) > 0,\ \int_{0}^{1} \frac{1}{f(x)}\, dx \leq 1 \} \\ \mathcal{F}_{2} = \{ f(x) : f_{+}(0) = 0,\ f'(x) \geq 1 \}, \end{array}

where $f_{+}(0) = \lim_{x\to 0 + }f(x)$ for $x\in (0,1)$ . Based on $\mathcal{F}_1$ and $\mathcal{F}_2$ , the g-BH algorithm is defined as follows:

Definition 3.3 (g-BH algorithm (Ma et al., 2024)). Given the p-values $p_1, p_2, \dots, p_n$ corresponding to the null hypotheses $H_{1;0}, H_{2;0}, \dots, H_{n;0}$ , let $p_{(i)}$ be the $i$ -th order statistics from the smallest to the largest. For a pre-specified level $\alpha \in (0,1)$ , define

i_{g\text{-}BH}^{*} = \max\{ i \in [n] : f(p_{(i)}) \leq \tfrac{i}{n}\alpha \}, \tag{4}

where $f(\cdot)\in \mathcal{F}_1\cup \mathcal{F}_2$. Then, the null hypothesis $H_{(i);0}$ is rejected if $i\leq i_{g\text{-}BH}^{*}$.

Ma et al. (2024) demonstrate that the g-BH algorithm controls the false discovery rate (FDR) at the prescribed level $\alpha$ when the p-values are mutually independent or satisfy the positive regression dependence on subset (PRDS) condition (Benjamini & Yekutieli, 2001). The FDR is defined as

\mathrm{FDR} = \mathbb{E}\left[ \frac{|\mathcal{R} \cap \mathcal{H}_{0}|}{\max\{1, |\mathcal{R}|\}} \right],

where $\mathcal{R}$ is the set of indices of the rejected null hypotheses and $\mathcal{H}_0$ is the set of indices of the true null hypotheses.
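The step-up rule of Definition 3.3 can be sketched in a few lines. The `g_bh` helper below is illustrative; with the classical choice $f(x) = x$ it reduces to the Benjamini-Hochberg procedure, and any shape function in $\mathcal{F}_1 \cup \mathcal{F}_2$ can be passed in.

```python
import numpy as np

def g_bh(pvals, alpha, f=lambda x: x):
    # Definition 3.3: i* = max{i : f(p_(i)) <= (i/n) * alpha}; the i* smallest
    # p-values are rejected. f(x) = x recovers classical BH.
    pvals = np.asarray(pvals)
    n = pvals.size
    order = np.argsort(pvals)               # indices of p_(1) <= ... <= p_(n)
    ok = np.where(f(pvals[order]) <= (np.arange(1, n + 1) / n) * alpha)[0]
    reject = np.zeros(n, dtype=bool)
    if ok.size:
        reject[order[: ok[-1] + 1]] = True
    return reject

print(g_bh([0.001, 0.008, 0.04, 0.2, 0.9], alpha=0.05))  # → [ True  True False False False]
```

Here $n = 5$, so the step-up thresholds are $0.01, 0.02, \dots, 0.05$; only the two smallest p-values fall below their thresholds, so $i^* = 2$.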

4. Impact of Calibrated Set on Generalized BH Algorithm

In most multiple hypothesis testing literature (Benjamini & Hochberg, 1995; Benjamini & Yekutieli, 2001; Blanchard & Roquain, 2008; Delattre & Roquain, 2015; Cao et al., 2022), the p-values or the distribution of the testing statistic are assumed to be known. Denote by $F(\cdot)$ the cumulative distribution function of $s(X)$ where $s(\cdot)$ is the score function and $X \sim \mathcal{D}_x$ . Then, for a given example $X^{test}$ , its p-value can be expressed as

p(X^{test}) = \mathbb{P}_{X \sim \mathcal{D}_{x}}(s(X) \leq s(X^{test})) = F(s(X^{test})). \tag{5}

According to Definition 3.1, under $H_0$ (i.e., $X^{test}$ is ID data), we have

\mathbb{P}\left( F(s(X^{test})) \leq x \right) = \mathbb{P}\left( s(X^{test}) \leq F^{-1}(x) \right) = F(F^{-1}(x)) = x,

where $F^{-1}(\cdot)$ is the inverse function of $F(\cdot)$. Therefore, the random variable $F(s(X^{test}))$ follows the uniform distribution on $(0, 1)$; namely, $p(X^{test})$ is a valid and exact p-value. Obviously, a small score $s(X^{test})$ results in a small p-value, which aligns with the classical setting of OOD detection in Eq. (1) and the interpretation of the p-value.
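This probability integral transform argument is easy to verify numerically. The sketch below assumes a hypothetical standard logistic score distribution, whose CDF is available in closed form, and checks that $F(s(X^{test}))$ is uniform for ID draws.

```python
import numpy as np

# Eq. (5) under an assumed score law: p(X^test) = F(s(X^test)) is
# Uniform(0, 1) for ID inputs when F is the true score CDF.
rng = np.random.default_rng(2)
F = lambda s: 1.0 / (1.0 + np.exp(-s))   # CDF of the hypothetical score law
id_scores = rng.logistic(size=100_000)   # ID scores drawn from that law
p = F(id_scores)                          # exact p-values via Eq. (5)
for t in (0.1, 0.5, 0.9):
    print(t, round(np.mean(p <= t), 3))   # empirical P[p <= t] ≈ t
```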

However, in the context of OOD detection, we often have little prior information about the underlying distribution $F(\cdot)$. Hence, Ma et al. (2024) propose using empirical p-values in the g-BH algorithm, which are nonparametric estimates of the p-value $p(X^{test})$. Given a calibrated set $\mathcal{T}^{cal} = \{X_1^{cal}, X_2^{cal}, \ldots, X_m^{cal}\}$ consisting of ID examples, for a testing example $X_i^{test}$, the empirical p-value $p_i$ corresponding to the null hypothesis $H_{i;0}$ is defined as

p_{i} = \hat{p}(X_{i}^{test}) = \frac{\sum_{j=1}^{m} \mathbb{1}\left( s(X_{j}^{cal}) \leq s(X_{i}^{test}) \right) + 1}{m + 1}, \tag{6}


Figure 2. Density functions of the TPR conditional on the calibrated set for the g-BH algorithm with varying level $\alpha$ and size $m$ of the calibrated set: (a) $\alpha = 0.05$, (b) $\alpha = 0.08$, (c) $\alpha = 0.12$.

where $s(\cdot)$ is a given score function. Following Arlot et al. (2010), one can easily verify that the empirical p-value in Eq. (6) satisfies Definition 3.1.
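A minimal sketch of the empirical p-value in Eq. (6), with a hypothetical four-example calibrated set:

```python
import numpy as np

def empirical_pvalue(s_test, cal_scores):
    # Eq. (6): rank of the test score among the m calibrated ID scores,
    # shifted by +1 in both numerator and denominator.
    cal = np.asarray(cal_scores)
    count = np.sum(cal[None, :] <= np.atleast_1d(s_test)[:, None], axis=1)
    return (count + 1) / (cal.size + 1)

cal = np.array([0.2, 0.5, 0.7, 0.9])            # m = 4 hypothetical ID scores
print(empirical_pvalue([0.1, 0.6, 1.0], cal))   # → [0.2 0.6 1. ]
```

Note that the smallest attainable value is $1/(m+1)$, which is why a small calibrated set coarsens the p-values.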

Note that the g-BH algorithm makes decisions directly based on the empirical p-values. Therefore, the calibrated set plays a critical role in the OOD detection performance of the g-BH algorithm. However, to the best of our knowledge, no existing work thoroughly investigates the influence of the calibrated set on the performance of the g-BH algorithm. This paper aims to systematically study this important problem.

In this paper, we focus on the situation where only ID data are available before testing. Thus, we first investigate how the size $m$ of the calibrated set influences the conditional expectation of the TPR given the calibrated set $\mathcal{T}^{cal}$ for the g-BH algorithm: $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal})$.

To derive the distributional characteristics of the conditional expectation of the TPR, we need the concept of the empirical distribution function. Given the calibrated set $\mathcal{T}^{cal} = \{X_1^{cal}, X_2^{cal}, \ldots, X_m^{cal}\}$, for any input $x$, the empirical distribution $\hat{F}(\cdot)$ of the score function $s(\cdot)$ on $\mathcal{T}^{cal}$ can be expressed as

\hat{F}(x) = \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\left( s(X_{i}^{cal}) \leq s(x) \right) = \begin{cases} 0, & \text{if } s(x) < s(X_{(1)}^{cal}) \\ \frac{k}{m}, & \text{if } s(X_{(k)}^{cal}) \leq s(x) < s(X_{(k+1)}^{cal}) \\ 1, & \text{if } s(x) \geq s(X_{(m)}^{cal}), \end{cases}

where $k = 1,2,\dots ,m$ and $X_{(k)}^{cal}$ is the $k$ -th order statistic of $X_{1}^{cal},X_{2}^{cal},\ldots ,X_{m}^{cal}$ from the smallest to the largest. Obviously, we have

\hat{F}(X_{(k)}^{cal}) = \frac{k}{m}.

In addition, we denote by $[\cdot]$ the floor function. Then, we have the following theoretical result.

Theorem 4.1. Given the calibrated set $\mathcal{T}^{cal} = \{X_1^{cal}, X_2^{cal}, \ldots, X_m^{cal}\}$ and a continuous score function $s(\cdot)$, for a prescribed level $\alpha$, the density function of $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal})$ for the $g$-BH algorithm can be expressed as

f_{tpr}(x) = \begin{cases} m \binom{m-1}{\beta-1} x^{m-\beta} (1-x)^{\beta-1}, & \text{if } 0 < x < 1 \\ 0, & \text{otherwise}, \end{cases}

where

\beta = \left[ f^{-1}(\alpha)(m+1) \right] - 1

and $f\in \mathcal{F}_1\cup \mathcal{F}_2$; namely, $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal})$ follows the beta distribution $\mathcal{B}e(m - \beta + 1, \beta)$.

The proof of Theorem 4.1 is presented in Appendix A.1. Theorem 4.1 indicates that the pre-specified level $\alpha$ and the size $m$ of the calibrated set play pivotal roles in determining the distributional characteristics of the g-BH algorithm. We visualize the probability distribution of $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal})$ with different $\alpha$ and calibrated-set size $m$ in Figure 2. From Figure 2, we find that if we choose a larger $\alpha$, the g-BH algorithm tends to achieve a smaller TPR with appreciable probability. For example, with $m = 200$ calibration examples, we have $\mathbb{P}(\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) \leq 0.9) = 0.14$ for $\alpha = 0.08$ and $\mathbb{P}(\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) \leq 0.85) = 0.81$ for $\alpha = 0.12$. The reason behind this phenomenon is that a larger $\alpha$ induces the g-BH algorithm to adopt a more aggressive decision-making strategy. In other words, the g-BH algorithm tends to classify more testing examples as OOD. More importantly, a small calibrated set causes the g-BH algorithm to achieve poor detection performance in terms of TPR. In contrast, a large calibrated set easily ensures a high TPR with high probability. For example, with $\alpha = 0.05$, we obtain $\mathbb{P}(\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) \leq 0.9) = 0.32$ for $m = 200$, and $\mathbb{P}(\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) \leq 0.9) < 0.01$ for $m = 500$. Therefore, a large calibrated set significantly improves the performance of the g-BH algorithm. In Section 6, we also conduct extensive experiments on real datasets to validate this conclusion.
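The role of $m$ in Theorem 4.1 can be explored by sampling from the stated beta distribution. The sketch below assumes the identity shape function ($f(x) = x$, so $f^{-1}(\alpha) = \alpha$); `tpr_beta_params` is an illustrative helper, and the printed standard deviations show the spread of $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal})$ shrinking as the calibrated set grows.

```python
import numpy as np

def tpr_beta_params(m, alpha, f_inv=lambda a: a):
    # Theorem 4.1 with the assumed shape function: beta = [f^{-1}(alpha)(m+1)] - 1,
    # and E(TPR | T^cal) ~ Be(m - beta + 1, beta).
    beta = int(np.floor(f_inv(alpha) * (m + 1))) - 1
    return m - beta + 1, beta

rng = np.random.default_rng(3)
for m in (50, 200, 1000):
    a, b = tpr_beta_params(m, alpha=0.05)
    draws = rng.beta(a, b, size=100_000)  # Monte Carlo draws of E(TPR | T^cal)
    print(f"m={m}: mean={draws.mean():.3f}, std={draws.std():.4f}")
# The standard deviation shrinks as m grows: a small calibrated set leaves the
# conditional TPR highly variable, matching the discussion above.
```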

5. Ensemble Generalized BH Algorithm for Small Calibrated Set

In Section 4, we demonstrated that a large calibrated set tends to improve the detection performance of the g-BH algorithm, whereas a small calibrated set easily leads to poor results. In practice, we usually set aside a portion of the training set to serve as the calibrated set. If the training set is small, a large calibrated set consumes more of the training data and can lead to underfitting of the neural networks. To address this challenge, we propose the ensemble g-BH (eg-BH) algorithm.

Our motivation arises from the practical significance of p-values. A small calibrated set leads to under-representative empirical p-values, which fail to capture the distributional characteristics of the ID data. To address this issue, a natural approach is to generate multiple empirical p-values using the entire training set, thereby fully utilizing the available information. We then integrate these empirical p-values to make decisions.

For the given training data $\mathcal{T}$, we first partition $\mathcal{T}$ into $\mathcal{T}_1, \mathcal{T}_2, \dots, \mathcal{T}_L$. Denote $\mathcal{T}_i^{cal} = \mathcal{T}_i$ and $\mathcal{T}_{-i}^{train} = \mathcal{T} \setminus \mathcal{T}_i$, where "$\setminus$" is the set difference. Then, we train the score function $s(\cdot)$ on $\mathcal{T}_{-i}^{train}$; for simplicity, we denote by $s_i(\cdot)$ the score trained on $\mathcal{T}_{-i}^{train}$. Besides, denote by $|\mathcal{T}_i^{cal}|$ the size of $\mathcal{T}_i^{cal}$. Hence, for a testing example $X^{test}$, we can compute various empirical p-values using the trained score function and calibrated set pairs $\{s_i(\cdot), \mathcal{T}_i^{cal}\}_{i=1}^L$:

\hat{p}_{i}(X^{test}) = \frac{\left| \{ X^{cal} \in \mathcal{T}_{i}^{cal} : s_{i}(X^{cal}) \leq s_{i}(X^{test}) \} \right| + 1}{|\mathcal{T}_{i}^{cal}| + 1}

for $i = 1,2,\dots ,L$ . After computing the empirical p-values $\hat{p}_1(X^{test}),\hat{p}_2(X^{test}),\dots ,\hat{p}_L(X^{test})$ , the next problem is how to integrate these empirical p-values. A direct approach is to average them:

\bar{p}(X^{test}) = \frac{\hat{p}_{1}(X^{test}) + \hat{p}_{2}(X^{test}) + \cdots + \hat{p}_{L}(X^{test})}{L}.

Unfortunately, $\bar{p}(X^{test})$ does not necessarily satisfy the definition of the p-value (Ruschendorf, 1982; Meng, 1994).

To obtain a more general method of integrating the p-values, we first introduce a universal notion of average (Kolmogorov & Castelnuovo, 1930): given the p-values $\mathbf{p} = \{p_1, p_2, \dots, p_L\}$ and weights $\mathbf{w} = \{w_1, w_2, \dots, w_L\}$, where $w_i > 0$ and $\sum_{i=1}^{L} w_i = 1$, define

\Omega(\mathbf{p}, \mathbf{w}) = g^{-1}\left( w_{1} g(p_{1}) + w_{2} g(p_{2}) + \dots + w_{L} g(p_{L}) \right)

where $g(\cdot)$ is a continuous and strictly monotonic function, and $g^{-1}(\cdot)$ is its inverse function. If $w_{i} = \frac{1}{L}$ , $\Omega (\cdot)$ is the

Algorithm 1 eg-BH algorithm

1: Input: Training data $\mathcal{T}$, testing set $\mathcal{T}^{test} = \{X_1^{test}, X_2^{test}, \ldots, X_n^{test}\}$, prescribed level $\alpha \in (0, 1)$.
2: Partition $\mathcal{T}$ into $\mathcal{T}_1, \mathcal{T}_2, \dots, \mathcal{T}_L$, and let $\mathcal{T}_i^{cal} = \mathcal{T}_i$ and $\mathcal{T}_{-i}^{train} = \mathcal{T} \setminus \mathcal{T}_i$.

3: for $j = 1$ to L do

4: Train the score function $s(x)$ on $\mathcal{T}_{-j}^{train}$, and denote by $s_j(\cdot)$ the score trained on $\mathcal{T}_{-j}^{train}$.

5: end for
6: for $i = 1$ to $n$ do
7: for $j = 1$ to $L$ do
8: Compute the empirical p-values for testing example $X_{i}^{test}$ based on $s_j(\cdot)$ and $\mathcal{T}_{j}^{cal}$:

\hat{p}_{i,j} = \frac{\left| \{ X \in \mathcal{T}_{j}^{cal} : s_{j}(X) \leq s_{j}(X_{i}^{test}) \} \right| + 1}{|\mathcal{T}_{j}^{cal}| + 1}

9: end for

10: Integrate the empirical p-values $\hat{p}_{i,1}, \dots, \hat{p}_{i,L}$ for $X_{i}^{test}$:

\bar{p}_{i} = \left( (\kappa + 1) \sum_{j=1}^{L} w_{j} \hat{p}_{i,j}^{\kappa} \right)^{\frac{1}{\kappa}} \tag{7}

11: end for

12: Compute $i^{*} = \max\{ i \in [n] : f(\bar{p}_{(i)}) \leq \frac{i}{n}\alpha \}$, where $\bar{p}_{(i)}$ is the $i$-th order statistic from the smallest to the largest of $\bar{p}_1, \dots, \bar{p}_n$.

13: Output: Declare that $X_{(i)}^{test}$ is OOD if $i \leq i^*$; the rest are ID.

arithmetic mean when $g(x) = x$ ; $\Omega(\cdot)$ is the geometric mean when $g(x) = \log x$ ; $\Omega(\cdot)$ is the harmonic mean when $g(x) = \frac{1}{x}$ . For a random variable $X$ , its $\alpha$ -quantile is defined as

Q(X, \alpha) = \sup\{ x \in \mathbb{R} : \mathbb{P}(X \leq x) < \alpha \}.

Clearly, $Q(X,1)$ is the essential supremum of $X$. In addition, denote by $\mathcal{P}$ the set of all p-values. Supposing that the function $h(\cdot):[0,1]^L\to [0,\infty)$ is continuous and increasing, we define

Q^{*}(h, \mathbf{p}, \alpha) = \inf_{p_{i} \in \mathcal{P}} \left\{ Q\left( h(p_{1}, \dots, p_{L}), \alpha \right) \right\}

where $\mathbf{p} = \{p_1, \dots, p_L\}$. The following theoretical result provides a concise method for integrating multiple p-values.

Theorem 5.1. Given the empirical $p$ -values $p_1, p_2, \dots, p_L$ and the function $g(x) = x^{\kappa}$ where $\kappa > 0$ , then

\left( (\kappa + 1)\left( w_{1} p_{1}^{\kappa} + w_{2} p_{2}^{\kappa} + \dots + w_{L} p_{L}^{\kappa} \right) \right)^{\frac{1}{\kappa}}

is a valid $p$ -value. Specifically,

\frac{2(p_{1} + p_{2} + \cdots + p_{L})}{L}

and

\max\{ p_{1}, p_{2}, \dots, p_{L} \}

are valid $p$-values.

The proof of Theorem 5.1 is presented in Appendix A.3. According to Theorem 5.1, we can choose an appropriate function $g(x) = x^{\kappa}$ to integrate various empirical p-values for decision-making. The following theorem provides a consideration for the choice of $\kappa$ in $g(x)$.

Theorem 5.2. Given the empirical $p$-values $p_1, p_2, \dots, p_L$ and the function $g(x) = x^{\kappa}$, denote $w^* = \max\{w_1, w_2, \dots, w_L\}$. If $w^* \leq \min\left\{\frac{1}{2}, \frac{1}{1 + \kappa}, \frac{\kappa}{1 + \kappa}\right\}$, then we have

\sup_{p_{i} \in \mathcal{P}} \left\{ \mathbb{P}\left( \tilde{h}(\mathbf{p}) \leq \alpha \right) \right\} = \alpha,

where $\tilde{h}(\mathbf{p}) = ((\kappa + 1)(w_1 p_1^{\kappa} + w_2 p_2^{\kappa} + \dots + w_L p_L^{\kappa}))^{\frac{1}{\kappa}}$.

The proof of Theorem 5.2 is presented in Appendix A.4. Theorem 5.2 indicates that if we choose the weights such that $w^* \leq \min\left\{\frac{1}{2}, \frac{1}{1 + \kappa}, \frac{\kappa}{1 + \kappa}\right\}$, the integrated p-value $\tilde{h}(\mathbf{p})$ can be exact, which benefits the power of the hypothesis testing algorithm. Based on the analysis above, we summarize our proposed method, called the ensemble g-BH (eg-BH) algorithm, in Algorithm 1.
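Putting the pieces together, the following is a minimal sketch of Algorithm 1 under simplifying assumptions: equal weights $w_j = 1/L$ (which satisfy the condition of Theorem 5.2 for $\kappa = 1$ and $L = 5$), shape function $f(x) = x$, and synthetic Gaussian scores standing in for the $L$ separately trained score functions.

```python
import numpy as np

def eg_bh(test_scores, cal_score_sets, alpha, kappa=1.0):
    # Sketch of Algorithm 1: empirical p-values per split (Eq. 6), integration
    # via Eq. (7) with equal weights, then the g-BH step-up rule with f(x) = x.
    L = len(cal_score_sets)
    n = test_scores[0].size
    phat = np.empty((n, L))
    for j in range(L):
        cal = np.sort(np.asarray(cal_score_sets[j]))
        ranks = np.searchsorted(cal, test_scores[j], side="right")  # #{cal <= s}
        phat[:, j] = (ranks + 1) / (cal.size + 1)
    pbar = np.minimum(((kappa + 1) * np.mean(phat ** kappa, axis=1)) ** (1 / kappa), 1.0)
    order = np.argsort(pbar)
    ok = np.where(pbar[order] <= (np.arange(1, n + 1) / n) * alpha)[0]
    reject = np.zeros(n, dtype=bool)
    if ok.size:
        reject[order[: ok[-1] + 1]] = True
    return reject  # True = declared OOD

rng = np.random.default_rng(4)
cal_sets = [rng.normal(5, 1, 200) for _ in range(5)]                 # L = 5 ID splits
test = np.concatenate([rng.normal(5, 1, 50), rng.normal(0, 1, 50)])  # 50 ID + 50 OOD
scores = [test + rng.normal(0, 0.05, test.size) for _ in range(5)]   # noisy score copies
print(eg_bh(scores, cal_sets, alpha=0.05).sum(), "examples declared OOD")
```

On this synthetic data the well-separated OOD scores are flagged while almost all ID examples are retained; in practice the $L$ score functions come from the retraining loop in lines 3-5 of Algorithm 1.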

6. Experiments

In this section, we aim to verify the effectiveness of Theorem 4.1 and the superiority of our proposed eg-BH algorithm over the g-BH algorithm. Our experimental framework is based on Ma et al. (2024) and uses the same evaluation metrics. The experimental results show the superiority of the eg-BH algorithm over the g-BH algorithm on small calibrated sets.

6.1. Experimental Settings

Scores. We choose two well-known methods, MSP (Hendrycks & Gimpel, 2017) and Energy (Liu et al., 2020), as the score functions in our method.

Benchmarks. We use CIFAR-10 (Krizhevsky et al., 2009) as ID data, and use CIFAR-100, TinyImageNet (Krizhevsky et al., 2017), SVHN (Netzer et al., 2011), Fashion-MNIST (F-MNIST) (Xiao et al., 2017), Places365 (Zhou et al., 2018) and MNIST (Deng, 2012) as OOD data.

Metrics. We use the same practical evaluation metrics as Ma et al. (2024), including TPR, FPR and F1-score.

Model. The score functions in this paper are based on ResNet18 and WideResNet, respectively. We mainly follow the experimental implementation in (Yang et al., 2022; Zhang et al., 2023a), and our code is based on (Zhang et al., 2023a); more details can be found there.

6.2. Impact of Calibrated Set on Generalized BH Algorithm

In this experiment, we aim to reveal how the calibrated set influences the detection performance of the g-BH algorithm. We first split the training data equally into two parts. One part is employed to train the neural networks for constructing the score function, and the other serves as the largest calibrated set $\mathcal{T}_M^{cal}$. Then, from $\mathcal{T}_M^{cal}$, we extract samples at various proportions $r$ to construct several smaller calibrated sets, where $r \in \{0.2, 0.3, \dots, 1.0\}$. The experimental results of the practical metrics based on the Energy (Liu et al., 2020) are presented in Tables 1 and 3. The results based on the MSP (Hendrycks & Gimpel, 2017) are presented in Tables 2 and 4. Because of space limitations, all experimental results using MNIST as OOD data are presented in Appendix B.

From Table 1, we find that as the size of the calibrated set increases, the evaluation metrics TPR and F1-score increase considerably, accompanied by a marginal rise in FPR. For example, we use SVHN as the OOD data and the Energy score function based on ResNet18. When the sampling ratio $r = 0.2$, the F1-score, TPR and FPR of the g-BH algorithm are $52.25\%$, $35.41\%$ and $0.05\%$, respectively. When $r = 0.8$, the corresponding F1-score, TPR and FPR are $75.78\%$, $61.51\%$ and $0.31\%$, respectively, which amounts to direct improvements of $23.53\%$ and $26.10\%$ in F1-score and TPR at a negligible cost of a $0.26\%$ increase in FPR. Notably, as shown in Tables 2, 3 and 4, this trend is consistent for the other score function (MSP), the WideResNet architecture and other OOD data. Therefore, a large calibrated set improves the performance of the g-BH algorithm without depending on distributional assumptions about the OOD data. The above analysis demonstrates the effectiveness of Theorem 4.1.

6.3. Comparison between g-BH and eg-BH on Small Calibrated Set

In this experiment, we aim to compare the detection performance of the vanilla g-BH algorithm and our proposed eg-BH algorithm on small calibrated sets. We first randomly divide the training data into $L$ equal parts. For the g-BH algorithm, one of these parts is used as the calibrated set; note that a larger $L$ implies a smaller calibrated set. For our proposed method, we directly apply the strategy in Algorithm 1. When $L = 5$, the corresponding experimental results of the practical metrics are presented in Tables 5 and 6.

As shown in Tables 5 and 6, we observe the FPR of our pro

Table 1. Experimental results (%) of practical metrics on CIFAR-10 as ID data. Energy (Liu et al., 2020) is used as the score function based on the ResNet18. We compare the detection performance of g-BH algorithm with different sizes of calibrated set.

| Ratio | CIFAR-100 (F1 / TPR / FPR) | TinyImageNet (F1 / TPR / FPR) | SVHN (F1 / TPR / FPR) | Place365 (F1 / TPR / FPR) | F-MNIST (F1 / TPR / FPR) |
|---|---|---|---|---|---|
| 0.2 | 63.63 / 47.12 / 0.98 | 33.06 / 32.47 / 0.29 | 52.25 / 35.41 / 0.05 | 51.84 / 35.35 / 0.19 | 62.94 / 45.99 / 0.68 |
| 0.3 | 66.06 / 49.91 / 1.20 | 39.76 / 34.58 / 0.35 | 54.81 / 37.80 / 0.05 | 54.01 / 37.53 / 0.24 | 65.74 / 47.83 / 0.97 |
| 0.4 | 67.88 / 52.07 / 1.35 | 43.34 / 36.19 / 0.40 | 58.03 / 40.95 / 0.07 | 56.00 / 39.31 / 0.30 | 68.16 / 52.79 / 1.42 |
| 0.5 | 70.19 / 54.92 / 1.56 | 45.27 / 40.63 / 0.55 | 62.27 / 45.31 / 0.08 | 58.11 / 41.48 / 0.35 | 70.95 / 54.98 / 1.46 |
| 0.6 | 71.90 / 57.99 / 1.61 | 53.81 / 43.04 / 0.67 | 65.66 / 49.00 / 0.10 | 59.67 / 42.37 / 0.44 | 72.78 / 57.51 / 1.69 |
| 0.7 | 74.35 / 60.54 / 2.31 | 57.75 / 46.94 / 0.87 | 73.18 / 58.05 / 0.23 | 61.27 / 44.92 / 0.47 | 75.58 / 59.25 / 2.15 |
| 0.8 | 76.16 / 63.53 / 3.30 | 62.05 / 52.61 / 1.41 | 75.78 / 61.51 / 0.31 | 66.15 / 48.26 / 0.87 | 76.52 / 62.98 / 2.47 |
| 0.9 | 77.90 / 64.94 / 3.95 | 65.91 / 53.63 / 1.49 | 79.71 / 82.34 / 5.42 | 68.92 / 56.14 / 1.11 | 78.43 / 67.61 / 4.79 |
| 1.0 | 79.26 / 69.50 / 5.87 | 70.52 / 64.03 / 3.51 | 84.03 / 86.68 / 11.64 | 73.84 / 71.76 / 5.36 | 80.74 / 70.68 / 6.51 |

Table 2. Experimental results (%) of practical metrics on CIFAR-10 as ID data. MSP (Hendrycks & Gimpel, 2017) is used as the score function based on the ResNet18. We compare the detection performance of g-BH algorithm with different sizes of calibrated set.

| Ratio | CIFAR-100 (F1 / TPR / FPR) | TinyImageNet (F1 / TPR / FPR) | SVHN (F1 / TPR / FPR) | Place365 (F1 / TPR / FPR) | F-MNIST (F1 / TPR / FPR) |
|---|---|---|---|---|---|
| 0.2 | 62.27 / 45.62 / 0.91 | 48.49 / 32.47 / 0.29 | 51.55 / 34.76 / 0.04 | 50.31 / 33.81 / 0.16 | 62.12 / 45.36 / 0.68 |
| 0.3 | 64.89 / 48.56 / 1.11 | 50.08 / 33.95 / 0.33 | 55.67 / 38.63 / 0.06 | 53.42 / 36.75 / 0.23 | 64.89 / 48.45 / 0.89 |
| 0.4 | 67.88 / 52.07 / 1.35 | 52.38 / 36.19 / 0.40 | 58.03 / 40.95 / 0.07 | 54.19 / 37.51 / 0.25 | 67.84 / 51.93 / 1.17 |
| 0.5 | 70.19 / 54.92 / 1.56 | 53.81 / 37.64 / 0.45 | 61.22 / 44.21 / 0.08 | 57.02 / 40.37 / 0.33 | 70.20 / 54.83 / 1.39 |
| 0.6 | 71.81 / 57.00 / 1.76 | 55.64 / 39.53 / 0.51 | 65.66 / 49.00 / 0.10 | 60.16 / 43.68 / 0.42 | 71.81 / 56.89 / 1.56 |
| 0.7 | 74.35 / 60.54 / 2.31 | 58.74 / 42.97 / 0.67 | 67.76 / 51.39 / 0.12 | 62.77 / 46.63 / 0.53 | 74.37 / 60.50 / 2.20 |
| 0.8 | 75.26 / 62.03 / 2.82 | 62.05 / 46.94 / 0.87 | 71.67 / 56.10 / 0.17 | 65.41 / 49.92 / 0.74 | 75.19 / 61.76 / 2.51 |
| 0.9 | 78.49 / 67.87 / 5.06 | 65.42 / 51.84 / 1.33 | 75.78 / 61.51 / 0.31 | 68.82 / 54.55 / 1.09 | 76.98 / 64.80 / 3.56 |
| 1 | 81.28 / 77.63 / 13.40 | 69.63 / 70.05 / 6.23 | 79.71 / 86.34 / 11.64 | 74.26 / 70.23 / 5.18 | 79.58 / 70.25 / 6.30 |

Table 3. Experimental results (%) of practical metrics on CIFAR-10 as ID data. Energy (Liu et al., 2020) is used as the score function based on the WideResNet. We compare the detection performance of g-BH algorithm with different sizes of calibrated set.

| Ratio | CIFAR-100 (F1 / TPR / FPR) | TinyImageNet (F1 / TPR / FPR) | SVHN (F1 / TPR / FPR) | Place365 (F1 / TPR / FPR) | F-MNIST (F1 / TPR / FPR) |
|---|---|---|---|---|---|
| 0.2 | 63.52 / 50.19 / 4.53 | 46.42 / 35.22 / 2.49 | 38.08 / 36.97 / 1.97 | 48.78 / 36.41 / 1.32 | 64.30 / 47.85 / 1.77 |
| 0.3 | 65.84 / 50.88 / 5.99 | 48.92 / 36.49 / 3.24 | 52.95 / 39.86 / 2.49 | 52.64 / 37.58 / 1.79 | 65.89 / 51.21 / 2.25 |
| 0.4 | 67.22 / 52.79 / 7.24 | 50.15 / 38.84 / 4.08 | 56.71 / 41.48 / 3.05 | 54.88 / 39.73 / 2.04 | 67.19 / 53.79 / 2.86 |
| 0.5 | 68.85 / 56.17 / 8.06 | 50.79 / 40.16 / 4.99 | 57.46 / 43.15 / 3.41 | 56.49 / 40.89 / 2.65 | 70.53 / 55.84 / 3.59 |
| 0.6 | 70.28 / 59.06 / 8.55 | 53.84 / 42.55 / 5.71 | 59.25 / 45.89 / 3.97 | 58.34 / 44.25 / 3.02 | 73.48 / 59.39 / 4.01 |
| 0.7 | 73.43 / 61.72 / 9.09 | 54.75 / 44.63 / 6.38 | 62.94 / 47.74 / 4.33 | 61.75 / 48.84 / 3.48 | 75.59 / 62.41 / 4.64 |
| 0.8 | 74.58 / 64.66 / 13.27 | 56.88 / 48.18 / 7.45 | 64.77 / 50.91 / 4.82 | 64.48 / 53.79 / 3.99 | 77.11 / 64.29 / 5.29 |
| 0.9 | 77.53 / 67.18 / 14.62 | 60.13 / 66.09 / 10.04 | 65.96 / 52.76 / 5.12 | 66.52 / 56.68 / 4.51 | 79.49 / 69.49 / 6.89 |
| 1 | 78.97 / 78.73 / 22.79 | 61.52 / 67.75 / 12.84 | 67.18 / 56.57 / 5.79 | 70.16 / 71.41 / 8.79 | 83.09 / 82.23 / 16.52 |

Table 4. Experimental results (%) of practical metrics on CIFAR-10 as ID data. MSP (Hendrycks & Gimpel, 2017) is used as the score function based on the WideResNet. We compare the detection performance of g-BH algorithm with different sizes of calibrated set.

| Ratio | CIFAR-100 (F1 / TPR / FPR) | TinyImageNet (F1 / TPR / FPR) | SVHN (F1 / TPR / FPR) | Place365 (F1 / TPR / FPR) | F-MNIST (F1 / TPR / FPR) |
|---|---|---|---|---|---|
| 0.2 | 62.65 / 48.10 / 5.45 | 45.95 / 34.11 / 2.87 | 43.45 / 36.49 / 2.13 | 49.95 / 35.13 / 1.52 | 64.30 / 48.30 / 1.93 |
| 0.3 | 63.95 / 49.74 / 5.82 | 47.27 / 35.68 / 3.06 | 53.15 / 38.56 / 2.51 | 52.33 / 37.66 / 1.72 | 65.80 / 50.09 / 2.15 |
| 0.4 | 66.19 / 52.69 / 6.52 | 48.99 / 37.84 / 3.33 | 55.24 / 40.85 / 2.71 | 53.86 / 39.34 / 1.84 | 68.04 / 52.88 / 2.55 |
| 0.5 | 67.60 / 54.74 / 7.21 | 49.96 / 39.12 / 3.50 | 56.90 / 42.82 / 2.96 | 55.39 / 41.05 / 1.97 | 69.76 / 55.18 / 3.03 |
| 0.6 | 69.91 / 58.43 / 8.72 | 52.21 / 42.06 / 3.81 | 58.45 / 44.69 / 3.16 | 58.34 / 44.62 / 2.29 | 72.01 / 58.31 / 3.64 |
| 0.7 | 70.69 / 59.79 / 9.37 | 53.01 / 43.26 / 3.99 | 60.24 / 46.97 / 3.44 | 60.37 / 47.35 / 2.61 | 74.07 / 61.43 / 4.45 |
| 0.8 | 73.31 / 64.87 / 12.11 | 55.25 / 47.07 / 4.66 | 61.43 / 48.49 / 3.60 | 64.49 / 53.90 / 3.63 | 75.52 / 63.91 / 5.34 |
| 0.9 | 74.66 / 67.93 / 14.03 | 59.25 / 66.70 / 11.69 | 63.33 / 51.20 / 4.03 | 65.23 / 55.37 / 3.94 | 78.27 / 68.94 / 7.21 |
| 1 | 76.76 / 78.66 / 26.30 | 59.56 / 63.51 / 14.95 | 65.50 / 55.82 / 5.61 | 68.14 / 69.35 / 9.37 | 81.22 / 81.04 / 18.51 |


Figure 3. Comparison between the g-BH algorithm and our eg-BH algorithm in terms of F1-score and TPR on (a) CIFAR-100, (b) TinyImageNet, (c) Place365 and (d) SVHN. The x-axis corresponds to the number $L$ in Algorithm 1, and the y-axis represents the value of the metrics.

Table 5. Experimental results (%) of practical metrics on CIFAR-10 as ID data. The score function is Energy based on ResNet18. We compare the performance between g-BH and eg-BH based on the same score function.

| Data | g-BH (F1 / TPR / FPR) | eg-BH (F1 / TPR / FPR) |
|---|---|---|
| CIFAR-100 | 82.16 / 88.59 / 27.05 | 91.17 / 92.91 / 25.80 |
| TinyImageNet | 59.34 / 53.51 / 34.45 | 73.25 / 70.43 / 30.02 |
| SVHN | 75.16 / 59.13 / 29.06 | 89.00 / 79.43 / 31.30 |
| Place365 | 56.07 / 64.16 / 33.09 | 78.55 / 84.71 / 31.50 |
| F-MNIST | 84.89 / 87.91 / 23.82 | 88.10 / 90.26 / 24.05 |
| MNIST | 79.09 / 88.23 / 14.87 | 82.77 / 89.91 / 12.60 |
| Average | 69.45 / 73.59 / 27.06 | 83.81 / 84.61 / 25.88 |

Table 6. Experimental results (%) of practical metrics on CIFAR-10 as ID data. The score function is MSP based on ResNet18. We compare the performance between g-BH and eg-BH based on the same score function.

| Data | g-BH (F1 / TPR / FPR) | eg-BH (F1 / TPR / FPR) |
|---|---|---|
| CIFAR-100 | 80.70 / 82.53 / 25.60 | 89.09 / 87.28 / 25.48 |
| TinyImageNet | 73.25 / 78.43 / 33.02 | 82.78 / 88.05 / 31.55 |
| SVHN | 86.00 / 87.43 / 4.30 | 91.17 / 92.15 / 4.06 |
| Place365 | 78.55 / 83.99 / 14.50 | 86.08 / 90.17 / 13.09 |
| F-MNIST | 85.26 / 82.63 / 21.79 | 90.11 / 89.27 / 19.05 |
| MNIST | 81.26 / 85.30 / 19.87 | 86.78 / 90.92 / 16.70 |
| Average | 80.84 / 83.39 / 19.85 | 87.67 / 89.64 / 18.32 |

posed eg-BH algorithm achieves a certain improvement over the g-BH algorithm. More significantly, the TPR and F1-score are considerably improved. For example, when using Energy as the score function and TinyImageNet

as the OOD data, compared to the g-BH algorithm, our method reduces the FPR from $34.45\%$ to $30.02\%$, improves the TPR by $16.92\%$ and improves the F1-score by $20.91\%$. This improvement persists for different OOD data and for the score function MSP. The above analysis demonstrates the superiority of our method over the g-BH algorithm on small calibrated sets.

To assess the impact of $L$ on both the g-BH and eg-BH algorithms, we set $L \in \{6, 7, 8, 9, 10\}$ and conduct the corresponding experiments using Energy as the score function based on ResNet18. The experimental results are presented in Figure 3, from which we find that our proposed method outperforms the g-BH algorithm in terms of F1-score and TPR across different values of $L$. Moreover, for larger values of $L$ (i.e., smaller calibrated sets), the performance gap between our method and the g-BH algorithm becomes more pronounced, especially when $L = 9$ and $L = 10$. This demonstrates the superiority of our proposed eg-BH algorithm over the g-BH algorithm on smaller calibrated sets.
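The p-value merging step of eg-BH can be sketched as follows, assuming the equal-weight merger of Theorem 5.1 with the result capped at 1 (the function name and the cap are our illustration):

```python
import numpy as np

def merged_pvalue(ps, kappa=1.0):
    """Merge L p-values via (kappa + 1)^{1/kappa} * (mean of p_i^kappa)^{1/kappa}
    (Theorem 5.1 with equal weights w_i = 1/L); capped at 1 for convenience."""
    ps = np.asarray(ps, dtype=float)
    merged = (kappa + 1.0) ** (1.0 / kappa) * np.mean(ps ** kappa) ** (1.0 / kappa)
    return min(1.0, merged)
```

For $\kappa = 1$ this reduces to twice the average p-value, e.g. `merged_pvalue([0.1, 0.2, 0.3])` gives approximately 0.4; the merged value then plays the role of the single empirical p-value in the g-BH decision rule.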

7. Conclusion

In this paper, we thoroughly analyze the g-BH algorithm and demonstrate that a large calibrated set improves the performance of the g-BH algorithm in terms of TPR, while a small calibrated set weakens its performance. To address this issue, we propose a novel eg-BH algorithm that integrates multiple p-values for decision-making. Extensive experiments demonstrate the validity of our theoretical results and the superiority of our method over the g-BH algorithm.

Acknowledgment

This work is supported by the Key R&D Program of Hubei Province under Grant 2024BAB038, the National Key R&D Program of China under Grant 2023YFC3604702, and the Fundamental Research Funds for the Central Universities under Grant 2042025kf0045.

Impact Statement

To the best of our knowledge, this work has no negative social impact. It mainly provides solid theoretical support for the field of OOD detection and noticeably improves the performance of existing methods. Hence, our work may promote the development of related applications.

References

Arlot, S., Blanchard, G., and Roquain, E. Some nonasymptotic results on resampling in high dimension, i: Confidence regions. Annals of Statistics, 38(1):51-82, 2010.
Benjamini, Y. and Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1):289-300, 1995.
Benjamini, Y. and Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Annals of statistics, pp. 1165-1188, 2001.
Bernard, C., Jiang, X., and Wang, R. Risk aggregation with dependence uncertainty. Insurance: Mathematics and Economics, 54:93-108, 2014.
Blanchard, G. and Roquain, E. Two simple sufficient conditions for fdr control. Electronic Journal of Statistics, 2: 963-992, 2008.
Cao, H., Chen, J., and Zhang, X. Optimal false discovery rate control for large scale multiple testing with auxiliary information. Annals of statistics, 50(2):807, 2022.
Casella, G. and Berger, R. L. Statistical inference. Cengage Learning, 2002.
Chen, Y. and Liu, W. A theory of transfer-based black-box attacks: Explanation and implications. In NeurIPS, 2023.
Delattre, S. and Roquain, E. New procedures controlling the false discovery proportion via romano-wolf's heuristic. Annals of Statistics, 43(3):1141-1177, 2015.
Deng, L. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process. Mag., 29(6):141-142, 2012.

Djurisic, A., Bozanic, N., Ashok, A., and Liu, R. Extremely simple activation shaping for out-of-distribution detection. In ICLR, 2023.
Frolova, D., Vasiluik, A., Belyaev, M., and Shirokikh, B. Solving sample-level out-of-distribution detection on 3d medical images. arXiv preprint arXiv:2212.06506, 2022.
Gong, X., Yuan, D., and Bao, W. Understanding partial multi-label learning via mutual information. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), NeurIPS, pp. 4147-4156, 2021.
Gong, X., Yuan, D., and Bao, W. Partial label learning via label influence function. In ICML, volume 162, pp. 7665-7678, 2022.
Gong, X., Yuan, D., and Bao, W. Discriminative metric learning for partial label learning. IEEE Transactions on Neural Networks and Learning Systems, 34(8):4428-4439, 2023a.
Gong, X., Yuan, D., Bao, W., and Luo, F. A unifying probabilistic framework for partially labeled data learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7):8036-8048, 2023b.
Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.
Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., Steinhardt, J., and Song, D. Scaling out-of-distribution detection for real-world settings. In ICML, volume 162, pp. 8759-8773, 2022.
Kaur, R., Jha, S., Roy, A., Park, S., Dobriban, E., Sokolsky, O., and Lee, I. idecode: In-distribution equivariance for conformal out-of-distribution detection. In AAAI, pp. 7104-7114, 2022.
Kolmogorov, A. N. and Castelnuovo, G. Sur la notion de la moyenne. G. Bardi, tip. della R. Accad. dei Lincei, 1930.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. Commun. ACM, 60(6):84-90, 2017.
Li, K., Chen, K., Wang, H., Hong, L., Ye, C., Han, J., Chen, Y., Zhang, W., Xu, C., Yeung, D., Liang, X., Li, Z., and Xu, H. CODA: A real-world road corner case dataset for object detection in autonomous driving. In ECCV, volume 13698, pp. 406-423, 2022.

Liang, S., Li, Y., and Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018.
Liu, W., Shen, X., Du, B., Tsang, I. W., Zhang, W., and Lin, X. Hyperspectral imagery classification via stochastic hhsvms. IEEE Transactions on Image Processing, 28(2): 577-588, 2019.
Liu, W., Wang, X., Owens, J. D., and Li, Y. Energy-based out-of-distribution detection. In NeurIPS, 2020.
Liu, X., Lochman, Y., and Zach, C. GEN: pushing the limits of softmax-based out-of-distribution detection. In CVPR, pp. 23946-23955, 2023.
Lu, H., Gong, D., Wang, S., Xue, J., Yao, L., and Moore, K. Learning with mixture of prototypes for out-of-distribution detection. In ICLR, 2024.
Ma, X., Wang, Z., and Liu, W. On the tradeoff between robustness and fairness. In NeurIPS, 2022.
Ma, X., Zou, X., and Liu, W. A provable decision rule for out-of-distribution detection. In ICML, 2024.
Ma, X., Wu, J., and Liu, W. SAC-BL: A hypothesis testing framework for unsupervised visual anomaly detection and location. Neural Networks, 185:107147, 2025.
Meng, X.-L. Posterior predictive $p$ -values. The annals of statistics, 22(3):1142-1160, 1994.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. 2011.
Regmi, S., Panthi, B., Dotel, S., Gyawali, P. K., Stoyanov, D., and Bhattarai, B. T2fnorm: Train-time feature normalization for OOD detection in image classification. In CVPR, pp. 153-162, 2024.
Ruschendorf, L. Random variables with maximum sums. Advances in Applied Probability, 14(3):623-632, 1982.
Sastry, C. S. and Oore, S. Detecting out-of-distribution examples with gram matrices. In ICML, volume 119, pp. 8491-8501, 2020.
Sun, Y., Ming, Y., Zhu, X., and Li, Y. Out-of-distribution detection with deep nearest neighbors. In ICML, volume 162, pp. 20827-20840, 2022.
Wang, B. and Wang, R. Joint mixability. Mathematics of Operations Research, 41(3):808-826, 2016.
Wei, H., Xie, R., Cheng, H., Feng, L., An, B., and Li, Y. Mitigating neural network overconfidence with logit normalization. In ICML, volume 162, pp. 23631-23644, 2022.

Xiao, H., Rasul, K., and Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Xu, J. and Liu, W. On robust multiclass learnability. In NeurIPS, 2022.
Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., Du, X., Zhou, K., Zhang, W., Hendrycks, D., Li, Y., and Liu, Z. Openood: Benchmarking generalized out-of-distribution detection. In NeurIPS, 2022.
Yu, C., Ma, X., and Liu, W. Delving into noisy label detection with clean data. In ICML, volume 202, pp. 40290-40305, 2023.
Zhang, J., Yang, J., Wang, P., Wang, H., Lin, Y., Zhang, H., Sun, Y., Du, X., Zhou, K., Zhang, W., Li, Y., Liu, Z., Chen, Y., and Li, H. Openood v1.5: Enhanced benchmark for out-of-distribution detection. CoRR, abs/2306.09301, 2023a.
Zhang, S., Zhou, C., Zhang, P., Liu, Y., Li, Z., and Chen, H. Multiple hypothesis testing for anomaly detection in multi-type event sequences. In ICDM, pp. 808-817, 2023b.
Zhang, S., Zhou, C., Liu, Y., Zhang, P., Lin, X., and Pan, S. Conformal anomaly detection in event sequences. In ICML, 2025.
Zhou, B., Lapedriza, À., Khosla, A., Oliva, A., and Torralba, A. Places: A 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell., 40(6): 1452-1464, 2018.
Zou, X. and Liu, W. On the adversarial robustness of out-of-distribution generalization models. In NeurIPS, 2023a.
Zou, X. and Liu, W. Generalization bounds for adversarial contrastive learning. Journal of Machine Learning Research, 24:114:1-114:54, 2023b.

A. Proofs

A.1. Proof of Theorem 4.1

Proof. To obtain an explicit analytical solution, we assume that the testing data arrives sequentially in a stream. Then, given a testing set $\mathcal{T}^{test} = \{X_1^{test}, X_2^{test}, \ldots, X_n^{test}\}$ consisting of ID data (TPR only concerns the detection performance on ID data), the expectation of TPR conditional on the calibrated set $\mathcal{T}^{cal}$ for the g-BH algorithm can be expressed as

E(TPRTcal)=E(1ni=1n1(f(p(Xitest))>α)Tcal) \mathbb {E} (\operatorname {T P R} | \mathcal {T} ^ {c a l}) = \mathbb {E} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {1} (f (p (X _ {i} ^ {t e s t})) > \alpha) | \mathcal {T} ^ {c a l}\right)

Since the ID data $X_{1}^{test}, X_{2}^{test}, \ldots, X_{n}^{test}$ are independent and identically distributed, we have

\begin{array}{l} \mathbb{E}(\operatorname{TPR} \mid \mathcal{T}^{cal}) = \mathbb{E}\left(\mathbb{1}(f(p(X_1^{test})) > \alpha) \mid \mathcal{T}^{cal}\right) \\ = \mathbb{P}\left(f(p(X_1^{test})) > \alpha \mid \mathcal{T}^{cal}\right). \end{array}

Note that the empirical p-value

\hat{p}(X_1^{test}) = \frac{\sum_{j=1}^{m} \mathbb{1}(s(X_j^{cal}) \leq s(X_1^{test})) + 1}{m + 1}.
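As a quick numerical illustration, the empirical p-value above can be computed as follows (the helper name is ours):

```python
import numpy as np

def empirical_pvalue(test_score, cal_scores):
    """(#{j : s(X_j^cal) <= s(X^test)} + 1) / (m + 1), as in the displayed equation."""
    cal_scores = np.asarray(cal_scores, dtype=float)
    return (np.count_nonzero(cal_scores <= test_score) + 1) / (len(cal_scores) + 1)
```

With calibration scores $[1, 2, 3, 4]$ and a test score of $2.5$, this returns $3/5 = 0.6$.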

Since $f'(\cdot) \in \mathcal{F}_1 \cup \mathcal{F}_2$ , $f'(\cdot) > 0$ and thus $f(\cdot)$ is increasing. Denote by $f^{-1}(\cdot)$ the inverse function of $f(\cdot)$ . Then, we obtain

\begin{array}{l} \mathbb{E}(\operatorname{TPR} \mid \mathcal{T}^{cal}) = \mathbb{P}\left(f(p(X_1^{test})) > \alpha \mid \mathcal{T}^{cal}\right) \\ = \mathbb{P}\left(\frac{\sum_{j=1}^{m} \mathbb{1}(s(X_j^{cal}) \leq s(X_1^{test})) + 1}{m+1} > f^{-1}(\alpha) \mid \mathcal{T}^{cal}\right) \\ = \mathbb{P}\left(\sum_{j=1}^{m} \mathbb{1}(s(X_j^{cal}) \leq s(X_1^{test})) > f^{-1}(\alpha)(m+1) - 1 \mid \mathcal{T}^{cal}\right) \\ = \mathbb{P}\left(\frac{\sum_{j=1}^{m} \mathbb{1}(s(X_j^{cal}) \leq s(X_1^{test}))}{m} > \frac{f^{-1}(\alpha)(m+1) - 1}{m} \mid \mathcal{T}^{cal}\right) \\ = \mathbb{P}\left(\hat{F}(s(X_1^{test})) > \frac{f^{-1}(\alpha)(m+1) - 1}{m} \mid \mathcal{T}^{cal}\right) \end{array}

Without loss of generality, we suppose that the random variable $s(X_1^{test})$ follows a continuous distribution. Denote by $F(\cdot)$ the true cumulative distribution function of $s(X_1^{test})$ and by $\hat{F}^{-1}(\cdot)$ the inverse function of $\hat{F}(\cdot)$. In addition, we denote by $[\cdot]$ the floor function. Then, we get

\begin{array}{l} \mathbb{E}(\operatorname{TPR} \mid \mathcal{T}^{cal}) = 1 - \mathbb{P}\left(\hat{F}(s(X_1^{test})) \leq \frac{f^{-1}(\alpha)(m+1) - 1}{m} \mid \mathcal{T}^{cal}\right) \\ = 1 - \mathbb{P}\left(\hat{F}(s(X_1^{test})) \leq \frac{[f^{-1}(\alpha)(m+1)] - 1}{m} \mid \mathcal{T}^{cal}\right) \\ = 1 - \mathbb{P}\left(s(X_1^{test}) \leq \hat{F}^{-1}\left(\frac{[f^{-1}(\alpha)(m+1)] - 1}{m}\right) \mid \mathcal{T}^{cal}\right) \\ = 1 - F\left(\hat{F}^{-1}\left(\frac{[f^{-1}(\alpha)(m+1)] - 1}{m}\right)\right). \end{array}

For simplicity, we denote $\beta = [f^{-1}(\alpha)(m + 1)] - 1$ . Note that $\hat{F}(X_{(\beta)}^{cal}) = \frac{\beta}{m}$ . Therefore, we have

E(TPRTcal)=1F(X(β)cal). \mathbb {E} (\operatorname {T P R} | \mathcal {T} ^ {c a l}) = 1 - F (X _ {(\beta)} ^ {c a l}).

According to Eq. (5), $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) = 1 - p(X_{(\beta)}^{cal})$ .

To complete our proof, we need the following technical lemma.

Lemma A.1. Suppose the continuous random variable $X$ has the cumulative distribution function $F(\cdot)$. Then, the random variable $F(X)$ follows the uniform distribution on $(0, 1)$.

For the examples $X_{1}^{cal}, X_{2}^{cal}, \ldots, X_{m}^{cal}$, we denote $p(X_{i}^{cal}) = F(X_{i}^{cal})$, $i = 1, 2, \dots, m$, and let $p(X_{(\beta)}^{cal})$ be the $\beta$-th order statistic from the smallest to the largest. By Lemma A.1, $p(X_{i}^{cal})$ follows the uniform distribution on $(0, 1)$. Next, we aim to derive the probability density function of $p(X_{(\beta)}^{cal})$. According to the definition of $p(X_{i}^{cal})$, the random variables $p(X_{1}^{cal}), p(X_{2}^{cal}), \dots, p(X_{m}^{cal})$ are independent and identically distributed. Denote by $F_{p}(\cdot)$ the cumulative distribution function of $p(X_{i}^{cal})$. For any $x$ in the support of $F_{p}(\cdot)$ and a sufficiently small $\delta > 0$, we have

\begin{array}{l} \mathbb{P}\left(x \leq p(X_{(\beta)}^{cal}) < x + \delta\right) = \mathbb{P}\left(\text{one of the } p(X^{cal})\text{'s} \in [x, x+\delta) \text{ and } \beta-1 \text{ of the others} < x\right) \\ = \sum_{i=1}^{m} \mathbb{P}\left(p(X_i^{cal}) \in [x, x+\delta) \text{ and exactly } \beta-1 \text{ of the others} < x\right) \\ = m \mathbb{P}\left(p(X_1^{cal}) \in [x, x+\delta) \text{ and exactly } \beta-1 \text{ of the others} < x\right) \\ = m \mathbb{P}\left(p(X_1^{cal}) \in [x, x+\delta)\right) \mathbb{P}\left(\text{exactly } \beta-1 \text{ of the others} < x\right) \\ = m \mathbb{P}\left(p(X_1^{cal}) \in [x, x+\delta)\right) \binom{m-1}{\beta-1} \mathbb{P}\left(p(X_1^{cal}) < x\right)^{\beta-1} \mathbb{P}\left(p(X_1^{cal}) > x\right)^{m-\beta} \tag{8} \end{array}

Then, the probability density function of $p(X_{(\beta)}^{cal})$ is

\begin{array}{l} f_{\beta}(x) = \lim_{\delta \rightarrow 0} \frac{\mathbb{P}\left(x \leq p(X_{(\beta)}^{cal}) < x + \delta\right)}{\delta} \\ = m \binom{m-1}{\beta-1} F_p^{\beta-1}(x) \left(1 - F_p(x)\right)^{m-\beta} F_p'(x) \\ = \begin{cases} m \binom{m-1}{\beta-1} x^{\beta-1} (1-x)^{m-\beta} & \text{if } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases} \end{array}

Therefore, the probability density function of $\mathbb{E}(\mathrm{TPR}|\mathcal{T}^{cal}) = 1 - F(X_{(\beta)}^{cal})$ can be expressed as

\begin{array}{l} f_{\mathbb{E}(\operatorname{TPR} \mid \mathcal{T}^{cal})}(x) = f_{\beta}(1-x) \\ = \begin{cases} m \binom{m-1}{\beta-1} x^{m-\beta} (1-x)^{\beta-1} & \text{if } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases} \end{array}

The above result indicates that $\mathbb{E}(\operatorname{TPR} \mid \mathcal{T}^{cal})$ follows a beta distribution with shape parameters $m - \beta + 1$ and $\beta$, which completes the proof.

A.2. Technical Lemmas and Their Proofs

Lemma A.2. Suppose that the function $g(\cdot)$ is continuous and monotonically increasing on [0, 1]. Then, for any $\alpha \in (0, 1)$ , we have

0αg(x)dxα01g(x)dx. \int_ {0} ^ {\alpha} g (x) d x \leq \alpha \int_ {0} ^ {1} g (x) d x.

Proof. Since the function $g(\cdot)$ is increasing on $[0,1]$, $g(x)$ is integrable, namely, $\int_0^1 g(x)\,dx < \infty$. Note that

α01g(x)dx=α0αg(x)dx+αα1g(x)dx \alpha \int_ {0} ^ {1} g (x) d x = \alpha \int_ {0} ^ {\alpha} g (x) d x + \alpha \int_ {\alpha} ^ {1} g (x) d x

then, we have

0αg(x)dxα01g(x)dx=0αg(x)dxα0αg(x)dxαα1g(x)dx=(1α)0αg(x)dxαα1g(x)dx. \begin{array}{l} \int_ {0} ^ {\alpha} g (x) d x - \alpha \int_ {0} ^ {1} g (x) d x = \int_ {0} ^ {\alpha} g (x) d x - \alpha \int_ {0} ^ {\alpha} g (x) d x - \alpha \int_ {\alpha} ^ {1} g (x) d x \\ = (1 - \alpha) \int_ {0} ^ {\alpha} g (x) d x - \alpha \int_ {\alpha} ^ {1} g (x) d x. \\ \end{array}

By the first mean value theorem for integration, there exist $\xi_1 \in (0, \alpha)$ and $\xi_2 \in (\alpha, 1)$ such that

0αg(x)dx=αg(ξ1),α1g(x)dx=(1α)g(ξ2). \int_ {0} ^ {\alpha} g (x) d x = \alpha g (\xi_ {1}), \qquad \int_ {\alpha} ^ {1} g (x) d x = (1 - \alpha) g (\xi_ {2}).

Obviously, $\xi_1 \leq \xi_2$. Since $g(x)$ is increasing, $g(\xi_1) \leq g(\xi_2)$. Therefore, we have

0αg(x)dxα01g(x)dx=α(1α)(g(ξ1)g(ξ2))0, \int_ {0} ^ {\alpha} g (x) d x - \alpha \int_ {0} ^ {1} g (x) d x = \alpha (1 - \alpha) \left(g \left(\xi_ {1}\right) - g \left(\xi_ {2}\right)\right) \leq 0,

namely, for any $\alpha \in (0,1)$

0αg(x)dxα01g(x)dx. \int_ {0} ^ {\alpha} g (x) d x \leq \alpha \int_ {0} ^ {1} g (x) d x.

Based on Lemma A.2, we have the following lemma.

Lemma A.3. Suppose that the function $g(\cdot)$ is continuous and monotonically increasing on $[0, 1]$ . Then, for any $\alpha \in (0, 1)$ , we have

P(Ω(p,w)g1(1α0αg(x)dx))α. \mathbb {P} \left(\Omega (\mathbf {p}, \mathbf {w}) \leq g ^ {- 1} \left(\frac {1}{\alpha} \int_ {0} ^ {\alpha} g (x) d x\right)\right) \leq \alpha .

Proof. Denote $Y_{i} = g(p_{i})$, where $p_i \in \mathcal{P}$. Without loss of generality, we assume that the p-values $p_1, \dots, p_L$ are exact. Note that for any $t \in (0, 1)$, we have

P(Yit)=P(g(pi)t)=P(pig1(t))=g1(t). \mathbb {P} \left(Y _ {i} \leq t\right) = \mathbb {P} \left(g \left(p _ {i}\right) \leq t\right) = \mathbb {P} \left(p _ {i} \leq g ^ {- 1} (t)\right) = g ^ {- 1} (t).

Therefore, $g^{-1}(\cdot)$ is the cumulative distribution function of $Y_{i}$ . Based on the theoretical results in Bernard et al. (2014), we obtain

Q(h,p,α)=infpiP{Q(Ω(p,w),1)} Q ^ {*} (h, \mathbf {p}, \alpha) = \inf _ {p _ {i} \in \mathcal {P}} \left\{Q (\Omega (\mathbf {p}, \mathbf {w}), 1) \right\}

where $h(p_1,\dots ,p_L) = \Omega (\mathbf{p},\mathbf{w})$ and

\Omega(\mathbf{p}, \mathbf{w}) = g^{-1}\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L)\right)

Note that $Q\left(w_{1}g(p_{1}) + w_{2}g(p_{2}) + \dots + w_{L}g(p_{L}), 1\right)$ is the essential supremum of $w_{1}g(p_{1}) + w_{2}g(p_{2}) + \dots + w_{L}g(p_{L})$; thus, the following relation holds:

Q\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L), 1\right) \geq \mathbb{E}\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L)\right).

Since $g(p_{1}), g(p_{2}), \dots, g(p_{L})$ are identically distributed sharing the cumulative distribution function $g^{-1}(\cdot)$ , we have

\begin{array}{l} \mathbb{E}\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L)\right) = w_1 \mathbb{E}(g(p_1)) + w_2 \mathbb{E}(g(p_2)) + \dots + w_L \mathbb{E}(g(p_L)) \\ = (w_1 + w_2 + \dots + w_L) \mathbb{E}(g(p_1)) \\ = \mathbb{E}(g(p_1)) = \int_0^1 g(x) \, dx \end{array}

By Lemma A.2, we get

Q\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L), 1\right) \geq \frac{1}{\alpha} \int_0^{\alpha} g(x) \, dx

Because $g(\cdot)$ is continuous and increasing, $g^{-1}(\cdot)$ is also continuous and increasing. Hence, for any $p_i \in \mathcal{P}$ we have

Q\left(g^{-1}\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L)\right), 1\right) \geq g^{-1}\left(\frac{1}{\alpha} \int_0^{\alpha} g(x) \, dx\right),

namely, $g^{-1}\left(\frac{1}{\alpha}\int_{0}^{\alpha}g(x)\,dx\right)$ is a lower bound of $Q\left(g^{-1}\left(w_1g(p_1) + w_2g(p_2) + \dots + w_Lg(p_L)\right), 1\right)$. Then, we get

\begin{array}{l} Q^{*}(h, \mathbf{p}, \alpha) = \inf_{p_i \in \mathcal{P}} \left\{ Q\left(g^{-1}\left(w_1 g(p_1) + w_2 g(p_2) + \dots + w_L g(p_L)\right), 1\right) \right\} \\ \geq g^{-1}\left(\frac{1}{\alpha} \int_0^{\alpha} g(x) \, dx\right). \end{array}

According to the definition of $\alpha$ -quantile, we obtain

P(Ω(p,w)g1(1α0αg(x)dx))P(Ω(p,w)Q(Ω(p,w),α))α. \mathbb {P} \left(\Omega (\mathbf {p}, \mathbf {w}) \leq g ^ {- 1} \left(\frac {1}{\alpha} \int_ {0} ^ {\alpha} g (x) d x\right)\right) \leq \mathbb {P} \left(\Omega (\mathbf {p}, \mathbf {w}) \leq Q (\Omega (\mathbf {p}, \mathbf {w}), \alpha)\right) \leq \alpha .

Lemma A.3 provides a significance region for the level $\alpha$. By Lemma A.3, we can choose an appropriate function $g(\cdot)$ to integrate various p-values.

A.3. Proof of Theorem 5.1

Proof. When $g(x) = x^{\kappa}$, $g^{-1}(x) = x^{\frac{1}{\kappa}}$. According to Lemma A.3, we have

g1(1α0αg(x)dx)=(1κ+1ακ)1κ=(κ+1)1κα. g ^ {- 1} \left(\frac {1}{\alpha} \int_ {0} ^ {\alpha} g (x) d x\right) = \left(\frac {1}{\kappa + 1} \alpha^ {\kappa}\right) ^ {\frac {1}{\kappa}} = (\kappa + 1) ^ {- \frac {1}{\kappa}} \alpha .

and

P(Ω(p,w)g1(1α0αg(x)dx))=P((w1p1κ+w2p2κ++wLpLκ)1κ(κ+1)1κα)=P((κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κα)α \begin{array}{l} \mathbb {P} \left(\Omega (\mathbf {p}, \mathbf {w}) \leq g ^ {- 1} \left(\frac {1}{\alpha} \int_ {0} ^ {\alpha} g (x) d x\right)\right) \\ = \mathbb {P} \left(\left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} \leq (\kappa + 1) ^ {- \frac {1}{\kappa}} \alpha\right) \\ = \mathbb {P} \left(\left(\kappa + 1\right) ^ {\frac {1}{\kappa}} \left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} \leq \alpha\right) \leq \alpha \\ \end{array}

Therefore,

(κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κ (\kappa + 1) ^ {\frac {1}{\kappa}} \left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}}

is a valid p-value. Specifically, when $\kappa = 1$ and $w_{i} = \frac{1}{L}$

(κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κ=2(p1+p2++pL)L. (\kappa + 1) ^ {\frac {1}{\kappa}} \left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} = \frac {2 \left(p _ {1} + p _ {2} + \cdots + p _ {L}\right)}{L}.

Denote $p_{k^*} = \max\{p_1, p_2, \dots, p_L\}$ and let $w_{k^*}$ be the corresponding weight. Note that

(κ+1)1κ(wkpkκ)1κ(κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κ(κ+1)1κ(pkκ)1κ \left(\kappa + 1\right) ^ {\frac {1}{\kappa}} \left(w _ {k ^ {*}} p _ {k ^ {*}} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} \leq \left(\kappa + 1\right) ^ {\frac {1}{\kappa}} \left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} \leq \left(\kappa + 1\right) ^ {\frac {1}{\kappa}} \left(p _ {k ^ {*}} ^ {\kappa}\right) ^ {\frac {1}{\kappa}}

further,

limκ(κ+1)1κ(wkpkκ)1κ=limκ(κ+1)1κ(pkκ)1κ=pk \lim _ {\kappa \rightarrow \infty} (\kappa + 1) ^ {\frac {1}{\kappa}} \left(w _ {k ^ {*}} p _ {k ^ {*}} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} = \lim _ {\kappa \rightarrow \infty} (\kappa + 1) ^ {\frac {1}{\kappa}} \left(p _ {k ^ {*}} ^ {\kappa}\right) ^ {\frac {1}{\kappa}} = p _ {k ^ {*}}

Hence, when $\kappa \to \infty$ , we have

limκ(κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κ=max{p1,p2,,pL}. \lim _ {\kappa \to \infty} (\kappa + 1) ^ {\frac {1}{\kappa}} (w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}) ^ {\frac {1}{\kappa}} = \max \{p _ {1}, p _ {2}, \dots , p _ {L} \}.

A.4. Proof of Theorem 5.2

Proof. According to the proof of Lemma A.3, if $g(x) = x^{\kappa}$, we have

Q(h,p,α)=infpiP{Q(Ω(p,w),1)}=infpiP{Q((w1p1κ+w2p2κ++wLpLκ)1κ,1)} Q ^ {*} (h, \mathbf {p}, \alpha) = \inf _ {p _ {i} \in \mathcal {P}} \left\{Q (\Omega (\mathbf {p}, \mathbf {w}), 1) \right\} = \inf _ {p _ {i} \in \mathcal {P}} \left\{Q ((w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}) ^ {\frac {1}{\kappa}}, 1) \right\}

Then,

(Q(h,p,α))κ=infpiP{Q((w1p1κ+w2p2κ++wLpLκ),1)}ακκ+1. (Q ^ {*} (h, \mathbf {p}, \alpha)) ^ {\kappa} = \inf _ {p _ {i} \in \mathcal {P}} \left\{Q \left(\left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right), 1\right) \right\} \geq \frac {\alpha^ {\kappa}}{\kappa + 1}.

Note that since $\kappa > 0$, the probability density function of $g(p_i)$ is monotone on its support. By Wang & Wang (2016),

(Q(h,p,α))κ=infpiP{Q((w1p1κ+w2p2κ++wLpLκ),1)}=ακκ+1. \big (Q ^ {*} (h, \mathbf {p}, \alpha) \big) ^ {\kappa} = \inf _ {p _ {i} \in \mathcal {P}} \big \{Q ((w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}), 1) \big \} = \frac {\alpha^ {\kappa}}{\kappa + 1}.

if and only if the "mean condition" is satisfied:

wακακκ+1(w1+w2++wL)ακwακ=(1w)ακ. w ^ {*} \alpha^ {\kappa} \leq \frac {\alpha^ {\kappa}}{\kappa + 1} \leq (w _ {1} + w _ {2} + \dots + w _ {L}) \alpha^ {\kappa} - w ^ {*} \alpha^ {\kappa} = (1 - w ^ {*}) \alpha^ {\kappa}.

Equivalently, $w^{*} \leq \min\left\{\frac{1}{2}, \frac{1}{1+\kappa}, \frac{\kappa}{\kappa+1}\right\}$. Further, we have

Q(h~,p,α)=infpiP{Q((κ+1)1κ(w1p1κ+w2p2κ++wLpLκ)1κ,1)}=α,(9) Q ^ {*} (\tilde {h}, \mathbf {p}, \alpha) = \inf _ {p _ {i} \in \mathcal {P}} \left\{Q \left(\left(\kappa + 1\right) ^ {\frac {1}{\kappa}} \left(w _ {1} p _ {1} ^ {\kappa} + w _ {2} p _ {2} ^ {\kappa} + \dots + w _ {L} p _ {L} ^ {\kappa}\right) ^ {\frac {1}{\kappa}}, 1\right) \right\} = \alpha , \tag {9}

where $\tilde{h} (\mathbf{p}) = (\kappa +1)^{\frac{1}{\kappa}}(w_1p_1^\kappa +w_2p_2^\kappa +\dots +w_Lp_L^\kappa)^{\frac{1}{\kappa}}$ . Next, based on the condition in Eq. (9), we aim to demonstrate

suppiP{P(h~(p)α)}=α. \sup _ {p _ {i} \in \mathcal {P}} \left\{\mathbb {P} \left(\tilde {h} (\mathbf {p}) \leq \alpha\right) \right\} = \alpha .

If $Q^{*}(\tilde{h}, \mathbf{p}, \alpha) = \alpha$ , then for any $\alpha \in (0, 1)$ and arbitrary p-values $p_1, p_2, \dots, p_L$ with $p_i \in \mathcal{P}$ , we have $Q(\tilde{h}(\mathbf{p}), \alpha) \geq \alpha$ according to the definition of $Q^{*}(\tilde{h}, \mathbf{p}, \alpha)$ . By the definition of the $\alpha$ -quantile, $\mathbb{P}(\tilde{h}(\mathbf{p}) < \alpha) \leq \alpha$ . It follows that for any $\delta > 0$ ,

P(h~(p)α)P(h~(p)<α+δ)α+δ, \mathbb {P} \left(\tilde {h} (\mathbf {p}) \leq \alpha\right) \leq \mathbb {P} \left(\tilde {h} (\mathbf {p}) < \alpha + \delta\right) \leq \alpha + \delta ,

Since $\delta$ is arbitrary, we have

P(h~(p)α)α. \mathbb {P} \left(\tilde {h} (\mathbf {p}) \leq \alpha\right) \leq \alpha .

On the other hand, according to the definition of the infimum, for any $\delta \in (0,1)$ , there exist p-values $p_1^*,\dots ,p_L^* \in \mathcal{P}$ such that $\alpha \leq Q(\tilde{h} (\mathbf{p}^{*}),\alpha) < \alpha +\delta$ , and thus $\mathbb{P}(\tilde{h} (\mathbf{p}^{*})\leq \alpha +\delta)\geq \alpha$ , where $\mathbf{p}^{*} = \{p_{1}^{*},\dots ,p_{L}^{*}\}$ . Since $\delta$ is arbitrary, we have

suppiP{P(h~(p)α)}=α. \sup _ {p _ {i} \in \mathcal {P}} \left\{\mathbb {P} \left(\tilde {h} (\mathbf {p}) \leq \alpha\right) \right\} = \alpha .
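The supremum above is taken over all dependence structures in $\mathcal{P}$, which a simulation can only probe pointwise. Still, a small Monte Carlo sketch (illustrative only; the weights, level, and dependence structures below are made up) can check validity, i.e. that $\mathbb{P}(\tilde{h}(\mathbf{p}) \leq \alpha) \leq \alpha$ for particular members of $\mathcal{P}$ such as independent and comonotone uniform p-values:

```python
import random

def h_tilde(p, w, kappa):
    """Merged p-value (kappa+1)^(1/kappa) * (sum_i w_i p_i^kappa)^(1/kappa)."""
    s = sum(wi * pi ** kappa for wi, pi in zip(w, p))
    return (kappa + 1) ** (1 / kappa) * s ** (1 / kappa)

def rejection_rate(alpha, kappa, w, n_trials=100_000, comonotone=False, seed=0):
    """Estimate P(h_tilde(p) <= alpha) under a chosen dependence structure."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        if comonotone:
            u = rng.random()
            p = [u] * len(w)               # p_1 = ... = p_L (comonotone case)
        else:
            p = [rng.random() for _ in w]  # independent Uniform(0,1) p-values
        if h_tilde(p, w, kappa) <= alpha:
            hits += 1
    return hits / n_trials

alpha, kappa, w = 0.1, 2.0, [0.25, 0.25, 0.5]
print(rejection_rate(alpha, kappa, w))                   # well below alpha
print(rejection_rate(alpha, kappa, w, comonotone=True))  # ~ alpha/(kappa+1)^(1/kappa)
```

Neither of these two structures attains the supremum $\alpha$; the theorem asserts only that the worst case over all of $\mathcal{P}$ does.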

B. Additional Experimental Results

In this section, we present additional experimental results. The results with MNIST as OOD data are presented in Table 7, which supports the same conclusions as the tables in the main text.

Table 7. Experimental results (%) of practical metrics with CIFAR-10 as ID data and MNIST as OOD data. Energy and MSP are used as the score functions. We compare the detection performance of the g-BH algorithm with different sizes of calibrated set.

Columns 2–4 report Energy with ResNet18, columns 5–7 Energy with WideResNet, columns 8–10 MSP with ResNet18, and columns 11–13 MSP with WideResNet; each group lists F1 / TPR / FPR.

| Ratio | F1 | TPR | FPR | F1 | TPR | FPR | F1 | TPR | FPR | F1 | TPR | FPR |
|-------|------|------|------|------|------|------|------|------|------|------|------|------|
| 0.2 | 63.48 | 47.94 | 4.94 | 63.16 | 45.87 | 1.98 | 62.87 | 48.25 | 5.25 | 62.46 | 46.45 | 2.29 |
| 0.3 | 65.15 | 50.15 | 5.78 | 65.09 | 49.12 | 2.35 | 64.23 | 50.10 | 5.90 | 64.95 | 49.40 | 2.72 |
| 0.4 | 67.57 | 52.95 | 6.95 | 66.97 | 53.99 | 3.04 | 66.67 | 53.69 | 6.37 | 67.95 | 53.13 | 3.25 |
| 0.5 | 69.53 | 57.39 | 7.27 | 70.15 | 56.75 | 3.59 | 66.67 | 54.68 | 7.59 | 69.76 | 55.61 | 3.82 |
| 0.6 | 72.94 | 59.82 | 8.54 | 71.59 | 59.44 | 4.25 | 69.91 | 58.59 | 9.02 | 70.88 | 57.20 | 4.19 |
| 0.7 | 73.82 | 62.29 | 10.11 | 75.53 | 63.17 | 6.11 | 71.19 | 60.79 | 10.00 | 72.91 | 60.20 | 4.94 |
| 0.8 | 74.75 | 65.87 | 13.68 | 77.46 | 66.57 | 6.79 | 71.82 | 62.19 | 11.00 | 74.83 | 63.37 | 6.00 |
| 0.9 | 75.59 | 68.57 | 15.35 | 79.85 | 69.44 | 8.26 | 73.52 | 66.55 | 14.49 | 78.13 | 69.69 | 8.70 |
| 1.0 | 76.81 | 75.49 | 19.54 | 81.74 | 73.39 | 10.79 | 75.59 | 81.51 | 34.14 | 79.15 | 72.47 | 10.65 |
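For reference, the practical metrics reported in Table 7 can be computed from binary detection decisions as follows (a minimal sketch assuming the standard definitions, with OOD treated as the positive class; the function name and toy labels are illustrative, not from the paper's code):

```python
def detection_metrics(y_true, y_pred):
    """Return (F1, TPR, FPR) treating OOD as the positive class.

    y_true[i] = 1 if sample i is OOD, 0 if ID; y_pred is the detector's decision.
    """
    tp = sum(1 for t, d in zip(y_true, y_pred) if t == 1 and d == 1)
    fp = sum(1 for t, d in zip(y_true, y_pred) if t == 0 and d == 1)
    fn = sum(1 for t, d in zip(y_true, y_pred) if t == 1 and d == 0)
    tn = sum(1 for t, d in zip(y_true, y_pred) if t == 0 and d == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0          # recall on OOD samples
    fpr = fp / (fp + tn) if fp + tn else 0.0          # ID samples wrongly flagged
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * tpr / (precision + tpr) if precision + tpr else 0.0
    return f1, tpr, fpr

# Toy example: 3 OOD and 3 ID samples with one miss and one false alarm.
f1, tpr, fpr = detection_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Multiplying each value by 100 gives the percentage figures used in the table.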