diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_content_list.json b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e338f7d6c9a5468b3d27a7f5bf360ea730d36b6c
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b9742c693d20814ee1aa063ef17c1280dbcc90de865d53186f054158b1ecd61
+size 174666
diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_model.json b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..03161fea99fdf1022dd3856a9e22d858e8fae621
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d220fcd0e497e29067411e72a8ae8cdb1570c1102c7662de960bc0ae672b9799
+size 224570
diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_origin.pdf b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3e41fde12853497935769c285509853f5502c572
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da6a0e05e147f8cae1c75d219ea28c37cdd13010801940a7421205efcecd3ffc
+size 1177795
diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/full.md b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d389a7f1fb12cd81a9a2d22e93c161d3ef82ac4
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/full.md
@@ -0,0 +1,941 @@
+# $O(\sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization
+
+Rahul Vaze*
+
+School of Technology and Computer Science
+
+Tata Institute of Fundamental Research, Mumbai
+
+rahul.vaze@gmail.com
+
+Abhishek Sinha
+
+School of Technology and Computer Science
+
+Tata Institute of Fundamental Research, Mumbai
+
+abhishek.sinha@tifr.res.in
+
+# Abstract
+
+The constrained version of the standard online convex optimization (OCO) framework, called COCO, is considered, where on every round, a convex cost function and a convex constraint function are revealed to the learner after it chooses the action for that round. The objective is to simultaneously minimize the static regret and the cumulative constraint violation (CCV). An algorithm is proposed that guarantees a static regret of $O(\sqrt{T})$ and a CCV of $\min \{ \mathcal{V}, O(\sqrt{T} \log T) \}$, where $\mathcal{V}$ depends on the distance between the consecutively revealed constraint sets, the shape of the constraint sets, the dimension of the action space, and the diameter of the action space. When the constraint sets have additional structure, $\mathcal{V} = O(1)$. Compared to the state-of-the-art guarantees of $O(\sqrt{T})$ static regret and $O(\sqrt{T} \log T)$ CCV, which are universal, the new bound on the CCV is instance dependent and is derived by exploiting the geometric properties of the constraint sets.
+
+# 1 Introduction
+
+In this paper, we consider the constrained version of the standard online convex optimization (OCO) framework, called constrained OCO or COCO. In COCO, on every round $t$ , the online algorithm first chooses an admissible action $x_{t} \in \mathcal{X} \subset \mathbb{R}^{d}$ , and then the adversary chooses a convex loss/cost function $f_{t}: \mathcal{X} \to \mathbb{R}$ and a constraint function of the form $g_{t}(x) \leq 0$ , where $g_{t}: \mathcal{X} \to \mathbb{R}$ is a convex function. Since $g_{t}$ 's are revealed after the action $x_{t}$ is chosen, an online algorithm need not necessarily take feasible actions on each round, and in addition to the static regret
+
+$$
+\operatorname{Regret}_{[1:T]} \equiv \sup_{\{f_t\}_{t=1}^{T}} \sup_{x^{\star} \in \mathcal{X}} \operatorname{Regret}_{T}(x^{\star}), \quad \text{where} \quad \operatorname{Regret}_{T}(x^{\star}) \equiv \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x^{\star}), \tag{1}
+$$
+
+an additional metric of interest is the total cumulative constraint violation (CCV) defined as $\mathrm{CCV}_{[1:T]} \equiv \sum_{t=1}^{T} \max(g_t(x_t), 0)$ . Let $\mathcal{X}^{\star}$ be the feasible set consisting of all admissible actions that satisfy all constraints $g_t(x) \leq 0, t \in [T]$ . Under the standard assumption that $\mathcal{X}^{\star}$ is not
+
+empty (called the feasibility assumption), the goal is to design an online algorithm to simultaneously achieve a small regret (1) with respect to any admissible benchmark $x^{\star} \in \mathcal{X}^{\star}$ and a small CCV.
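The two metrics above can be computed directly from a played trajectory. The following minimal sketch (the helper name and the callable-based interface are ours, not the paper's) evaluates $\operatorname{Regret}_T(x^\star)$ from (1) and the CCV for a given sequence of cost and constraint functions:

```python
def regret_and_ccv(actions, fs, gs, x_star):
    """Regret_T(x*) from (1) and CCV_{[1:T]} = sum_t max(g_t(x_t), 0).

    actions[t] is the action played on round t+1; fs[t] and gs[t] are the
    cost and constraint functions revealed on that round (plain callables).
    """
    regret = sum(f(x) - f(x_star) for f, x in zip(fs, actions))
    ccv = sum(max(g(x), 0.0) for g, x in zip(gs, actions))
    return regret, ccv
```

For instance, with $f_t(x) = (x-1)^2$, $g_t(x) = x - 0.5$, and benchmark $x^\star = 0.5$, the helper returns both quantities for any trajectory in one pass.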
+
+With the constraint sets $\mathcal{G}_t = \{x\in \mathcal{X}:g_t(x)\leq 0\}$ being convex for all $t$, the assumption $\mathcal{X}^{\star} = \cap_{t}\mathcal{G}_{t}\neq \varnothing$ implies that the sets $S_{t} = \cap_{\tau = 1}^{t}\mathcal{G}_{\tau}$ are convex and nested, i.e., $S_{t}\subseteq S_{t - 1}$ and $\mathcal{X}^{\star}\subseteq S_{t}$ for all $t$. Essentially, the sets $S_{t}$ are sufficient to quantify the CCV.
+
+# 1.1 Prior Work
+
+Constrained OCO (COCO): (A) Time-invariant constraints: COCO with time-invariant constraints, i.e., $g_{t} = g, \forall t$ [Yuan and Lamperski, 2018, Jenatton et al., 2016, Mahdavi et al., 2012, Yi et al., 2021], has been considered extensively, where the function $g$ is assumed to be known to the algorithm a priori. The algorithm is allowed to take infeasible actions at any time, and the main objective is to design an efficient algorithm with small regret and CCV while avoiding the costly explicit projection step of the vanilla projected OGD algorithm.
+
+(B) Time-varying constraints: The more difficult question is solving the COCO problem when the constraint functions $g_{t}$ change arbitrarily with time $t$. In this setting, all prior work on COCO made the feasibility assumption. One popular approach for solving COCO considers a Lagrangian function that is optimized by updating primal and dual variables [Yu et al., 2017, Sun et al., 2017, Yi et al., 2023]. Alternatively, [Neely and Yu, 2017] and [Liakopoulos et al., 2019] used the drift-plus-penalty (DPP) framework [Neely, 2010] to solve COCO, but required additional assumptions: Slater's condition in [Neely and Yu, 2017], and a weaker form of the feasibility assumption in [Liakopoulos et al., 2019].
+
+[Guo et al., 2022] obtained bounds similar to [Neely and Yu, 2017] but without assuming Slater's condition. However, the algorithm of [Guo et al., 2022] is quite computationally intensive, since it requires solving a convex optimization problem on each round. Finally, very recently, the state-of-the-art guarantees for COCO of $O(\sqrt{T})$ regret and $O(\sqrt{T}\log T)$ CCV were derived in [Sinha and Vaze, 2024] with a very simple algorithm that combines the loss function at time $t$ and the CCV accrued till time $t$ into a single loss function, and then executes the online gradient descent (OGD) algorithm on this single loss function with an adaptive step-size. An extension of [Sinha and Vaze, 2024] can be found in [Lekeufack and Jordan, 2025], which considers the COCO problem under predictions about the $f_{t}$'s and $g_{t}$'s. See Remark 6 for a comparison of this work with [Lekeufack and Jordan, 2025]. Please refer to Table 1 for a brief summary of the prior results.
+
+The COCO problem has been considered in the dynamic setting as well [Chen and Giannakis, 2018, Cao and Liu, 2018, Vaze, 2022, Liu et al., 2022] where the benchmark $x^{\star}$ in (1) is replaced by $x_{t}^{\star}$ ( $x_{t}^{\star} = \arg \min_{x} f_{t}(x)$ ) that is also allowed to change its actions over time. However, in this paper, we focus our entire attention on the static version. A special case of COCO is the online constraint satisfaction (OCS) problem that does not involve any cost function, i.e., $f_{t} = 0$ , $\forall t$ , and the only object of interest is minimizing the CCV. The algorithm with state of the art guarantee for COCO [Sinha and Vaze, 2024] was shown to have a CCV of $O(\sqrt{T}\log T)$ for the OCS.
+
+# 1.2 Convex Body Chasing Problem
+
+A well-studied problem related to COCO is the nested convex body chasing (NCBC) problem [Bansal et al., 2018, Argue et al., 2019, Bubeck et al., 2020], where at each round $t$, a convex set $\chi_t \subseteq \chi$ is revealed such that $\chi_t \subseteq \chi_{t-1}$, and $\chi_0 = \chi \subseteq \mathbb{R}^d$ is a convex, compact, and bounded set. The objective is to choose an action $x_t \in \chi_t$ so as to minimize the total movement cost $C = \sum_{t=1}^{T} ||x_t - x_{t-1}||$, where $x_0 \in \chi$ is some fixed action. The best known algorithms for NCBC [Bansal et al., 2018, Argue et al., 2019, Bubeck et al., 2020] choose $x_t$ to be the centroid or Steiner point of $\chi_t$, essentially well inside the newly revealed convex set, in order to reduce the future movement cost. With COCO, such an approach does not appear useful because of the presence of the cost functions $f_t$, whose minima could lie towards the boundary of the convex sets $\chi_t$.
+
+# 1.3 Limitations of Prior Work
+
+We explicitly show in Lemma 6 that the best known algorithm [Sinha and Vaze, 2024] (in terms of regret and up to log factors for CCV) for solving COCO suffers a CCV of $\Omega (\sqrt{T}\log T)$ even for 'simple' problem instances where $f_{t} = f$ and $g_{t} = g$ for all $t$ and $d = 1$ dimension, for which ideally the CCV should be $O(1)$ . The same is true for most other algorithms, where the main reason for their large CCV for simple instances is that all these algorithms treat minimizing the CCV as
+
+a regret minimization problem for functions $g_{t}$ . What they fail to exploit is the geometry of the underlying nested convex sets $S_{t}$ 's that control the CCV.
+
+# 1.4 Main open question
+
+In comparison to the above discussed upper bounds, the best known simultaneous lower bound [Sinha and Vaze, 2024] for COCO is $\mathcal{R}_{[1:T]} = \Omega(\sqrt{d})$ and $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{d})$, where $d$ is the dimension of the action space $\mathcal{X}$. Without constraints, i.e., $g_t \equiv 0$ for all $t$, the lower bound is $\mathcal{R}_{[1:T]} = \Omega(\sqrt{T})$ [Hazan, 2012]. Thus, there is a fundamental gap between the lower and upper bounds for the CCV, and the main open question for COCO is: Is it possible to simultaneously achieve $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $CCV_{[1:T]} = o(\sqrt{T})$ or $CCV_{[1:T]} = O(1)$? Even though we do not fully resolve this question, we make meaningful progress by proposing an algorithm that exploits the geometry of the nested sets $S_t$; we show that it simultaneously achieves $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = O(1)$ in certain cases, and in the general case we give a bound on the CCV that depends on the shape of the convex sets $S_t$ while still achieving $\mathcal{R}_{[1:T]} = O(\sqrt{T})$. In particular, the contributions of this paper are as follows.
+
+# 1.5 Our Contributions
+
+In this paper, we propose an algorithm (Algorithm 2) that tries to exploit the geometry of the nested convex sets $S_{t}$ 's. In particular, Algorithm 2 at time $t$ , first takes an OGD step from the previous action $x_{t-1}$ with respect to the most recently revealed loss function $f_{t-1}$ with appropriate step-size to reach $y_{t-1}$ , and then projects $y_{t-1}$ onto the most recently revealed set $S_{t-1}$ to get $x_{t}$ , the action to be played at time $t$ . Let $F_{t}$ be the "projection" hyperplane passing through $x_{t}$ that is perpendicular to $x_{t} - y_{t-1}$ . For Algorithm 2, we derive the following guarantees.
+
+- The regret of the Algorithm 2 is $O(\sqrt{T})$ .
+- The CCV for the Algorithm 2 takes the following form
+
+- When sets $S_{t}$ 's are structured, e.g. are spheres, or axis parallel cuboids/regular polygons, CCV is $O(1)$ .
+- For the special case of $d = 2$ , when projection hyperplanes $F_{t}$ 's progressively make increasing angles with respect to the first projection hyperplane $F_{1}$ , the CCV is $O(1)$ .
+- For general $S_{t}$ 's, the CCV is upper bounded by a quantity $\mathcal{V}$ that is a function of the distance between the consecutive sets $S_{t}$ and $S_{t+1}$ for all $t$ , the shape of $S_{t}$ 's, dimension $d$ and the diameter $D$ . Since $\mathcal{V}$ depends on the shape of $S_{t}$ 's, there is no universal bound on $\mathcal{V}$ , and the derived bound is instance dependent.
+
+- As pointed out above, for general $S_{t}$ 's, there is no universal bound on the CCV of Algorithm 2. Thus, we propose an algorithm Switch that combines Algorithm 2 and the algorithm from [Sinha and Vaze, 2024] to provide a regret bound of $O(\sqrt{T})$ and a CCV that is minimum of $\mathcal{V}$ and $O(\sqrt{T}\log T)$ . Thus, Switch provides a best of two worlds CCV guarantee, which is small if the sets $S_{t}$ 's are 'nice', while in the worst case it is at most $O(\sqrt{T}\log T)$ .
+- For the OCS problem, where $f_{t} = 0$ , $\forall t$ , we show that the CCV of Algorithm 2 is $O(1)$ compared to the CCV of $O(\sqrt{T}\log T)$ [Sinha and Vaze, 2024].
+
+# 2 COCO Problem
+
+On round $t$, the online policy first chooses an admissible action $x_{t} \in \mathcal{X} \subset \mathbb{R}^{d}$, and then the adversary chooses a convex cost function $f_{t}: \mathcal{X} \to \mathbb{R}$ and a constraint of the form $g_{t}(x) \leq 0$, where $g_{t}: \mathcal{X} \to \mathbb{R}$ is a convex function. Once the action $x_{t}$ has been chosen, we let $\nabla f_{t}(x_{t})$ and the full function $g_{t}$ (or the set $\{x: g_{t}(x) \leq 0\}$) be revealed, as is standard in the literature. We now state the standard assumptions made in the literature while studying the COCO problem [Guo et al., 2022, Yi et al., 2021, Neely and Yu, 2017, Sinha and Vaze, 2024].
+
+Assumption 1 (Convexity) $\mathcal{X} \subset \mathbb{R}^d$ is the admissible set that is closed, convex and has a finite Euclidean diameter $D$ . The cost function $f_t: \mathcal{X} \mapsto \mathbb{R}$ and the constraint function $g_t: \mathcal{X} \mapsto \mathbb{R}$ are convex for all $t \geq 1$ .
+
+
+| Reference | Regret | CCV | Complexity per round |
+| --- | --- | --- | --- |
+| [Neely and Yu, 2017] | O(√T) | O(√T) | Conv-OPT, Slater's condition |
+| [Liakopoulos et al., 2019] | O(√T) | O(√T) | Conv-OPT, Slater's condition |
+| [Guo et al., 2022] | O(√T) | O(T^{3/4}) | Conv-OPT |
+| [Yi et al., 2023] | O(T^{max(β,1−β)}) | O(T^{1−β/2}) | Conv-OPT |
+| [Sinha and Vaze, 2024] | O(√T) | O(√T log T) | Projection |
+| This paper | O(√T) | O(min{V, √T log T}) | Projection |
+
+Table 1: Summary of the results on COCO for arbitrary time-varying convex constraints and convex cost functions. In the above table, $0 \leq \beta \leq 1$ is an adjustable parameter. Conv-OPT refers to solving a constrained convex optimization problem on each round. Projection refers to the Euclidean projection operation on the convex set $\mathcal{X}$ . The CCV bound for this paper is stated in terms of $\mathcal{V}$ which can be $O(1)$ or depends on the shape of convex sets $S_{t}$ 's.
+
+Assumption 2 (Lipschitzness) All cost functions $\{f_t\}_{t \geq 1}$ and the constraint functions $\{g_t\}_{t \geq 1}$ 's are $G$ -Lipschitz, i.e., for any $x, y \in \mathcal{X}$ , we have $|f_t(x) - f_t(y)| \leq G||x - y||$ , $|g_t(x) - g_t(y)| \leq G||x - y||$ , $\forall t \geq 1$ .
+
+Assumption 3 (Feasibility) With $\mathcal{G}_t = \{x\in \mathcal{X}:g_t(x)\leq 0\}$ , we assume that $\mathcal{X}^{\star} = \cap_{t = 1}^{T}\mathcal{G}_{t}\neq \emptyset$ . Any action $x^{\star}\in \mathcal{X}^{\star}$ is defined to be feasible.
+
+The feasibility assumption distinguishes the cost functions from the constraint functions and is common across all previous literature on COCO [Guo et al., 2022, Neely and Yu, 2017, Yu and Neely, 2016, Yuan and Lamperski, 2018, Yi et al., 2023, Liakopoulos et al., 2019, Sinha and Vaze, 2024].
+
+For any real number $z$, we define $(z)^{+} \equiv \max(0, z)$. Since $g_{t}$'s are revealed after the action $x_{t}$ is chosen, any online policy need not necessarily take feasible actions on each round. Thus, in addition to the static regret defined below
+
+$$
+\operatorname{Regret}_{[1:T]} \equiv \sup_{\{f_t\}_{t=1}^{T}} \sup_{x^{\star} \in \mathcal{X}^{\star}} \operatorname{Regret}_{[1:T]}(x^{\star}), \quad \operatorname{Regret}_{[1:T]}(x^{\star}) \equiv \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x^{\star}) \tag{2}
+$$
+
+an additional metric of interest is the total cumulative constraint violation (CCV), defined as $\mathrm{CCV}_{[1:T]} = \sum_{t=1}^{T}(g_t(x_t))^+$. Under the standard assumption (Assumption 3) that $\mathcal{X}^{\star}$ is not empty, the goal is to design an online policy that simultaneously achieves a small regret with respect to any $x^{\star} \in \mathcal{X}^{\star}$ and a small CCV.
+
+For simplicity, we define the set
+
+$$
+S_{t} = \cap_{\tau = 1}^{t} \mathcal{G}_{\tau}, \tag{3}
+$$
+
+where $\mathcal{G}_t$ is as defined in Assumption 3. All $\mathcal{G}_t$'s are convex, and consequently all $S_t$'s are convex and nested, i.e., $S_t \subseteq S_{t-1}$. Moreover, because of Assumption 3, each $S_t$ is non-empty, and in particular $\mathcal{X}^\star \subseteq S_t$ for all $t$. After the action $x_t$ has been chosen, the set $S_t$ controls the constraint violation, which can be used to write an upper bound on $\mathrm{CCV}_{[1:T]}$ as follows.
+
+Definition 4 For a convex set $\chi$ and a point $x \notin \chi$ , $\text{dist}(x, \chi) = \min_{y \in \chi} ||x - y||$ .
+
+With $G$ being the common Lipschitz constant of all $g_{t}$'s, the constraint violation at time $t$ satisfies
+
+$$
+\left(g_{t}(x_{t})\right)^{+} \leq G \operatorname{dist}\left(x_{t}, S_{t}\right), \quad \text{and} \quad \mathrm{CCV}_{[1:T]} \leq G \sum_{t=1}^{T} \operatorname{dist}\left(x_{t}, S_{t}\right). \tag{4}
+$$
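As a concrete one-dimensional illustration of (4) (with a hypothetical choice of $g$; the helper name is ours), the instantaneous violation $(g_t(x_t))^+$ is bounded by $G$ times the distance to the constraint set:

```python
import math

def dist_to_interval(x, lo, hi):
    """dist(x, S) of Definition 4 for the convex set S = [lo, hi] in R^1."""
    if x > hi:
        return x - hi
    if x < lo:
        return lo - x
    return 0.0

# Example: g(x) = x - 1 is 1-Lipschitz (G = 1), so S = {x : g(x) <= 0} = (-inf, 1].
G = 1.0
x_t = 1.5
violation = max(0.0, x_t - 1.0)                     # (g_t(x_t))^+ = 0.5
bound = G * dist_to_interval(x_t, -math.inf, 1.0)   # G * dist(x_t, S_t) = 0.5
```

Here the bound is tight because $g$ attains the Lipschitz constant; for a flatter $g$ the violation can be strictly smaller than $G \cdot \operatorname{dist}(x_t, S_t)$.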
+
+# 3 Algorithm from Sinha and Vaze [2024]
+
+The best known algorithm (Algorithm 1) for solving COCO [Sinha and Vaze, 2024] (best in terms of regret, and up to log factors for the CCV) was shown to have the following guarantee.
+
+# Algorithm 1 Online Algorithm from Sinha and Vaze [2024]
+
+1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$ , $G = a$ common Lipschitz constant, $T = \text{Horizon length}$ , $D = \text{Euclidean diameter of the admissible set } \mathcal{X}$ , $\mathcal{P}_{\mathcal{X}}(\cdot) = \text{Euclidean projection oracle on the set } \mathcal{X}$
+2: Let $\beta = (2GD)^{-1}$ , $V = 1$ , $\lambda = \frac{1}{2\sqrt{T}}$ , $\Phi(x) = \exp(\lambda x) - 1$ .
+3: Initialization: Set $x_{1} = 0$ , $\mathrm{CCV}(0) = 0$ .
+4: For $t = 1 : T$
+5: Play $x_{t}$ , observe $f_{t}, g_{t}$ , incur a cost of $f_{t}(x_{t})$ and constraint violation of $(g_{t}(x_{t}))^{+}$ .
+6: $\tilde{f}_t\gets \beta f_t,\tilde{g}_t\gets \beta \max (0,g_t).$
+7: $\mathbf{CCV}(t) = \mathbf{CCV}(t - 1) + \tilde{g}_t(x_t)$ .
+8: Compute $\nabla_t = \nabla \hat{f}_t(x_t)$ , where $\hat{f}_t(x) \coloneqq V\tilde{f}_t(x) + \Phi'(\mathbf{CCV}(t))\tilde{g}_t(x)$ , $t \geq 1$ .
+9: $x_{t + 1} = \mathcal{P}_{\mathcal{X}}(x_t - \eta_t\nabla_t)$ , where $\eta_t = \frac{\sqrt{2}D}{2\sqrt{\sum_{\tau = 1}^t||\nabla_\tau||_2^2}}$
+
+# 10: EndFor
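A minimal one-dimensional sketch of the update rule above, assuming scalar actions and an interval admissible set (the 1-d restriction and the helper names are our simplification, not part of the paper):

```python
import math

def algorithm1_1d(f_grads, g_vals, g_grads, G, D, T, x_dom, x1=0.0):
    """1-d sketch of Algorithm 1: OGD on the surrogate loss
    V*f~_t + Phi'(CCV(t))*g~_t with the adaptive step-size of line 9."""
    beta = 1.0 / (2.0 * G * D)                      # line 2
    lam = 1.0 / (2.0 * math.sqrt(T))
    V = 1.0
    lo, hi = x_dom
    x, ccv, grad_sq_sum = x1, 0.0, 0.0
    actions = []
    for t in range(T):
        actions.append(x)                           # line 5: play x_t
        ccv += beta * max(0.0, g_vals[t](x))        # lines 6-7: CCV(t)
        g_tilde_grad = beta * g_grads[t](x) if g_vals[t](x) > 0 else 0.0
        phi_prime = lam * math.exp(lam * ccv)       # Phi'(z) for Phi(z) = exp(lam*z) - 1
        grad = V * beta * f_grads[t](x) + phi_prime * g_tilde_grad  # line 8
        grad_sq_sum += grad * grad
        eta = math.sqrt(2.0) * D / (2.0 * math.sqrt(grad_sq_sum)) if grad_sq_sum else 0.0
        x = min(max(x - eta * grad, lo), hi)        # line 9: project onto X
    return actions
```

With, e.g., $f_t(x) = x$ and $g_t(x) = x - 0.5$ on $\mathcal{X} = [0, 1]$, the iterates stay admissible while the potential $\Phi'(\mathrm{CCV}(t))$ gradually upweights the constraint term.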
+
+Theorem 5 [Sinha and Vaze [2024]] Algorithm 1's Regret $_{[1:T]}$ = $O(\sqrt{T})$ and $CCV_{[1:T]} = O(\sqrt{T}\log T)$ when $f_t, g_t$ are convex.
+
+We next show that in fact the analysis of Sinha and Vaze [2024] is tight for the CCV even when $d = 1$ and $f_{t}(x) = f(x)$ and $g_{t}(x) = g(x)$ for all $t$ . With finite diameter $D$ and the fact that any $x^{\star} \in \mathcal{X}^{\star}$ belongs to all nested convex bodies $S_{t}$ 's, when $d = 1$ , one expects that the CCV for any algorithm in this case will be $O(D)$ . However, as we show next, Algorithm 1 does not effectively make use of geometric constraints imposed by nested convex bodies $S_{t}$ 's.
+
+Lemma 6 Even when $d = 1$ and $f_{t}(x) = f(x)$ and $g_{t}(x) = g(x)$ for all $t$ , for Algorithm 1, its $CCV_{[1:T]} = \Omega(\sqrt{T} \log T)$ .
+
+Proof: Consider $d = 1$, and let $\mathcal{X} = [1, a]$, $a > 2$. Moreover, let $f_t(x) = f(x)$ and $g_t(x) = g(x)$ for all $t$. Let $f(x) = cx^2$ for some (large) $c > 0$, and let $g(x)$ be such that $\mathcal{G} = \{x : g(x) \leq 0\} \subseteq [a/2, a]$ and $|\nabla g(x)| \leq 1$ for all $x$.
+
+Let $1 < x_{1} < a / 2$ . Note that $\mathrm{CCV}(t)$ (defined in Algorithm 1) is a non-decreasing function, and let $t^{\star}$ be the earliest time $t$ such that $\Phi^{\prime}(\mathrm{CCV}(t))\nabla g(x) < -c$ . For $f(x) = cx^{2}$ , $\nabla f(x) \geq c$ for all $x > 1$ . Thus, using Algorithm 1's definition, it follows that for all $t \leq t^{\star}$ , $x_{t} < a / 2$ , since the derivative of $f$ dominates the derivative of $\Phi^{\prime}(\mathrm{CCV}(t))g(x)$ until then.
+
+Since $\Phi(x) = \exp(\lambda x) - 1$ with $\lambda = \frac{1}{2\sqrt{T}}$, and by definition $|\nabla g(x)| \leq 1$ for all $x$, it follows that by time $t^{\star}$, $\mathrm{CCV}_{[1:t^{\star}]} = \Omega(\sqrt{T}\log T)$. Therefore, $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{T}\log T)$.
+
+Essentially, Algorithm 1 treats minimizing the CCV as a regret minimization problem for the function $g$, similar to the function $f$, and this leads to its CCV of $\Omega(\sqrt{T} \log T)$. For any given input instance with $d = 1$, an alternate algorithm that chooses its actions by online gradient descent (OGD) projected onto the most recently revealed feasible set $S_{t}$ achieves $O(\sqrt{T})$ regret (irrespective of the starting action $x_{1}$) and $O(D)$ CCV (since every $x^{\star} \in \mathcal{X}^{\star}$ belongs to $S_t$ for all $t$). We extend this intuition in the next section and present an algorithm that exploits the geometry of the nested convex sets $S_{t}$ for any $d$.
+
+# 4 New Algorithm for solving COCO
+
+In this section, we present a simple algorithm (Algorithm 2) for solving COCO. Algorithm 2 is essentially an online projected gradient descent (OGD) algorithm, which first takes an OGD step from the previous action $x_{t-1}$ with respect to the most recently revealed loss function $f_{t-1}$ with an appropriate step-size, projected onto $S_{t-2}$ to reach $y_{t-1}$, and then projects $y_{t-1}$ onto the most recently revealed set $S_{t-1}$ to get $x_t$, the action to be played at time $t$, where the sets $S_t$ are defined in (3).
+
+Remark 1 Step 6 of Algorithm 2 might appear unnecessary; however, it is useful for proving Theorem 12.
+
+Since Algorithm 2 is essentially an online projected gradient algorithm, similar to the classical result on OGD, next, we show that the regret of Algorithm 2 is $O(\sqrt{T})$ .
+
+# Algorithm 2 Online Algorithm for COCO
+
+1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$ , $G = a$ common Lipschitz constant, $d$ dimension of the admissible set $\mathcal{X}$ , step size $\eta_t = \frac{D}{G\sqrt{t}}$ . $D =$ Euclidean diameter of the admissible set $\mathcal{X}$ , $\mathcal{P}_{\mathcal{X}}(\cdot) =$ Euclidean projection on the set $\mathcal{X}$ ,
+2: Initialization: Set $x_{1} \in \mathcal{X}$ arbitrarily, $\mathrm{CCV}(0) = 0$ .
+3: For $t = 1:T$
+4: Play $x_{t}$ , observe $f_{t}, g_{t}$ , incur a cost of $f_{t}(x_{t})$ and constraint violation of $(g_{t}(x_{t}))^{+}$ .
+5: Set $S_{t}$ as defined in (3)
+6: $y_{t} = \mathcal{P}_{S_{t - 1}}(x_{t} - \eta_{t}\nabla f_{t}(x_{t}))$
+7: $x_{t + 1} = \mathcal{P}_{S_t}(y_t)$
+8: EndFor
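A one-dimensional sketch of these steps, assuming each revealed set $\mathcal{G}_t$ is an interval so that every $S_t$ is an interval as well (the interval restriction and the helper names are our simplification for illustration):

```python
import math

def project_interval(y, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(y, lo), hi)

def algorithm2_1d(f_grads, g_intervals, D, G, x1):
    """1-d sketch of Algorithm 2.  g_intervals[t] = (lo, hi) is the revealed
    set G_{t+1}; f_grads[t](x) returns the gradient of f_{t+1} at x."""
    T = len(g_intervals)
    prev_S = (-math.inf, math.inf)      # S_0 = X (left unbounded for simplicity)
    x = x1
    actions = []
    for t in range(1, T + 1):
        actions.append(x)               # line 4: play x_t
        lo, hi = g_intervals[t - 1]
        cur_S = (max(prev_S[0], lo), min(prev_S[1], hi))  # line 5: S_t = S_{t-1} ∩ G_t
        eta = D / (G * math.sqrt(t))
        y = project_interval(x - eta * f_grads[t - 1](x), *prev_S)  # line 6
        x = project_interval(y, *cur_S)                              # line 7
        prev_S = cur_S
    return actions
```

On a nested sequence of shrinking intervals with $f_t(x) = x$, the iterates snap onto the feasible set after a single round and incur no further violation, in line with the $O(D)$ CCV intuition for $d = 1$.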
+
+Lemma 7 The Regret $_{[1:T]}$ for Algorithm 2 is $O(\sqrt{T})$ .
+
+The extension of Lemma 7 to the case when the $f_{t}$'s are strongly convex, which yields $\mathrm{Regret}_{[1:T]} = O(\log T)$ for Algorithm 2, follows standard arguments [Hazan, 2012] and is omitted.
+
+The real challenge is to bound the total CCV for Algorithm 2. Let $x_{t}$ be the action played by Algorithm 2. Then by definition, $x_{t} \in S_{t-1}$. Moreover, from (4), the constraint violation at time $t$ is at most $G\,\mathrm{dist}(x_{t}, S_{t})$. The next action $x_{t+1}$ chosen by Algorithm 2 belongs to $S_{t}$; however, it is obtained by first taking an OGD step from $x_{t}$ to reach $y_{t}$ and then projecting $y_{t}$ onto $S_{t}$. Since the $f_{t}$'s are arbitrary, the OGD step could be in any direction, and thus there is no direct relationship between $x_{t+1}$ and $x_{t}$. Informally, $(x_{1}, x_{2}, \ldots, x_{T})$ is not a connected curve with any useful property. Thus, we take recourse to upper bounding the CCV via the total movement cost $M_T$ (defined below) between nested convex sets under projections.
+
+The total constraint violation for Algorithm 2 is
+
+$$
+\operatorname{CCV}_{[1:t]} \leq G \sum_{\tau=1}^{t} \operatorname{dist}\left(x_{\tau}, S_{\tau}\right) \stackrel{(a)}{\leq} G \sum_{\tau=1}^{t} \left\| x_{\tau} - b_{\tau} \right\| \stackrel{(b)}{=} G M_{t}, \tag{5}
+$$
+
+where in (a) $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$, i.e., $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$, and in (b) $M_{t} = \sum_{\tau=1}^{t} ||x_{\tau} - b_{\tau}||$ is defined to be the total movement cost on the instance $S_{1}, \ldots, S_{t}$. The object of interest is $M_T$.
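In one dimension, with interval sets standing in for a general projection oracle, the movement cost $M_T$ from (5) can be accumulated as follows (a hypothetical helper for illustration; by (5), $\mathrm{CCV}_{[1:T]} \leq G \cdot M_T$):

```python
def movement_cost_1d(actions, S_intervals):
    """M_T = sum_t ||x_t - b_t|| with b_t = P_{S_t}(x_t), for interval sets
    S_t = [lo, hi] in R^1."""
    total = 0.0
    for x, (lo, hi) in zip(actions, S_intervals):
        b = min(max(x, lo), hi)     # projection of x_t onto S_t
        total += abs(x - b)
    return total
```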
+
+# 5 Bounding the Total Movement Cost $M_T$ for Algorithm 2
+
+We start by considering structured problem instances where CCV of Algorithm 2 is $O(1)$ , i.e., independent of $T$ .
+
+Lemma 8 If all nested convex bodies $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ are spheres then $M_T \leq d^{3/2}D = O(1)$ .
+
+Lemma 9 If all nested convex bodies $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ are cuboids/regular polygons that are axis parallel to each other, then $M_T \leq d^{3/2} D = O(1)$ .
+
+Interestingly, the input instance where the $S_{t}$'s are axis-parallel cuboids has been used to derive the only known lower bound for COCO, of $\mathrm{Regret}_{[1:T]} = \Omega(\sqrt{d})$ and $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{d})$ [Sinha and Vaze, 2024].
+
+Remark 2 Lemmas 8 and 9 are the first results of their kind in COCO: even for nicely structured instances, the previous best known guarantees are $CCV_{[1:T]} = O(\sqrt{T}\log T)$ [Sinha and Vaze, 2024] and $CCV_{[1:T]} = O(\sqrt{T})$ [Ferreira and Soares, 2025].
+
+Next, we show that a similar $O(1)$ CCV guarantee can be obtained for Algorithm 2 with less structured input, although only when $d = 2$.
+
+# 5.1 Special case of $d = 2$
+
+In this section, we show that if $d = 2$ (all convex sets $S_{t}$ lie in a plane) and the projections satisfy a monotonicity property depending on the problem instance, then we can bound the total CCV for Algorithm 2 independently of the time horizon $T$, consequently obtaining an $O(1)$ CCV.
+
+
+Figure 1: Definition of $F_{t}$ 's.
+
+
+Figure 2: Figure representing the cone $C_{w_t}(c_t)$ that contains the convex hull of $m_t$ and $S_t$ with unit vector $w_t$ .
+
+Definition 10 Recall from the definition of Algorithm 2, $y_{t} = \mathcal{P}_{S_{t - 1}}(x_{t} - \eta_{t}\nabla f_{t}(x_{t}))$ and $x_{t + 1} = \mathcal{P}_{S_t}(y_t)$ . Let the hyperplane perpendicular to line segment $(y_{t},x_{t + 1})$ passing through $x_{t + 1}$ be $F_{t}$ . Without loss of generality, we let $y_{t} \notin S_{t}$ , since otherwise the projection is trivial. Essentially $F_{t}$ is the projection hyperplane at time $t$ . Let $\mathcal{H}_t^+$ denote the positive half plane corresponding to $F_{t}$ , i.e., $\mathcal{H}_t^+ = \{z: z^T(y_t - x_{t + 1}) \geq 0\}$ . Refer to Fig. 1. Let the angle between $F_{1}$ and $F_{t}$ be $\theta_t$ .
+
+Definition 11 The instance $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ is defined to be monotonic if $\theta_{2} \leq \theta_{3} \leq \dots \leq \theta_{T}$ .
+
+Theorem 12 For $d = 2$ when the instance is monotonic, $CCV_{[1:T]} = O(GD)$ for Algorithm 2.
+
+Theorem 12 shows that CCV of Algorithm 2 is independent of $T$ as long as the instance is monotonic when $d = 2$ . It is worth noting that even under the monotonicity assumption it is non-trivial to upper bound the CCV since the successive angles made by $F_{t}$ 's with $F_{1}$ can increase arbitrarily slowly, making it difficult to control the total CCV. The proof is derived by using basic convex geometry results from Manselli and Pucci [1991] in combination with exploiting the definition of Algorithm 2 and the monotonicity condition.
+
+Finally, in the next subsection, we upper bound $M_T$ , and consequently the CCV for Algorithm 2, when the input has no structure other than $S_t$ 's being nested.
+
+# 5.2 General Guarantee on CCV
+
+In this subsection, we give a general bound on $M_T$ of Algorithm 2 for any sequence of nested convex bodies which depends on the geometry of the nested convex bodies (instance dependent). To state the result we need the following preliminaries.
+
+Following (5), $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$ with $x_{t} \in \partial S_{t-1}$, where $\partial S$ denotes the boundary of a convex set $S$. Without loss of generality, $x_{t} \notin S_{t}$, since otherwise the distance $||x_{t} - b_{t}|| = 0$. Let $m_{t}$ be the midpoint of $x_{t}$ and $b_{t}$, i.e., $m_{t} = \frac{x_{t} + b_{t}}{2}$.
+
+Definition 13 Let the convex hull of $m_t \cup S_t$ be $\mathcal{C}_t$ . Let $w_t$ be a unit vector such that there exists $c_t > 0$ such that the cone
+
+$$
+C_{w_{t}}(c_{t}) = \left\{ z \in \mathbb{R}^{d} : -w_{t}^{T} \frac{(z - m_{t})}{||z - m_{t}||} \geq c_{t} \right\}
+$$
+
+contains $\mathcal{C}_t$. Since $S_{t}$ is convex, such $w_{t}$ and $c_{t} > 0$ exist. For example, $w_{t} = (b_{t} - x_{t})/||b_{t} - x_{t}||$ is one such choice, for which $c_{t} > 0$ since $m_t\notin S_t$. See Fig. 2 for a pictorial representation.
+
+Let $c_{w_t,t}^\star$ be the largest $c_t$ for which the cone $C_{w_t}(c_t)$ contains $\mathcal{C}_t$, let $c_t^\star = \max_{w_t} c_{w_t,t}^\star$, and let $w_t^\star$ be the corresponding maximizing unit vector. Moreover, let $c^\star = \min_{t} c_t^\star$, where by definition $c^\star < 1$.
+
+Essentially, $2\cos^{-1}(c_t^\star)$ is the angle width of $\mathcal{C}_t$ with respect to $w_{t}^{\star}$ , i.e. each element of $\mathcal{C}_t$ makes an angle of at most $\cos^{-1}(c_t^\star)$ with $w_{t}^{\star}$ .
+
+Remark 3 Note that $c_t^\star$ is only a function of the distance $||x_t - b_t||$ and the shape of the $S_t$'s, in particular, the maximum width of $S_t$ along the directions perpendicular to the vector $x_t - b_t$ for all $t$, which can be at most the diameter $D$. $c_{t}^{\star}$ decreases (increasing the "width" of the cone $C_{w_t^\star}(c_t^\star)$) as $\| x_{t} - b_{t}\|$ decreases, but a small $\| x_{t} - b_{t}\|$ also implies a small violation at time $t$ from (5).
+
+Remark 4 Is $c^{\star}$ instance dependent or algorithm dependent? For notational simplicity, we have defined $c^{\star}$ using the $x_{t}$'s (a quantity specific to Algorithm 2) and their projections $b_{t}$ onto $S_{t}$. However, since $x_{t}$ and $x_{t-1}$ have no useful relation between them, $x_{t}$ can be an arbitrary point on the boundary of $S_{t-1}$; hence $c^{\star}$ is in effect defined with respect to an arbitrary $x_{t} \in \partial S_{t-1}$, making it an instance-dependent quantity.
+
+Lemma 14 $M_T$ for Algorithm 2 is at most $\frac{2V_d(d - 1)}{V_{d - 1}}\left(\frac{1}{c^\star}\right)^d D$ , where $V_{d}$ is the $(d - 1)$ -dimensional Lebesgue measure of the unit sphere in $d$ dimensions.
+
+Proof Idea Projecting $x_{t} \in \partial S_{t-1}$ onto $S_{t}$ to get $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$, the width of $S_{t}$ is at most the width of $S_{t-1}$ minus $||x_{t} - b_{t}||$, but only along the direction $b_{t} - x_{t}$. Since the shape of $S_{t}$ is arbitrary, the diameter of $S_{t}$ need not be smaller than the diameter of $S_{t-1}$ along any pre-specified direction, which was the main idea used to derive Lemma 8. Thus, to prove Lemma 14, we relate the distance $||x_{t} - b_{t}||$ to the decrease in the mean width of a convex body, defined as the expected width of the body along a uniformly random direction (the formal definition is provided in Definition 34).
+
+Note that $V_d / V_{d-1} = O(1 / \sqrt{d})$. Thus, combining Lemma 7 and Lemma 14, we get the following main result of the paper for Algorithm 2.
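+The $V_d/V_{d-1} = O(1/\sqrt{d})$ scaling can be sanity-checked numerically using the standard formula $V_d = 2\pi^{d/2}/\Gamma(d/2)$ (the dimensions tested below are arbitrary):
+
+```python
+from math import pi, gamma, sqrt
+
+def sphere_surface(d):
+    """V_d: the (d-1)-dimensional Lebesgue measure (surface area)
+    of the unit sphere in R^d, i.e. V_d = 2 * pi^(d/2) / Gamma(d/2)."""
+    return 2 * pi ** (d / 2) / gamma(d / 2)
+
+# The ratio V_d / V_{d-1} decays like 1/sqrt(d):
+ratios = {d: sphere_surface(d) / sphere_surface(d - 1) for d in (2, 10, 100)}
+for d, r in ratios.items():
+    print(d, r, r * sqrt(d))  # r * sqrt(d) stays bounded as d grows
+```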
+
+Theorem 15 Algorithm 2 has Regret $_{[1:T]} = O(\sqrt{T})$ , and $CCV_{[1:T]} = O\left(\sqrt{d}\left(\frac{1}{c^{\star}}\right)^{d}D\right)$ .
+
+Theorem 15 is an instance dependent result for the CCV, compared to the prior universal guarantees of $\tilde{O}(\sqrt{T})$ on the CCV. In particular, it exploits the geometric structure of the nested convex sets $S_{t}$ and derives an upper bound on the CCV that depends only on the 'shape' of the $S_{t}$'s via $c^{\star}$. Moreover, $c^{\star}$ depends only on the dimension $d$ (and not on $T$) as long as the minimum distance between consecutive constraint sets is not a function of $T$, since the diameter $D$ is constant; in contrast, all existing algorithms suffer a CCV of $\Omega(\sqrt{T})$ even in this case.
+
+Remark 5 One pertinent question at this time is: what is $c^{\star}$, and why should the CCV for a problem instance necessarily depend on it? $c^{\star}$ corresponds to the minimum angle width of the problem instance, i.e., the angular width of the 'smallest' cone containing the newly revealed constraint sets. The angle width essentially depends on the width of the convex sets in the directions perpendicular to the direction of projection, and it controls the total CCV: since successive convex constraint sets are nested (each lies inside the previous one), the smaller the angle width, the less room an algorithm has to violate the constraints in future steps. The angle width also depends on the distance between $x_{t}$ and $S_{t}$, and is potentially large when $d(x_{t},S_{t})$ is small and the diameter along the directions perpendicular to $x_{t} - b_{t}$ is large.
+
+$c^{\star}$ is a fundamental and natural object that inherently captures the geometric difficulty of bounding the CCV. The core contribution of this paper is to formalize this via the novel idea of connecting the reduction in the mean width of the convex constraint sets to the total constraint violation, which entails non-trivial convex analysis. If $c^{\star}$ is in fact small for a problem instance (so that the resulting CCV bound is $\Omega(\sqrt{T})$), then that instance does not have enough geometric structure to exploit via projections. To cover such instances, we next propose the Switch algorithm, which caps the CCV at $\tilde{O}(\sqrt{T})$.
+
+# 6 Algorithm Switch
+
+Theorem 15 provides an instance dependent bound on the CCV, as a function of $c^{\star}$. If $c^{\star}$ is small, the CCV can be larger than $O(\sqrt{T}\log T)$, the CCV guarantee of Algorithm 1 [Sinha and Vaze, 2024]. Thus, we next marry the two algorithms, Algorithm 1 and Algorithm 2, into Algorithm 3 to provide a best-of-both-worlds result, as follows.
+
+Theorem 16 Switch (Algorithm 3) has regret $\text{Regret}_{[1:T]} = O(\sqrt{T})$, while $CCV_{[1:T]} = \min \left\{O\left(\sqrt{d}\left(\frac{1}{c^{\star}}\right)^d D\right), O(\sqrt{T}\log T)\right\}$.
+
+# Algorithm 3 Switch
+
+1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$; $G$, a common Lipschitz constant; $d$, the dimension of the admissible set $\mathcal{X}$; $D$, the Euclidean diameter of the admissible set $\mathcal{X}$; $\mathcal{P}_{\mathcal{X}}(\cdot)$, the Euclidean projection operator onto the set $\mathcal{X}$.
+2: Initialization: Set $x_{1} \in \mathcal{X}$ arbitrarily, $\mathrm{CCV}(0) = 0$ .
+3: For $t = 1:T$
+4: If $\mathbf{CCV}(t - 1) \leq \sqrt{T} \log T$
+5: Follow Algorithm 2 and update $\mathbf{CCV}(t) = \mathbf{CCV}(t - 1) + \max \{g_t(x_t),0\}$ .
+6: Else
+7: Follow Algorithm 1 with resetting $\mathbf{CCV}(t - 1) = 0$
+8: EndIf
+9: EndFor
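+The control flow of Algorithm 3 can be sketched as follows; `alg1_step` and `alg2_step` are hypothetical callables standing in for one decision of Algorithm 1 and Algorithm 2, and `g` evaluates the constraint function $g_t$ at the chosen action:
+
+```python
+import math
+
+def switch(T, alg2_step, alg1_step, g):
+    """Sketch of Algorithm 3 (Switch): follow Algorithm 2 while its CCV is at
+    most sqrt(T) * log(T); after the threshold is crossed, reset the CCV and
+    follow Algorithm 1 for the remaining rounds."""
+    threshold = math.sqrt(T) * math.log(T)
+    ccv, switched, actions = 0.0, False, []
+    for t in range(1, T + 1):
+        if not switched and ccv <= threshold:
+            x_t = alg2_step(t)
+            ccv += max(g(t, x_t), 0.0)  # CCV(t) = CCV(t-1) + max{g_t(x_t), 0}
+        else:
+            if not switched:
+                switched, ccv = True, 0.0  # reset CCV when switching
+            x_t = alg1_step(t)
+        actions.append(x_t)
+    return actions, ccv, switched
+
+# Toy run: every action violates the constraint by 1, forcing a switch.
+actions, ccv, switched = switch(100, lambda t: "A2", lambda t: "A1", lambda t, x: 1.0)
+```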
+
+Algorithm Switch should be understood as a best-of-two-worlds algorithm. In one world, the convex sets $S_{t}$ are nice and Algorithm 2 has a CCV that is independent of $T$ or $o(\sqrt{T})$. In the other, the CCV of Algorithm 2 is large on its own, and the overall CCV is controlled by discontinuing Algorithm 2 once its CCV reaches $\sqrt{T}\log T$ and switching to Algorithm 1 thereafter, which has a universal guarantee of $O(\sqrt{T}\log T)$ on its CCV.
+
+# 7 OCS Problem
+
+In [Sinha and Vaze, 2024], a special case of COCO, called the OCS problem, was introduced, where $f_{t} \equiv 0$ for all $t$; with OCS, constraint satisfaction is the only objective. In [Sinha and Vaze, 2024], Algorithm 1 was shown to have a CCV of $O(\sqrt{T}\log T)$. Next, we show that Algorithm 2 has a CCV of $O(1)$ for OCS, a remarkable improvement.
+
+Theorem 17 For solving OCS, Algorithm 2 has $CCV_{[1:T]} = O\left(d^{d/2}D\right) = O(1)$ .
+
+As discussed in [Sinha and Vaze, 2024], there are important applications of OCS, and finding tight bounds on its CCV is therefore important. Theorem 17 achieves this by showing that a CCV of $O(1)$ can be attained, where the constant depends only on the dimension of the action space and the diameter. This is a fundamental improvement over the CCV bound of $O(\sqrt{T}\log T)$ from [Sinha and Vaze, 2024]. Theorem 17 is derived using the connection between the curve obtained by successive projections onto nested convex sets and self-expanded curves (Definition 20), together with a classical result on self-expanded curves from [Manselli and Pucci, 1991].
+
+# 8 Experimental Results
+
+In this section, we compare the performance of Algorithm 1 and Algorithm 2 experimentally. We start by simulating both algorithms on the input that was used to prove Lemma 6. Fig. 3 numerically verifies the claim of Lemma 6 that the CCV of Algorithm 1 is $\Omega (\sqrt{T}\log T)$, while the CCV of Algorithm 2 remains constant.
+
+# 8.1 Synthetic Data
+
+Next, we consider a more natural data setup to compare the performance of Algorithm 1 and Algorithm 2. With $d = 10$, we let $f_{t}(x) = ||x - a_{t}||_{1}$, where $a_{t}$ is a $d$-dimensional vector that is coordinate-wise uniformly distributed on $[-1,1]$ and independent across $t$. Similarly, we consider $g_{t}(x) = \max(0, w_{t}^{T} x - 0.1)$, where $w_{t}$ is a $d$-dimensional vector that is also coordinate-wise uniformly distributed on $[-1,1]$ and independent across $t$. This choice ensures that $x = 0$ is feasible for all constraints, i.e., Assumption 3 is satisfied. In Figs. 4a and 4b, we plot the regret and CCV, respectively, for Algorithm 1 and Algorithm 2, and see that Algorithm 2 outperforms Algorithm 1 in both regret and CCV.
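+The synthetic instance described above can be generated as follows (a sketch of the data model only, with an assumed seed; the algorithms being compared are not reproduced here):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)  # assumed seed, for reproducibility
+d = 10
+
+def make_instance():
+    """One round of the synthetic setup: f_t(x) = ||x - a_t||_1 and
+    g_t(x) = max(0, w_t^T x - 0.1), with a_t, w_t ~ Uniform[-1, 1]^d."""
+    a_t = rng.uniform(-1.0, 1.0, size=d)
+    w_t = rng.uniform(-1.0, 1.0, size=d)
+    f_t = lambda x: float(np.abs(x - a_t).sum())
+    g_t = lambda x: max(0.0, float(w_t @ x) - 0.1)
+    return f_t, g_t
+
+f1, g1 = make_instance()
+assert g1(np.zeros(d)) == 0.0  # x = 0 is feasible for every constraint
+```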
+
+
+Figure 3: Regret and CCV comparison for input described in Lemma 6.
+
+
+Figure 4: (a) Regret comparison of Algorithm 1 and Algorithm 2. (b) CCV comparison of Algorithm 1 and Algorithm 2.
+
+# 9 Conclusions
+
+One fundamental open question for COCO is whether it is possible to simultaneously achieve $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = o(\sqrt{T})$, or even $\mathrm{CCV}_{[1:T]} = O(1)$. In this paper, we have made substantial progress towards answering this question by proposing an algorithm that exploits the geometric properties of the nested convex sets $S_{t}$ that effectively control the CCV. The state-of-the-art algorithms [Sinha and Vaze, 2024, Ferreira and Soares, 2025] incur a CCV of $\tilde{\Omega}(\sqrt{T})$ even for very simple instances, as shown in Lemma 6, so conceptually different algorithms are needed to achieve a CCV of $o(\sqrt{T})$. We propose one such algorithm and show that when the nested convex constraint sets are well structured, a CCV of $O(1)$ is achievable without losing the $O(\sqrt{T})$ regret guarantee. We also derived a bound on the CCV for general problem instances, as a function of the shape of the nested convex constraint sets, the distance between them, and the diameter.
+
+In the absence of good lower bounds, the open question remains unresolved in general; however, this paper significantly improves the conceptual understanding of the COCO problem by demonstrating that good algorithms need to exploit the geometry of the nested convex constraint sets.
+
+# References
+
+Jianjun Yuan and Andrew Lamperski. Online convex optimization for cumulative constraints. Advances in Neural Information Processing Systems, 31, 2018.
+
+Rodolphe Jenatton, Jim Huang, and Cédric Archambeau. Adaptive algorithms for online convex optimization with long-term constraints. In International Conference on Machine Learning, pages 402-411. PMLR, 2016.
+Mehrdad Mahdavi, Rong Jin, and Tianbao Yang. Trading regret for efficiency: online convex optimization with long term constraints. The Journal of Machine Learning Research, 13(1):2503-2528, 2012.
+
+Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Johansson. Regret and cumulative constraint violation analysis for online convex optimization with long term constraints. In International Conference on Machine Learning, pages 11998-12008. PMLR, 2021.
+
+Hao Yu, Michael Neely, and Xiaohan Wei. Online convex optimization with stochastic constraints. Advances in Neural Information Processing Systems, 30, 2017.
+
+Wen Sun, Debadeepta Dey, and Ashish Kapoor. Safety-aware algorithms for adversarial contextual bandit. In International Conference on Machine Learning, pages 3280-3288. PMLR, 2017.
+
+Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Yiguang Hong, Tianyou Chai, and Karl H Johansson. Distributed online convex optimization with adversarial constraints: Reduced cumulative constraint violation bounds under Slater's condition. arXiv preprint arXiv:2306.00149, 2023.
+
+Michael J Neely and Hao Yu. Online convex optimization with time-varying constraints. arXiv preprint arXiv:1702.04783, 2017.
+
+Nikolaos Liakopoulos, Apostolos Destounis, Georgios Paschos, Thrasyvoulos Spyropoulos, and Panayotis Mertikopoulos. Cautious regret minimization: Online optimization with long-term budget constraints. In International Conference on Machine Learning, pages 3944-3952. PMLR, 2019.
+
+Michael J Neely. Stochastic network optimization with application to communication and queueing systems. Synthesis Lectures on Communication Networks, 3(1):1-211, 2010.
+
+Hengquan Guo, Xin Liu, Honghao Wei, and Lei Ying. Online convex optimization with hard constraints: Towards the best of two worlds and beyond. Advances in Neural Information Processing Systems, 35:36426-36439, 2022.
+
+Abhishek Sinha and Rahul Vaze. Optimal algorithms for online convex optimization with adversarial constraints. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=TxffvJMnBy.
+
+Ricardo N. Ferreira and Cláudia Soares. Optimal bounds for adversarial constrained online convex optimization, 2025. URL https://arxiv.org/abs/2503.13366.
+
+Jordan Lekeufack and Michael I. Jordan. An optimistic algorithm for online convex optimization with adversarial constraints, 2025. URL https://arxiv.org/abs/2412.08060.
+
+Tianyi Chen and Georgios B Giannakis. Bandit convex optimization for scalable and dynamic IoT management. IEEE Internet of Things Journal, 6(1):1276-1286, 2018.
+
+Xuanyu Cao and KJ Ray Liu. Online convex optimization with time-varying constraints and bandit feedback. IEEE Transactions on Automatic Control, 64(7):2665-2680, 2018.
+
+Rahul Vaze. On dynamic regret and constraint violations in constrained online convex optimization. In 2022 20th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt), pages 9-16, 2022. doi: 10.23919/WiOpt56218.2022.9930613.
+
+Qingsong Liu, Wenfei Wu, Longbo Huang, and Zhixuan Fang. Simultaneously achieving sublinear regret and constraint violations for online convex optimization with time-varying constraints. ACM SIGMETRICS Performance Evaluation Review, 49(3):4-5, 2022.
+
+Nikhil Bansal, Martin Böhm, Marek Eliás, Grigorios Koumoutsos, and Seeun William Umboh. Nested convex bodies are chaseable. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1253-1260. SIAM, 2018.
+
+C.J. Argue, Sébastien Bubeck, Michael B Cohen, Anupam Gupta, and Yin Tat Lee. A nearly-linear bound for chasing nested convex bodies. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 117-122. SIAM, 2019.
+
+Sébastien Bubeck, Bo'az Klartag, Yin Tat Lee, Yuanzhi Li, and Mark Sellke. Chasing nested convex bodies nearly optimally. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1496-1508. SIAM, 2020.
+
+Elad Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, page 287, 2012.
+
+Hao Yu and Michael J Neely. A low complexity algorithm with $O(\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. arXiv preprint arXiv:1604.02218, 2016.
+
+Paolo Manselli and Carlo Pucci. Maximum length of steepest descent curves for quasi-convex functions. Geometriae Dedicata, 38(2):211-227, 1991.
+
+Harold Gordon Eggleston. Convexity, 1966.
+
+# NeurIPS Paper Checklist
+
+# 1. Claims
+
+Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
+
+Answer: [Yes]
+
+Justification: We provide complete theorem statements and proofs of all claims.
+
+Guidelines:
+
+- The answer NA means that the abstract and introduction do not include the claims made in the paper.
+- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
+- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
+- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
+
+# 2. Limitations
+
+Question: Does the paper discuss the limitations of the work performed by the authors?
+
+Answer: [Yes]
+
+Justification: Our result crucially makes use of the feasibility assumption (Assumption 3), which is universally used in the COCO literature. In the absence of good lower bounds, the question considered in the paper remains open in full generality.
+
+Guidelines:
+
+- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
+- The authors are encouraged to create a separate "Limitations" section in their paper.
+- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
+- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
+- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
+- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
+- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
+- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
+
+# 3. Theory assumptions and proofs
+
+Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
+
+Answer: [Yes]
+
+Justification: We clearly state the assumptions under which our theoretical results hold.
+
+Guidelines:
+
+- The answer NA means that the paper does not include theoretical results.
+- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
+- All assumptions should be clearly stated or referenced in the statement of any theorems.
+- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
+- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
+- Theorems and Lemmas that the proof relies upon should be properly referenced.
+
+# 4. Experimental result reproducibility
+
+Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
+(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+# 5. Open access to data and code
+
+Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that paper does not include experiments requiring code.
+- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+# 6. Experimental setting/details
+
+Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+- The full details can be provided either with the code, in appendix, or as supplemental material.
+
+# 7. Experiment statistical significance
+
+Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+- The assumptions made should be given (e.g., Normally distributed errors).
+- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+
+- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
+# 8. Experiments compute resources
+
+Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
+- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+# 9. Code of ethics
+
+Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+Answer: [Yes]
+
+Justification: This paper deals with fundamental optimization theory and conforms with the NeurIPS Code of Ethics.
+
+Guidelines:
+
+- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+# 10. Broader impacts
+
+Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+Answer: [NA]
+
+Justification: This is a theoretical paper and the authors do not see any immediate direct societal impact of this paper.
+
+Guidelines:
+
+- The answer NA means that there is no societal impact of the work performed.
+- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+
+- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+# 11. Safeguards
+
+Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+Answer: [NA]
+
+Justification: This theoretical paper does not pose any such risks.
+
+Guidelines:
+
+- The answer NA means that the paper poses no such risks.
+- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+# 12. Licenses for existing assets
+
+Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not use existing assets.
+- The authors should cite the original paper that produced the code package or dataset.
+- The authors should state which version of the asset is used and, if possible, include a URL.
+- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+- If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+# 13. New assets
+
+Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not release new assets.
+
+- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+- The paper should discuss whether and how consent was obtained from people whose asset is used.
+- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+# 14. Crowdsourcing and research with human subjects
+
+Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+# 16. Declaration of LLM usage
+
+Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+Answer: [NA]
+
+Guidelines:
+
+- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
+
+# 10 Comparison with [Lekeufack and Jordan, 2025]
+
+Remark 6 [Lekeufack and Jordan, 2025] consider the COCO problem when predictions about both the cost functions $f_{t}$ and the constraint functions $g_{t}$ are available. With predictions, they show that if the predictions are perfect, $O(1)$ regret and CCV are achievable, while if the predictions are totally wrong, in the worst case the regret and CCV are at most as bad as the result of [Sinha and Vaze, 2024]. An intermediate range of results is also obtained, depending on the quality of the predictions. Essentially, [Lekeufack and Jordan, 2025] use a prediction wrapper over the algorithm of [Sinha and Vaze, 2024] to derive their guarantee.
+
+In this paper, however, we do not assume any predictions, and solve the COCO problem under worst-case input, similar to all the prior work listed in Table 1. Moreover, the presented algorithm is conceptually different from that of [Sinha and Vaze, 2024], and shows for the first time that $O(1)$ or instance dependent CCV is possible while retaining $O(\sqrt{T})$ regret, which is not the case with prior work even for $d = 1$.
+
+Thus, the setting of [Lekeufack and Jordan, 2025] is entirely different and not directly comparable with our results.
+
+# 11 Proof of Lemma 7
+
+Proof: From the convexity of the $f_{t}$'s, for $x^{\star}$ satisfying Assumption (3), we have
+
+$$
+f _ {t} (x _ {t}) - f _ {t} \left(x ^ {\star}\right) \leq \nabla f _ {t} (x _ {t}) ^ {T} \left(x _ {t} - x ^ {\star}\right).
+$$
+
+From the choice of Algorithm 2 for $x_{t + 1}$ , we have
+
+$$
+\begin{array}{l} \left\| x _ {t + 1} - x ^ {\star} \right\| ^ {2} = \left\| \mathcal {P} _ {S _ {t}} (y _ {t}) - x ^ {\star} \right\| ^ {2} \\ \stackrel {(a)} {\leq} \left\| y _ {t} - x ^ {\star} \right\| ^ {2}, \\ = \left\| \mathcal {P} _ {S _ {t - 1}} \left(x _ {t} - \eta_ {t} \nabla f _ {t} \left(x _ {t}\right)\right) - x ^ {\star} \right\| ^ {2}, \\ \stackrel {(b)} {\leq} \left\| \left(x _ {t} - \eta_ {t} \nabla f _ {t} \left(x _ {t}\right)\right) - x ^ {\star} \right\| ^ {2}, \\ \end{array}
+$$
+
+where inequalities $(a)$ and $(b)$ follow since $x^{\star}\in S_{t}$ for all $t$, and projecting onto a convex set containing $x^{\star}$ cannot increase the distance to $x^{\star}$. Hence
+
+$$
+\left\| x _ {t + 1} - x ^ {\star} \right\| ^ {2} \leq \left\| x _ {t} - x ^ {\star} \right\| ^ {2} + \eta_ {t} ^ {2} \left\| \nabla f _ {t} (x _ {t}) \right\| ^ {2} - 2 \eta_ {t} \nabla f _ {t} (x _ {t}) ^ {T} (x _ {t} - x ^ {\star}),
+$$
+
+which, using $\|\nabla f_{t}(x_{t})\| \leq G$ and rearranging, gives
+
+$$
+2 \nabla f _ {t} (x _ {t}) ^ {T} (x _ {t} - x ^ {\star}) \leq \frac {\left\| x _ {t} - x ^ {\star} \right\| ^ {2} - \left\| x _ {t + 1} - x ^ {\star} \right\| ^ {2}}{\eta_ {t}} + \eta_ {t} G ^ {2}.
+$$
+
+Summing this over $t = 1$ to $T$ , we get
+
+$$
+\begin{array}{l} 2 \sum_ {t = 1} ^ {T} \left(f _ {t} \left(x _ {t}\right) - f _ {t} \left(x ^ {\star}\right)\right) \leq 2 \sum_ {t = 1} ^ {T} \nabla f _ {t} (x _ {t}) ^ {T} \left(x _ {t} - x ^ {\star}\right), \\ \leq \sum_ {t = 1} ^ {T} \frac {\left\| x _ {t} - x ^ {\star} \right\| ^ {2} - \left\| x _ {t + 1} - x ^ {\star} \right\| ^ {2}}{\eta_ {t}} + \sum_ {t = 1} ^ {T} \eta_ {t} G ^ {2}, \\ \leq \frac {D ^ {2}}{\eta_ {T}} + G ^ {2} \sum_ {t = 1} ^ {T} \eta_ {t}, \\ \leq O (D G \sqrt {T}), \\ \end{array}
+$$
+
+where the final inequality follows by choosing $\eta_t = \frac{D}{G\sqrt{t}}$ .
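+
+As a sanity check of Lemma 7, the step-size choice $\eta_t = D/(G\sqrt{t})$ can be exercised in a minimal projected online gradient descent loop. The instance below (a 1-D interval with alternating absolute-loss targets, so $G = 1$ and $D = 2$) is a hypothetical illustration, not the nested-set setup of Algorithm 2:
+
+```python
+import math
+
+# Hypothetical 1-D instance: X = [-1, 1], f_t(x) = |x - a_t| with subgradient
+# in {-1, +1}, so G = 1 and D = diameter(X) = 2.
+D, G, T = 2.0, 1.0, 10000
+a = [1.0 if t % 2 == 0 else -1.0 for t in range(T)]  # alternating targets
+
+def proj(x):
+    # Euclidean projection onto the fixed feasible set [-1, 1].
+    return max(-1.0, min(1.0, x))
+
+x, total = 0.0, 0.0
+for t in range(1, T + 1):
+    a_t = a[t - 1]
+    total += abs(x - a_t)                 # loss suffered at round t
+    grad = 1.0 if x > a_t else -1.0       # a subgradient of |x - a_t|
+    eta = D / (G * math.sqrt(t))          # step size from Lemma 7
+    x = proj(x - eta * grad)
+
+# Any fixed comparator u in [-1, 1] pays |u - 1| + |u + 1| = 2 per pair of rounds.
+best = min(sum(abs(u - v) for v in a) for u in (-1.0, 0.0, 1.0))
+regret = total - best
+assert regret <= 3 * D * G * math.sqrt(T)  # consistent with O(DG sqrt(T))
+```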
+
+# 12 Proof of Lemma 8 and Lemma 9
+
+Proof: [Proof of Lemma 8] Recall the definition from (5) that $x_{t} \in \partial S_{t-1}$ and $b_{t} = \mathcal{P}_{S_{t}}(x_{t}) \in S_{t}$. Let $||x_{t} - b_{t}|| = r$. Then, since all the $S_{t}$'s are spheres, along at least one of the $d$ orthogonal canonical basis vectors, $\text{diameter}(S_{t}) \leq \text{diameter}(S_{t-1}) - \frac{r}{\sqrt{d}}$. Since the diameter along each of the $d$ axes is at most $D$, we get the result. We remark that the proof being short and elementary should be seen as a strength.
+
+Proof: [Proof of Lemma 9] The proof is identical to that of Lemma 8.
+
+# 13 Preliminaries for Bounding the CCV in Theorem 12 and Theorem 17
+
+Let $K_{1},\ldots ,K_{T}$ be nested (i.e., $K_{1}\supseteq K_{2}\supseteq K_{3}\supseteq \dots \supseteq K_{T}$) bounded convex subsets of $\mathbb{R}^d$.
+
+Definition 18 Let $\sigma_1 \in K_1$ and $\sigma_{t+1} = \mathcal{P}_{K_{t+1}}(\sigma_t)$ for $t = 1, \ldots, T-1$. Then the curve
+
+$$
+\underline {{\sigma}} = \left\{\left(\sigma_ {1}, \sigma_ {2}\right), \left(\sigma_ {2}, \sigma_ {3}\right), \dots , \left(\sigma_ {T - 1}, \sigma_ {T}\right) \right\}
+$$
+
+is called the projection curve on $K_{1},\ldots ,K_{T}$.
+
+We are interested in upper bounding the quantity
+
+$$
+\Sigma = \max _ {\underline {{\sigma}}} \sum_ {t = 1} ^ {T - 1} \left| \left| \sigma_ {t} - \sigma_ {t + 1} \right| \right|. \tag {6}
+$$
+
+Lemma 19 For a projection curve $\underline{\sigma}$, $\Sigma \leq O(d^{d/2})\, \text{diameter}(K_1)$.
+
+To prove the result we need the following definition.
+
+Definition 20 A curve $\gamma : I \to \mathbb{R}^d$ is called self-expanded, if for every $t$ where $\gamma'(t)$ exists, we have
+
+$$
+\langle \gamma^ {\prime} (t), \gamma (t) - \gamma (u) \rangle \geq 0
+$$
+
+for all $u \in I$ with $u \leq t$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. In words, a curve $\gamma$ starting at a point $x_0$ is self-expanded if, for every $x \in \gamma$ at which the tangent line $\mathsf{T}$ exists, the arc (sub-curve) $(x_0, x)$ is contained in one of the two half-spaces bounded by the hyperplane through $x$ orthogonal to $\mathsf{T}$.
+
+For self-expanded curves the following classical result is known.
+
+Theorem 21 (Manselli and Pucci [1991]) For any self-expanded curve $\gamma$ contained in a closed bounded convex subset of $\mathbb{R}^d$ with diameter $D$, its total length is at most $O(d^{d/2}D)$.
+
+Proof: [Proof of Lemma 19] From Definition 18, the projection curve is
+
+$$
+\underline {{\sigma}} = \left\{\left(\sigma_ {1}, \sigma_ {2}\right), \left(\sigma_ {2}, \sigma_ {3}\right), \dots , \left(\sigma_ {T - 1}, \sigma_ {T}\right) \right\}.
+$$
+
+Let the reverse curve be $\underline{r} = \{r_t\}_{t=0,\dots,T-2}$, where $r_t = (\sigma_{T-t}, \sigma_{T-t-1})$; that is, we read $\underline{\sigma}$ backwards and call it $\underline{r}$. Since $\sigma_t$ is the projection of $\sigma_{t-1}$ on $K_t$, each piece-wise linear segment $(\sigma_t, \sigma_{t+1})$ is a straight line and hence differentiable except at its end points. Moreover, since each $\sigma_t$ is obtained by projecting $\sigma_{t-1}$ onto $K_t$ and $K_{t+1} \subseteq K_t$, the projection hyperplane $F_t$ that passes through $\sigma_t = \mathcal{P}_{K_t}(\sigma_{t-1})$ and is perpendicular to $\sigma_t - \sigma_{t-1}$ separates the two sub-curves $\{(\sigma_1, \sigma_2), (\sigma_2, \sigma_3), \ldots, (\sigma_{t-1}, \sigma_t)\}$ and $\{(\sigma_t, \sigma_{t+1}), (\sigma_{t+1}, \sigma_{t+2}), \ldots, (\sigma_{T-1}, \sigma_T)\}$.
+
+Thus, for each segment $r_{\tau}$, at each point where the curve is differentiable, the initial portion $r_1, \ldots, r_{\tau - 1}$ lies on one side of the hyperplane that passes through the point and is perpendicular to $r_{\tau}$. We conclude that the curve $\underline{r}$ is self-expanded.
+
+As a result, Theorem 21 implies that the length of $\underline{r}$ is at most $O(d^{d/2}\mathrm{diameter}(K_1))$, and the result follows since the length of $\underline{r}$ is the same as that of $\underline{\sigma}$, which is $\Sigma$.
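+
+A small numeric illustration of Lemma 19 for $d = 2$ (so $d^{d/2} = 2$): build nested disks, trace the projection curve of Definition 18, and compare its total length against $2\,\mathrm{diameter}(K_1)$. The disks below are hypothetical test data; the center drift is kept smaller than the radius decrease at each step, which keeps the disks nested.
+
+```python
+import math
+
+def project_disk(p, c, r):
+    # Euclidean projection of point p onto the disk with center c, radius r.
+    dx, dy = p[0] - c[0], p[1] - c[1]
+    n = math.hypot(dx, dy)
+    if n <= r:
+        return p
+    return (c[0] + r * dx / n, c[1] + r * dy / n)
+
+T = 200
+disks, cx, cy, r = [], 0.0, 0.0, 1.0
+for t in range(T):
+    disks.append(((cx, cy), r))
+    shrink = 0.5 / T
+    cx += 0.3 * shrink * math.cos(0.2 * t)  # drift < shrink keeps nesting
+    cy += 0.3 * shrink * math.sin(0.2 * t)
+    r -= shrink
+
+sigma = (1.0, 0.0)          # sigma_1 on the boundary of K_1
+length = 0.0
+for (c, r) in disks[1:]:    # sigma_{t+1} = P_{K_{t+1}}(sigma_t)
+    nxt = project_disk(sigma, c, r)
+    length += math.hypot(nxt[0] - sigma[0], nxt[1] - sigma[1])
+    sigma = nxt
+
+diam_K1 = 2 * disks[0][1]
+assert length <= 2 * diam_K1   # Sigma <= O(d^{d/2}) diameter(K_1) with d = 2
+```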
+
+# 14 Proof of Theorem 12
+
+Proof: Recall that $d = 2$, and recall the definition of $F_{t}$ from Definition 10. Let the center be $\mathsf{c} = \mathcal{P}_{S_1}(x_1)$, and let $t_{\mathrm{orth}}$ be the earliest $t$ for which $\angle (F_t,F_1) = \pi$.
+
+Initialize $\kappa = 1$ , $s(1) = 1$ , $\tau(1) = 1$ .
+
+BeginProcedure Step 1: Definition of Phase $\kappa$. Consider
+
+$$
+\tau (\kappa) = \arg \max _ {s (\kappa) < t \leq t _ {\mathrm{orth}},\ \angle (F _ {s (\kappa)}, F _ {t}) \leq \pi / 4} t.
+$$
+
+If there is no such $\tau (\kappa)$, then Phase $\kappa$ ends and is defined as Empty, and $s(\kappa + 1) = s(\kappa) + 1$.
+
+Else If $\angle \left(F _ {\tau (\kappa)}, F _ {1}\right) = \pi$, Exit.
+
+Else, $s (\kappa + 1) = \tau (\kappa)$.
+
+End If
+
+Increment $\kappa \leftarrow \kappa + 1$, and go to Step 1.
+
+# EndProcedure
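+
+Under one possible reading of the procedure above, with the angles $\angle(F_1, F_t)$ supplied directly as a non-decreasing sequence (so that, by monotonicity, $\angle(F_{s}, F_{t}) = \angle(F_1, F_t) - \angle(F_1, F_s)$), the phase partition can be sketched as follows. The angle sequence is a hypothetical example, and the phase counts of Lemma 23 are checked at the end:
+
+```python
+import math
+
+def phases(theta):
+    """theta[t] = angle(F_1, F_{t+1}), non-decreasing with theta[0] = 0.
+    Returns (start, end, kind) triples, kind in {'non-empty', 'empty'}."""
+    t_orth = next(t for t, ang in enumerate(theta) if ang >= math.pi)
+    out, s = [], 0
+    while s < t_orth:
+        cand = [t for t in range(s + 1, t_orth + 1)
+                if theta[t] - theta[s] <= math.pi / 4]
+        if not cand:
+            out.append((s, s + 1, 'empty'))     # empty phase: advance one step
+            s += 1
+        else:
+            tau = max(cand)                     # tau(kappa) of Step 1
+            out.append((s, tau, 'non-empty'))
+            if theta[tau] >= math.pi:           # reached t_orth: Exit
+                break
+            s = tau                             # s(kappa+1) = tau(kappa)
+    return out
+
+theta = [0.0, 0.3, 0.6, 1.5, 1.8, 2.7, 3.0, 3.2]  # hypothetical, reaches pi
+ph = phases(theta)
+assert sum(1 for p in ph if p[2] == 'non-empty') <= 4   # Lemma 23
+assert sum(1 for p in ph if p[2] == 'empty') <= 4
+```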
+
+Example 22 To better understand the definition of phases, consider Fig. 5, where the largest $t$ for which the angle between $F_{t}$ and $F_{1}$ is at most $\pi /4$ is 3. Thus, $\tau (1) = 3$, i.e., phase 1 explores till time $t = 3$ and then ends. The starting hyperplane to consider in phase 2 is $s(2) = 3$, and given that the angle between $F_{3}$ and the next hyperplane $F_{4}$ is more than $\pi /4$, phase 2 is empty and ends by exploring till $t = 4$. The starting hyperplane to consider in phase 3 is $s(3) = 4$, and the process goes on. The first time $t$ such that the angle between $F_{1}$ and $F_{t}$ is $\pi$ is $t = 6$; thus $t_{\text{orth}} = 6$, and the process stops at time $t = 6$. This also implies that $S_{6} \subset F_{1}$. Since the $S_{t}$'s are nested, $S_{t} \subset F_{1}$ for all $t \geq 6$. Hence the total CCV over $t \geq t_{\text{orth}}$ is at most $GD$.
+
+The main idea in defining phases is to partition the whole space into empty and non-empty regions, where in each non-empty region the starting and ending hyperplanes make an angle of at most $\pi /4$, while in an empty phase the starting and ending hyperplanes make an angle of at least $\pi /4$. Thus, we get the following simple result.
+
+Lemma 23 For $d = 2$ , there can be at most 4 non-empty and 4 empty phases.
+
+The proof is immediate from the definition of the phases, since any consecutively occurring pair of non-empty and empty phases exhausts an angle of at least $\pi /4$.
+
+Remark 7 Since we are in $d = 2$ dimensions, for all $t \geq t_{\text{orth}}$ the movement is along the hyperplane $F_1$, and thus the constraint violation accumulated after time $t_{\text{orth}}$ is at most $GD$. Thus, in the phase definition above we have only considered times till $t_{\text{orth}}$, and we only need to upper bound the CCV till time $t_{\text{orth}}$.
+
+
+Figure 5: Figure corresponding to Example 22.
+
+We next define the following required quantities.
+
+Definition 24 With respect to the quantities defined for Algorithm 2, let for a non-empty phase $\kappa$
+
+$$
+r _ {\max } (\kappa) = \max _ {s (\kappa) < t \leq \tau (\kappa)} || y _ {t} - \mathsf {c} || \quad \text{and} \quad t ^ {\star} (\kappa) = \arg \max _ {s (\kappa) < t \leq \tau (\kappa)} || y _ {t} - \mathsf {c} ||.
+$$
+
+$t^{\star}(\kappa)$ is the time index belonging to phase $\kappa$ for which $y_{t}$ is farthest from $\mathsf{c}$.
+
+Definition 25 A non-empty phase $\kappa$ consists of time slots $\mathcal{T}(\kappa) = [\tau (\kappa -1),\tau (\kappa)]$ and the angle $\angle (F_{t_1},F_{t_2})\leq \pi /4$ for all $t_1,t_2\in \mathcal{T}(\kappa)$ . Using Definition 24, we partition $\mathcal{T}(\kappa)$ as $\mathcal{T}(\kappa) = \mathcal{T}^{-}(\kappa)\cup \mathcal{T}^{+}(\kappa)$ , where $\mathcal{T}^{-}(\kappa) = [\tau (\kappa -1) + 1,t^{\star}(\kappa) + 1]$ and $\mathcal{T}^{+}(\kappa) = [t^{\star}(\kappa) + 2,\tau (\kappa)]$ .
+
+Thus, $\mathcal{T}(\kappa)$ and $\mathcal{T}(\kappa + 1)$ have one common time slot.
+
+Definition 26 [Definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}^{-}(\kappa)$ ]. Let $z_{t^{\star}(\kappa) + 1} = x_{t^{\star}(\kappa) + 1}$. For $t \in \mathcal{T}^{-}(\kappa) \backslash \{t^{\star}(\kappa) + 1\}$, define $z_{t}(\kappa)$ inductively as follows: $z_{t}(\kappa)$ is the pre-image of $z_{t + 1}(\kappa)$ on $F_{t - 1}$, i.e., the point of $F_{t-1}$ whose projection onto $F_{t}$ is $z_{t + 1}(\kappa)$.
+
+Definition 27 [Definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}^{+}(\kappa)$ ]. For $t \in \mathcal{T}^{+}(\kappa)$ , define $z_{t}(\kappa)$ inductively as follows. $z_{t}(\kappa)$ is the projection of $z_{t-1}(\kappa)$ on $F_{t-1}$ .
+
+See Fig. 6 for a visual illustration of $t^\star (\kappa)$ and $z_{t}(\kappa)$ .
+
+
+Figure 6: Illustration of the definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}(\kappa)$. In this example, for phase 1, $t^{\star}(1) = 3$ since the distance of $y_{3}$ from $\mathsf{c}$ is the farthest for phase 1, which consists of time slots $\mathcal{T}(1) = \{2,3\}$. Hence $z_{t^{\star}(1) + 1}(1) = x_{4}$. For $t \in \mathcal{T}(1) \backslash \{t^{\star}(1) + 1\}$, the $z_{t}(1)$ are such that $z_{t + 1}(1)$ is the projection of $z_{t}(1)$ onto $F_{t}$.
+
+The main idea behind defining $z_{t}(\kappa)$ 's is as follows. For each non-empty phase, we will construct a projection curve (Definition 18) using points $z_{k}$ such that the length of the projection curve upper bounds the CCV of Algorithm 2 (shown in Lemma 33), and then use Lemma 19 to upper bound the length of the projection curve.
+
+Definition 28 [Definition of $S_t'$ for a non-empty phase $\kappa$ :] $S_{t^{\star}(\kappa) + 1}' = S_{t^{\star}(\kappa) + 1}$. For $t \in \mathcal{T}^{-}(\kappa) \backslash \{t^{\star}(\kappa) + 1\}$, $S_t'$ is the convex hull of $z_{t + 1}(\kappa) \cup S_t \cup S_{t + 1}'(\kappa)$. For $t \in \mathcal{T}^{+}(\kappa)$, $S_t' = S_t$. See Fig. 7.
+
+Lemma 29 For a non-empty phase $\kappa$ , for any $t \in \mathcal{T}(\kappa)$ , $S_{t+1}' \subseteq S_t'$ , i.e. they are nested.
+
+Definition 30 For a non-empty phase, $\chi (\kappa) = S_{\tau (\kappa -1)}^{\prime}\cap \mathcal{H}_{\tau (\kappa)}^{+}$ , where $\mathcal{H}_{\tau (\kappa)}^{+}$ has been defined in Definition 10.
+
+Definition 31 [New violations for $t \in \mathcal{T}(\kappa)$ ]: For a non-empty phase $\kappa$, for $t \in \mathcal{T}(\kappa) \setminus \{\tau(\kappa - 1)\}$, let
+
+$$
+v _ {t} (\kappa) = \left| \left| z _ {t} (\kappa) - z _ {t - 1} (\kappa) \right| \right|.
+$$
+
+
+Figure 7: Definition of the $S_{t}'$'s, where the $U_{t}$ are the extra regions added to $S_{t}$ to get $S_{t}'$.
+
+Lemma 32 For each non-empty phase $\kappa$, all the $z_{t}(\kappa)$'s for $t \in \mathcal{T}(\kappa)$ belong to $\mathcal{B}(\mathsf{c}, \sqrt{2}D)$, where $\mathcal{B}(\mathsf{c}, r)$ is the ball of radius $r$ centered at $\mathsf{c}$. In other words, $\chi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$.
+
+Proof: Recall that for a non-empty phase $\kappa$, $\mathcal{T}(\kappa) = \mathcal{T}^{-}(\kappa) \cup \mathcal{T}^{+}(\kappa)$. We first argue about $t \in \mathcal{T}^{-}(\kappa)$. By definition, $z_{t^{\star}(\kappa) + 1} = x_{t^{\star}(\kappa) + 1}$ and $x_{t^{\star}(\kappa) + 1} \in S_{t^{\star}(\kappa)}$. Thus, $z_{t^{\star}(\kappa) + 1} \in \mathcal{B}(\mathsf{c}, \sqrt{2} D)$. Next, we argue for $t \in \mathcal{T}^{-}(\kappa) \backslash \{t^{\star}(\kappa) + 1\}$. Recall that the diameter of $\mathcal{X}$ is $D$, and that $y_t \in S_{t-1}$ from Algorithm 2. Thus, for any non-empty phase $\kappa$, the distance from $\mathsf{c}$ to the farthest $y_t$ belonging to phase $\kappa$ is at most $D$, i.e., $r_{\max}(\kappa) \leq D$. Let $p(\kappa)$ be the pre-image of $z_{t^{\star}(\kappa) + 1}(\kappa)$ with respect to $F_{s(\kappa)}$ (the base hyperplane with respect to which all hyperplanes of phase $\kappa$ make an angle of at most $\pi/4$), i.e., the point whose projection onto $F_{s(\kappa)}$ is $z_{t^{\star}(\kappa) + 1}(\kappa)$. From the definition of a non-empty phase, the angle between $F_{s(\kappa)}$ and $F_t$ for $t \in \mathcal{T}(\kappa)$ is at most $\pi/4$. Thus, the distance of $p(\kappa)$ from $\mathsf{c}$ is at most $\sqrt{2} D$.
+
+Consider the 'triangle' $\Pi(\kappa)$ that is the convex hull of $\mathsf{c}$, $z_{t^{\star}(\kappa) + 1}(\kappa)$ and $p(\kappa)$. Given that the angle between $F_{t^{\star}(\kappa)}$ and $F_{t^{\star}(\kappa) - 1}$ is at most $\pi /4$, the argument above implies that $z_{t}(\kappa)\in \Pi (\kappa)$ for $t = t^{\star}(\kappa)$. For $t = t^{\star}(\kappa) - 1$, $z_{t}(\kappa)\in F_{t - 1}$ is the projection of $z_{t - 1}(\kappa)$ onto $S_{t - 1}^{\prime}$. This implies that the distance of $z_{t}(\kappa)$ (for $t = t^{\star}(\kappa) - 1$) from $\mathsf{c}$ is at most
+
+$$
+\frac {D}{\cos \left(\alpha_ {t , t ^ {\star} (\kappa)}\right) \cos \left(\alpha_ {t ^ {\star} (\kappa) , t ^ {\star} (\kappa) + 1}\right)},
+$$
+
+where $\alpha_{t_1,t_2}$ is the angle between $F_{t_1}$ and $F_{t_2}$ . From the monotonicity of angles $\theta_t$ (Definition 11), and the definition of a non-empty phase, we have that $\alpha_{t,t^{\star}(\kappa)} + \alpha_{t^{\star}(\kappa),t^{\star}(\kappa) + 1} \leq \pi /4$ and $\alpha_{t,t^{\star}(\kappa)} \geq 0$ , $\alpha_{t^{\star}(\kappa),t^{\star}(\kappa) + 1} \geq 0$ . Next, we appeal to the identity
+
+$$
+\cos (A + B) \leq \cos (A) \cos (B) \tag {7}
+$$
+
+where $A + B \leq \pi / 4$ , to claim that $z_{t}(\kappa) \in \Pi(\kappa)$ for $t = t^{\star}(\kappa) - 1$ .
+
+Iteratively using this argument while invoking the identity (7) gives the result that for any $t \in \mathcal{T}^{-}(\kappa)$ , we have that $z_{t}(\kappa)$ belongs to $\Pi(\kappa)$ . Since $\Pi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$ , we have the claim for all $t \in \mathcal{T}^{-}(\kappa)$ .
+
+By definition, the $z_{t}(\kappa)$ for $t \in \mathcal{T}^{+}(\kappa)$ belong to $S_{t-1} \subseteq S_{1}$. Thus, their distance from $\mathsf{c}$ is at most $D$.
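+
+Identity (7), used repeatedly in the proof above, can be checked numerically for $A, B \geq 0$ with $A + B \leq \pi/4$ (it in fact holds whenever $A + B \leq \pi/2$, since $\cos(A+B) = \cos A \cos B - \sin A \sin B$ and the subtracted term is non-negative):
+
+```python
+import math
+import random
+
+# Numeric spot-check of identity (7): cos(A + B) <= cos(A) * cos(B).
+random.seed(0)
+for _ in range(1000):
+    A = random.uniform(0, math.pi / 4)
+    B = random.uniform(0, math.pi / 4 - A)     # so A + B <= pi/4
+    assert math.cos(A + B) <= math.cos(A) * math.cos(B) + 1e-12
+```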
+
+Lemma 33 For each non-empty phase $\kappa$ and for $t \in \mathcal{T}(\kappa)$, the violation $v_{t}(\kappa) \geq \mathrm{dist}(x_{t}, S_{t})$, where $\mathrm{dist}(x_{t}, S_{t})$ is the original violation.
+
+Proof: By construction of any non-empty phase $\kappa$, for $t \in \mathcal{T}(\kappa)$ both $x_{t}$ and $z_{t}(\kappa)$ belong to $F_{t-1}$. Moreover, by construction, the distance of $z_{t}(\kappa)$ from $\mathsf{c}$ is at least as large as the distance of $x_{t}$ from $\mathsf{c}$. Thus, using the monotonicity property of the angles $\theta_{t}$ (Definition 11), we get the result. See Fig. 6 for a visual illustration.
+
+For each non-empty phase $\kappa$ , by definition, the curve defined by sequence $z_{t}(\kappa)$ for $t \in \mathcal{T}(\kappa)$ is a projection curve (Definition 18) on sets $S_{t}^{\prime}(\kappa)$ (note that $S_{t}^{\prime}(\kappa)$ 's are nested from Lemma 29). Moreover, for all $t \in \mathcal{T}(\kappa)$ , set $S_{t}^{\prime}(\kappa) \subset \chi(\kappa)$ which is a bounded convex set. Thus, for $d = 2$ from Lemma 19 the length of curve $\underline{z}(\kappa) = \{(z_{t}(\kappa), z_{t+1}(\kappa))\}_{t \in \mathcal{T}(\kappa)}$
+
+$$
+\sum_ {t \in \mathcal {T} (\kappa)} v _ {t} (\kappa) \leq 2\, \text{diameter}(\chi (\kappa)). \tag {8}
+$$
+
+By definition, the number of non-empty phases till time $t_{\mathrm{orth}}$ is at most 4. Moreover, in each non-empty phase, $\chi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$ from Lemma 32.
+
+Thus, from (8), we have that
+
+$$
+\begin{array}{l} \sum_ {\text{Phase } \kappa \text{ is non-empty}} \sum_ {t \in \mathcal {T} (\kappa)} v _ {t} (\kappa) \leq \sum_ {\text{Phase } \kappa \text{ is non-empty}} 2\, \text{diameter}(\chi (\kappa)) \\ \leq 8\, \text{diameter}(\mathcal {B} (\mathsf{c}, \sqrt {2} D)) \leq O (D). \tag {9} \\ \end{array}
+$$
+
+Using Lemma 33, we get
+
+$$
+\sum_ {\text{Phase } \kappa \text{ is non-empty}} \sum_ {t \in \mathcal {T} (\kappa)} \mathrm{dist}\left(x _ {t}, S _ {t}\right) \leq O (D). \tag {10}
+$$
+
+For any empty phase, the constraint violation corresponds to the line segment $(x_{t},\mathcal{P}_{S_{t}}(x_{t}))$ (Algorithm 2) crossing it, which is a straight line of length at most $O(D)$. Moreover, the total number of empty phases is a constant (Lemma 23). Thus, the length of the curve $(x_{t},\mathcal{P}_{S_{t}}(x_{t}))$ for Algorithm 2 corresponding to all empty phases is at most $O(D)$.
+
+Recall from (4) that the CCV is at most $G$ times $\mathrm{dist}(x_t, S_t)$. Thus, from (10), the total violation incurred by Algorithm 2 over the non-empty phases is at most $O(GD)$, while that over the empty phases is also at most $O(GD)$. Finally, accounting for the very first violation $\mathrm{dist}(x_1, S_1) \leq D$ and the fact that the CCV after time $t_{\mathrm{orth}}$ (Remark 7) is at most $GD$, we get that the total constraint violation $\mathrm{CCV}_{[1:T]}$ of Algorithm 2 is at most $O(GD)$.
+
+
+
+# 15 Proof of Theorem 14
+
+Proof: We need the following preliminaries.
+
+Definition 34 Let $K$ be a non-empty convex bounded set in $\mathbb{R}^d$ . Let $u$ be a unit vector, and $\ell_u$ a line through the origin parallel to $u$ . Let $K_u$ be the orthogonal projection of $K$ onto $\ell_u$ , with length $|K_u|$ . The mean width of $K$ is defined as
+
+$$
+W (K) = \frac {1}{V _ {d}} \int_ {\mathbb {S} _ {1} ^ {d}} | K _ {u} | d u, \tag {11}
+$$
+
+where $\mathbb{S}_1^d$ is the unit sphere in $d$ dimensions and $V_{d}$ its $(d - 1)$ -dimensional Lebesgue measure.
+
+The following is immediate.
+
+$$
+0 \leq W (K) \leq \operatorname {d i a m e t e r} (K). \tag {12}
+$$
+
+Lemma 35 (Eggleston [1966]) For $d = 2$,
+
+$$
+W (K) = \frac {\text{Perimeter}(K)}{\pi}.
+$$
+
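+Lemma 35 can be checked numerically against Definition 34: for $d = 2$, $V_2 = 2\pi$ is the length of the unit circle, and $|K_u|$ is the length of the projection of $K$ onto direction $u$. The test body below, the square $[-1,1]^2$ with perimeter 8, is a hypothetical choice:
+
+```python
+import math
+
+# Mean width of the square [-1,1]^2 via Definition 34, compared to
+# Perimeter(K)/pi from Lemma 35 (d = 2).
+verts = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
+n_dirs = 100000
+acc = 0.0
+for k in range(n_dirs):
+    th = 2 * math.pi * k / n_dirs
+    u = (math.cos(th), math.sin(th))
+    proj = [u[0] * vx + u[1] * vy for (vx, vy) in verts]
+    acc += max(proj) - min(proj)                  # |K_u|
+W = acc * (2 * math.pi / n_dirs) / (2 * math.pi)  # (1/V_2) * integral over S^1
+perimeter = 8.0
+assert abs(W - perimeter / math.pi) < 1e-3
+```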
+
+Figure 8: Figure representing the cone $C_{w_t}(c_t)$ that contains the convex hull of $m_t$ and $S_t$ with respect to the unit vector $w_t$. $u$ is a unit vector perpendicular to $H_u$, a supporting hyperplane of $\mathcal{C}_t$ at $m_t$ such that $\mathcal{C}_t \cap H_u = \{m_t\}$ and $u^T(x_t - m_t) \geq 0$.
+
+Lemma 35 implies that, in general, $W(K) \neq W(K_1) + W(K_2)$ even if $K_1 \cup K_2 = K$ and $K_1 \cap K_2 = \emptyset$.
+
+Recall from (5) that $x_{t} \in \partial S_{t-1}$ and $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$ , and $m_{t}$ is the mid-point of $x_{t}$ and $b_{t}$ , i.e. $m_{t} = \frac{x_{t} + b_{t}}{2}$ . Moreover, the convex sets $S_{t}$ 's are nested, i.e., $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ . To prove Theorem 14 we will bound the rate at which $W(S_{t})$ (Definition 34) decreases as a function of the length $||x_{t} - b_{t}||$ .
+
+From Definition 13, recall that $\mathcal{C}_t$ is the convex hull of $m_t\cup S_t$. We also need to define $\mathcal{C}_t^-$ as the convex hull of $x_{t}\cup S_{t}$. Since $S_{t}\subseteq \mathcal{C}_{t}$ and $\mathcal{C}_t^{-}\subseteq S_{t - 1}$ (as $S_{t - 1}$ is convex and $x_{t}\in S_{t - 1}$), we have
+
+$$
+W \left(S _ {t}\right) - W \left(S _ {t - 1}\right) \leq W \left(\mathcal {C} _ {t}\right) - W \left(\mathcal {C} _ {t} ^ {-}\right). \tag {13}
+$$
+
+Definition 36 $\Delta_t = W(\mathcal{C}_t) - W(\mathcal{C}_t^-)$ .
+
+The main ingredient of the proof is the following lemma bounding $\Delta_t$, whose proof is provided after completing the proof of Theorem 14.
+
+Lemma 37
+
+$$
+\Delta_ {t} \leq - V _ {d - 1} \frac {\left| \left| x _ {t} - b _ {t} \right| \right|}{2 V _ {d} (d - 1)} \left(c _ {t} ^ {\star}\right) ^ {d},
+$$
+
+where $c_t^{\star}$ has been defined in Definition 13.
+
+Recalling that $c^{\star} = \min_{t}c_{t}^{\star}$ from Definition 13, and combining Lemma 37 with (12) and (13), we get that
+
+$$
+\sum_ {t = 1} ^ {T} \left| \left| x _ {t} - b _ {t} \right| \right| \leq \frac {2 V _ {d} (d - 1)}{V _ {d - 1}} \left(\frac {1}{c ^ {\star}}\right) ^ {d} \text {d i a m e t e r} \left(S _ {1}\right),
+$$
+
+since $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ . Recalling that $\mathrm{diameter}(S_1) \leq D$ , Theorem 14 follows.
+
+Proof: [Proof of Lemma 37]
+
+Let $H_{u}$ be the hyperplane perpendicular to a vector $u$. Let $\mathcal{U}_0$ be the set of unit vectors $u$ such that $H_{u}$ is a supporting hyperplane to $\mathcal{C}_t$ at the point $m_t$, with $\mathcal{C}_t \cap H_u = \{m_t\}$ and $u^T(x_t - m_t) \geq 0$. See Fig. 8 for reference.
+
+Since $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$, and $m_{t}$ is the mid-point of $x_{t}$ and $b_{t}$, for $u \in \mathcal{U}_0$ the hyperplane $H_{u}^{\prime}$ containing $x_{t}$ and parallel to $H_{u}$ is a supporting hyperplane for $\mathcal{C}_t^-$.
+
+Thus, using the definition of $K_{u}$ from (11),
+
+$$
+\Delta_ {t} \leq \frac {1}{V _ {d}} \int_ {\mathcal {U} _ {0}} \left(\left| \mathcal {C} _ {t, u} \right| - \left| \mathcal {C} _ {t, u} ^ {-} \right|\right) d u = - \frac {\left| \left| x _ {t} - b _ {t} \right| \right|}{2 V _ {d}} \int_ {\mathcal {U} _ {0}} u ^ {T} \frac {\left(x _ {t} - m _ {t}\right)}{\left| \left| x _ {t} - m _ {t} \right| \right|} d u, \tag {14}
+$$
+
+since $||x_{t} - m_{t}|| = ||x_{t} - b_{t}|| / 2$.
+
+Recall the definition of $C_{w_t^{\star}}(c_t^{\star})$ from Definition 13, which implies that $\mathcal{C}_t$, the convex hull of $m_t$ and $S_t$, is contained in $C_{w_t^{\star}}(c_t^{\star})$. Next, let $\mathcal{U}_1$ be the set of unit vectors $u$ such that $H_u$ is a supporting hyperplane to $C_{w_t^{\star}}(c_t^{\star})$ at the point $m_t$ with $u^T(x_t - m_t) \geq 0$. Since $\mathcal{C}_t \subseteq C_{w_t^{\star}}(c_t^{\star})$ by definition, it follows that $\mathcal{U}_1 \subset \mathcal{U}_0$.
+
+Thus, from (14)
+
+$$
+\Delta_ {t} \leq - \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d}} \int_ {\mathcal {U} _ {1}} u ^ {T} \frac {\left(x _ {t} - m _ {t}\right)}{\left\| x _ {t} - m _ {t} \right\|} d u. \tag {15}
+$$
+
+Recalling the definition of $w_{t}^{\star}$ (Definition 13), vector $u \in \mathcal{U}_1$ can be written as
+
+$$
+u = \lambda u _ {\perp} + \sqrt {1 - \lambda^ {2}} w _ {t} ^ {\star},
+$$
+
+where $u_{\perp}^{T}w_{t}^{\star} = 0$, $|u_{\perp}| = 1$, and, since $u\in \mathcal{U}_1$,
+
+$$
+0 \leq \lambda = \sqrt {1 - \left(u ^ {T} w _ {t} ^ {\star}\right) ^ {2}} = u ^ {T} u _ {\perp} \leq c _ {t} ^ {\star}.
+$$
+
+Let $\mathcal{S}_{\perp} = \{u_{\perp}:|u_{\perp}| = 1,u_{\perp}^{T}w_{t}^{\star} = 0\}$, and let $du_{\perp}$ denote the $(d - 2)$-dimensional Lebesgue measure on $\mathcal{S}_{\perp}$.
+
+It is easy to verify that $du = \lambda^{d - 2}(1 - \lambda^2)^{-1 / 2}d\lambda du_{\perp}$ and hence from (15)
+
+$$
+\Delta_ {t} \leq - \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d}} \int_ {0} ^ {c _ {t} ^ {\star}} \lambda^ {d - 2} \left(1 - \lambda^ {2}\right) ^ {- 1 / 2} d \lambda \int_ {\mathcal {S} _ {\perp}} \left(\lambda u _ {\perp} + \sqrt {1 - \lambda^ {2}} w _ {t} ^ {\star}\right) ^ {T} \frac {\left(x _ {t} - m _ {t}\right)}{\left\| x _ {t} - m _ {t} \right\|} d u _ {\perp}. \tag {16}
+$$
+
+Note that $\int_{\mathcal{S}_{\perp}}u_{\perp}\,du_{\perp} = 0$. Thus,
+
+$$
+\begin{array}{l} \Delta_ {t} \leq - \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d}} \frac {(w _ {t} ^ {\star}) ^ {T} (x _ {t} - m _ {t})}{\left\| x _ {t} - m _ {t} \right\|} \int_ {0} ^ {c _ {t} ^ {\star}} \lambda^ {d - 2} (1 - \lambda^ {2}) ^ {- 1 / 2} \sqrt {1 - \lambda^ {2}}\, d \lambda \int_ {\mathcal {S} _ {\perp}} d u _ {\perp}, \\ \stackrel {(a)} {\leq} - V _ {d - 1} \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d}} \frac {(w _ {t} ^ {\star}) ^ {T} (x _ {t} - m _ {t})}{\left\| x _ {t} - m _ {t} \right\|} \int_ {0} ^ {c _ {t} ^ {\star}} \lambda^ {d - 2} d \lambda , \\ \stackrel {(b)} {\leq} - V _ {d - 1} \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d} (d - 1)} c _ {t} ^ {\star} \left(c _ {t} ^ {\star}\right) ^ {d - 1}, \\ = - V _ {d - 1} \frac {\left\| x _ {t} - b _ {t} \right\|}{2 V _ {d} (d - 1)} \left(c _ {t} ^ {\star}\right) ^ {d}, \tag {17} \\ \end{array}
+$$
+
+where $(a)$ follows since $\int_{\mathcal{S}_{\perp}}du_{\perp} = V_{d - 1}$ by definition, and $(b)$ follows since $\int_{0}^{c_t^{\star}} \lambda^{d-2} d\lambda = (c_t^{\star})^{d-1}/(d-1)$ and $\frac{(w_t^\star)^T(x_t - m_t)}{||x_t - m_t||}\geq c_t^\star$ from Definition 13.
+
+
+
+# 16 Proof of Theorem 16
+
+Proof: Since $\mathrm{CCV}(t)$ is a monotone non-decreasing function, let $t_{\mathrm{min}}$ be the largest time until which Algorithm 2 is followed by Switch. The regret guarantee is easy to prove: from Theorem 15, the regret until time $t_{\mathrm{min}}$ is at most $O(\sqrt{t_{\mathrm{min}}})$. Moreover, from time $t_{\mathrm{min}}$ till $T$, from Theorem 5, the regret of Algorithm 1 is at most $O(\sqrt{T - t_{\mathrm{min}}})$. Thus, the overall regret of Switch is at most $O(\sqrt{T})$.
+
+For the CCV, with Switch, until time $t_{\mathrm{min}}$ we have $\mathrm{CCV}(t_{\mathrm{min}}) \leq \sqrt{T} \log T$. At time $t_{\mathrm{min}}$, Switch starts to use Algorithm 1, which has the following appealing property from (8) of Sinha and Vaze [2024]: when Algorithm 1 is started at time $t_{\mathrm{min}}$ with the reset $\mathrm{CCV}(t_{\mathrm{min}}) = 0$, then for any $t \geq t_{\mathrm{min}}$,
+
+$$
+\Phi \left(\mathrm {C C V} (t)\right) + \operatorname {R e g r e t} _ {t} \left(x ^ {\star}\right) \leq \sqrt {\sum_ {\tau = t _ {\min }} ^ {t} \left(\Phi^ {\prime} \left(\mathrm {C C V} (\tau)\right)\right) ^ {2}} + \sqrt {t - t _ {\min }}. \tag {18}
+$$
+
+where $\beta = (2GD)^{-1}$, $V = 1$, $\lambda = \frac{1}{2\sqrt{T}}$, and $\Phi(x) = \exp(\lambda x) - 1$. We trivially have $\operatorname{Regret}_t(x^\star) \geq -\frac{GDt}{2GD} = -\frac{t}{2}$. Hence, from (18), we have that for any $t \geq t_{\min}$,
+
+$$
+\mathrm {C C V} _ {[ t _ {\min }, T ]} \leq 4 G D \ln (2 (1 + 2 T)) \sqrt {T}.
+$$
+
+Since, as argued before, with Switch, $\mathrm{CCV}(t_{\mathrm{min}}) \leq \sqrt{T} \log T$, we get that $\mathrm{CCV}_{[1:T]} \leq O(\sqrt{T} \log T)$.
+
+# 17 Proof of Theorem 17
+
+Proof: Clearly, with $f_{t} \equiv 0$ for all $t$, with Algorithm 2, $y_{t} = x_{t}$ and the successive $x_{t}$'s are such that $x_{t + 1} = \mathcal{P}_{S_t}(x_t)$. Thus, the curve $\underline{x} = (x_1,x_2),(x_2,x_3),\ldots,(x_{T - 1},x_T)$ formed by Algorithm 2 for OCS is a projection curve (Definition 18) on $S_{1} \supseteq \cdots \supseteq S_{T}$, and the result follows from Lemma 19 and the fact that $\mathrm{diameter}(S_1) \leq D$.
\ No newline at end of file
diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/images.zip b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..afec3243ab6cc39a448619655bc9dd6aeccbaeef
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca9d4f2d687c42fb34e4d26541b9705cf5bc6aa660dbdeea3a2cf99a8fc24c10
+size 612389
diff --git a/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/layout.json b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2c1c90fcd666f83c08096d8ce91b08fc9d1fd57c
--- /dev/null
+++ b/NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a408d2ac8e02a824373173e982d5c3ca5544de2533ff839947c19cc9d3605fc6
+size 1592059
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_content_list.json b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..495c4ed0ac7b99d6dc9cc0f1603ce4e4aa85c6f7
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30b697506c7a7e6b65857ebdbb1d5d59dd5b32304bcb133decf334ee8ea12bb4
+size 234050
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_model.json b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..731513b2126cdf7862bc10f061aebd61ea3b2dc3
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:221e8b74fe1916827a63487ba61f393329ed049114fb0879061c85ff32b3a7d4
+size 301627
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_origin.pdf b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a835ad80e286be996b5ccac4a7a558bd67f8a3c4
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cea9277e6a0a94336e488f6776e6ad1e6daba430ec5e0d79f637329c4e941df
+size 830885
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/full.md b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..58743f39825ba8ec7c549d8b478657ff993bebac
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/full.md
@@ -0,0 +1,1120 @@
+# Q#: Provably Optimal Distributional RL for LLM Post-Training
+
+Jin Peng Zhou $^{1}$ , Kaiwen Wang $^{1}$ , Jonathan Chang $^{4}$ , Zhaolin Gao $^{1}$ , Nathan Kallus $^{1,3}$ , Kilian Q. Weinberger $^{1}$ , Kianté Brantley $^{2}$ , and Wen Sun $^{1}$
+
+$^{1}$ Cornell University $^{2}$ Harvard University $^{3}$ Netflix $^{4}$ Databricks
+
+# Abstract
+
+Reinforcement learning (RL) post-training is crucial for LLM alignment and reasoning, but existing policy-based methods, such as PPO and DPO, can fall short of fixing shortcuts inherited from pre-training. In this work, we introduce $Q\sharp$ , a value-based algorithm for KL-regularized RL that guides the reference policy using the optimal regularized $Q$ function. We propose to learn the optimal $Q$ function using distributional RL on an aggregated online dataset. Unlike prior value-based baselines that guide the model using unregularized $Q$ -values, our method is theoretically principled and provably learns the optimal policy for the KL-regularized RL problem. Empirically, $Q\sharp$ outperforms prior baselines in math reasoning benchmarks while maintaining a smaller KL divergence to the reference policy. Theoretically, we establish a reduction from KL-regularized RL to no-regret online learning, providing the first bounds for deterministic MDPs under only realizability. Thanks to distributional RL, our bounds are also variance-dependent and converge faster when the reference policy has small variance. In sum, our results highlight $Q\sharp$ as an effective approach for post-training LLMs, offering both improved performance and theoretical guarantees. The code can be found at https://github.com/jinpz/q_sharp.
+
+# 1 Introduction
+
+Reinforcement learning (RL) post-training is a crucial step in training large language models (LLMs), aligning their generations with human preferences [1] and enhancing their reasoning capabilities [2, 3]. This stage typically follows supervised learning (next-token prediction), where the model is further trained to maximize expected cumulative reward while minimizing KL divergence from the reference policy $\pi^{\mathrm{ref}}$ obtained via supervised learning. The KL penalty plays a critical role by keeping the model close to $\pi^{\mathrm{ref}}$ , mitigating issues such as reward hacking and catastrophic forgetting.
+
+Most state-of-the-art LLMs [4, 5, 6] are post-trained using policy-based RL methods, which update model weights via stochastic gradient descent using algorithms like RLOO [7], PPO [8], and DPO [9]. However, these methods are computationally expensive, requiring full backpropagation through the LLM during training. In this paper, we propose a more efficient alternative: a value-based RL approach that guides the generations of the reference policy $\pi^{\mathrm{ref}}$ using a learned value function, without modifying $\pi^{\mathrm{ref}}$ model weights. This approach is particularly attractive because, for many tasks, evaluating generations is easier than producing them [4, 10], suggesting we can use much smaller models to learn value functions for guidance. For instance, in our experiments (Section 3.2), we show that a 1B parameter value model can effectively steer and improve a 70B parameter LLM.
+
+Existing value-based methods for LLM post-training, such as CD [11] and VAS [12], fall short of faithfully optimizing the KL-constrained RL objective. These approaches guide $\pi^{\mathrm{ref}}$ using $Q^{\pi^{\mathrm{ref}}}$ —the
+
+
+Figure 1: (Left) A sketch of our post-training algorithm $(Q\sharp)$ based on distributional RL. $Q\sharp$ alternates between learning $Z^{\star}$ - the reward-to-go distribution of $\pi^{\mathrm{ref}}$ - and using the induced policy to collect new data and further improve the distributional estimate. (Right) Evaluation result on the GSM8K dataset [19]. We see that $Q\sharp$ achieves both higher accuracy and lower KL compared to prior value-based post-training algorithms [11, 12].
+
+expected reward-to-go under $\pi^{\mathrm{ref}}$ without KL regularization—which does not guarantee convergence to the optimal policy $\pi^{\star, \eta}$ . In contrast, under the classical KL-regularized RL framework, we show that it is provably optimal to guide $\pi^{\mathrm{ref}}$ using $Q^{\star, \eta}$ , the expected reward-to-go under the optimal policy $\pi^{\star, \eta}$ , which accounts for KL regularization. This theoretical insight ensures convergence to $\pi^{\star, \eta}$ and addresses the shortcomings of previous methods. As we demonstrate empirically and theoretically, prior approaches can lead to suboptimal rewards or large KL divergence—issues that our algorithm, $Q\sharp$ , provably avoids.
+
+Our method exploits special properties of $Q^{\star, \eta}$ in deterministic MDPs and iteratively trains a model to estimate it through supervised distributional learning such as MLE. The iterative training procedure is motivated by the classic imitation learning algorithm DAgger [13], which addresses covariate shift and ensures that the learned $Q^{\star, \eta}$ estimator remains accurate when used to guide $\pi^{\mathrm{ref}}$ at inference time. This distributional learning approach not only enhances empirical performance but also enables second-order style regret bounds - instance-dependent bounds that adapt to the variance of the model's generation.
+
+$Q\sharp$ differs from traditional RL methods in two key aspects. First, we avoid complex temporal difference (TD) learning [14] or Q-learning techniques [15, 16], instead relying on direct supervised learning of a fixed critic. Second, while we adopt a distributional perspective, $Q\sharp$ is conceptually simpler than classical distributional RL algorithms like C51 [17]: we directly learn outcome distributions via supervised maximum likelihood, without invoking distributional Bellman updates. We elaborate on this and related works in Appendix A. In summary, our contributions are as follows:
+
+1. We propose $Q^{\sharp}$ , a principled algorithm for KL-regularized RL in deterministic MDPs, which includes LLMs, based on guiding $\pi^{\mathrm{ref}}$ with the soft $Q^{\star}$ learned with distributional RL (Section 2.2).
+2. We prove variance-dependent PAC bounds for convergence to the optimal policy, which only requires realizability in the function class (Section 4).
+3. We show that value-based post-training, which includes $Q\sharp$ , can fix biases and shortcuts in a star-graph environment [18], while popular policy-based methods cannot (Section 3.1).
+4. We provide extensive experiments on math reasoning tasks that validate the effectiveness of our method at maximizing reward while maintaining small KL deviations from the reference policy (Section 3.2).
+
+# 2 Method
+
+# 2.1 Preliminaries
+
+We study KL-regularized reinforcement learning (RL) in deterministic Markov Decision Processes (MDPs), where large language model (LLM) post-training is a motivating special case. An MDP is defined by a state space $\mathcal{X}$ , action space $\mathcal{Y}$ , horizon $H$ , transition kernels $(P_{1},\ldots ,P_{H})$ with $P_{h}:\mathcal{X}\times \mathcal{Y}\mapsto \Delta (\mathcal{X})$ , and known reward functions $(r_1,\dots,r_H)$ where $r_h:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}$ . A policy $\pi = (\pi_{1},\dots,\pi_{H})$ consists of decision rules $\pi_h:\mathcal{X}\rightarrow \Delta (\mathcal{Y})$ . For a given $\eta >0$ , the KL-regularized
+value of a policy $\pi$ is defined as
+
+$$
+V ^ {\pi , \eta} := \mathbb {E} _ {\pi} \left[ \sum_ {h = 1} ^ {H} r _ {h} \left(x _ {h}, y _ {h}\right) - \eta \mathrm {K L} \left(\pi_ {h} \left(x _ {h}\right) \mid \mid \pi_ {h} ^ {\text {r e f}} \left(x _ {h}\right)\right) \right]. \tag {1}
+$$
+
+A classical result shows that KL-regularized RL can be solved via soft Bellman equations [20]. Starting from $h = H$ and proceeding backward, we define:
+
+$$
+V _ {H + 1} ^ {\star , \eta} (x) = 0, \qquad \qquad Q _ {h} ^ {\star , \eta} (x, y) = r _ {h} (x, y) + \mathbb {E} _ {x ^ {\prime} \sim P _ {h} (x, y)} [ V _ {h + 1} ^ {\star , \eta} (x ^ {\prime}) ],
+$$
+
+$$
+\pi_ {h} ^ {\star , \eta} (y \mid x) \propto \pi_ {h} ^ {\operatorname {r e f}} (y \mid x) \exp \left(\eta^ {- 1} Q _ {h} ^ {\star , \eta} (x, y)\right), \quad V _ {h} ^ {\star , \eta} (x) = \eta \ln \mathbb {E} _ {y \sim \pi^ {\operatorname {r e f}} (x)} \exp \left(\eta^ {- 1} Q _ {h} ^ {\star , \eta} (x, y)\right). \tag {2}
+$$
+
+This expresses the optimal policy as a softmax over $Q_h^{\star, \eta}$ , weighted by $\pi_h^{\mathrm{ref}}$ . Moreover, $Q_h^{\star, \eta}(x, y)$ is the maximal expected KL-regularized return starting from $(x, y)$ at time $h$ . We now focus on deterministic MDPs, which covers LLM post-training and other structured generation tasks such as diffusion models [21].
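As a quick numerical sanity check of Equation (2), the sketch below (a hypothetical two-action, one-step example with purely illustrative numbers) verifies that the softmax policy indeed attains the KL-regularized value, i.e., $V^{\star,\eta} = \mathbb{E}_{\pi^{\star,\eta}}[Q^{\star,\eta}] - \eta\,\mathrm{KL}(\pi^{\star,\eta}\,\|\,\pi^{\mathrm{ref}})$:

```python
import math

# Illustrative one-step example of the soft Bellman equations (Eq. 2);
# two actions, numbers chosen only for illustration.
eta = 0.5
pi_ref = [0.6, 0.4]      # reference policy over two actions
Q = [1.0, 2.0]           # Q*(x, y) at the final step

# Optimal policy: pi_ref reweighted by exp(Q / eta), then normalized.
w = [p * math.exp(q / eta) for p, q in zip(pi_ref, Q)]
pi_star = [wi / sum(w) for wi in w]

# Soft value: V*(x) = eta * ln E_{y ~ pi_ref} exp(Q(x, y) / eta).
V = eta * math.log(sum(w))   # sum(w) = E_{pi_ref}[exp(Q / eta)]

# Consistency with the KL-regularized objective (Eq. 1):
# V* = E_{pi_star}[Q] - eta * KL(pi_star || pi_ref).
kl = sum(p * math.log(p / q) for p, q in zip(pi_star, pi_ref))
assert abs(sum(p * q for p, q in zip(pi_star, Q)) - eta * kl - V) < 1e-9
```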
+
+Assumption 2.1. The transitions $P_{h}$ are deterministic.
+
+Under this assumption, the value function simplifies significantly:
+
+$$
+\begin{aligned}
+\exp (\eta^{-1} V_{h}^{\star, \eta}(x)) &= \mathbb{E}_{y \sim \pi_{h}^{\mathrm{ref}}(x)} \left[ \exp (\eta^{-1} r_{h}(x, y) + \eta^{-1} V_{h + 1}^{\star, \eta}(x')) \right] \tag{3} \\
+&= \mathbb{E}_{\pi^{\mathrm{ref}}} \left[ \exp \left(\eta^{-1} \sum_{t \geq h} r_{t}(x_{t}, y_{t})\right) \;\middle|\; x_{h} = x \right], \tag{4}
+\end{aligned}
+$$
+
+where Equation (3) is due to the determinism of $P_{h}$ , while Equation (4) follows by recursively unrolling until the final step. Note that although $V_{h}^{\star, \eta}(x_{h})$ corresponds to the soft value of the optimal policy, its recursion is expressed via expectations over $\pi^{\mathrm{ref}}$ . We summarize this in the following known result [22, 23, 21]:
+
+Theorem 2.2. Under Assumption 2.1, we have $V_h^{\star,\eta}(x_h) = \eta \ln \mathbb{E}_{\pi^{\mathrm{ref}}}[\exp(\eta^{-1} \sum_{t \geq h} r_t(x_t, y_t)) | x_h]$ and $Q_h^{\star,\eta}(x_h, y_h) = \eta \ln \mathbb{E}_{\pi^{\mathrm{ref}}}[\exp(\eta^{-1} \sum_{t \geq h} r_t(x_t, y_t)) | x_h, y_h]$ .
+
+This shows $V^{\star, \eta}$ and $Q^{\star, \eta}$ are simple functionals of $Z^{\star}$ - the cumulative reward distribution of $\pi^{\mathrm{ref}}$ - where the functional is $f(P) = \eta \ln \mathbb{E}_P \exp(X / \eta)$ . In other words, if we learn the cumulative reward distribution of $\pi^{\mathrm{ref}}$ , then we can directly compute $V^{\star, \eta}$ and $Q^{\star, \eta}$ , without any dynamic programming.
+
+This offers several benefits. First, we do not require temporal difference (TD) learning (i.e., bootstrapping) which is notoriously unstable with deep networks [24] and requires completeness-type assumptions to guarantee convergence in theory [25]. Second, fitting the reward-to-go distribution $Z^{\star}$ or regressing $\mathbb{E}_{\pi^{\mathrm{ref}}}[\exp (\eta^{-1}\sum_{t\geq h}r_t)]$ is a standard supervised learning task with a fixed target, which is much more stable in practice and well-understood in theory. Notably, there is no bootstrapping or changing targets which is what renders deep RL fragile. Third, we can apply distributional RL methods, where we directly fit the distribution $Z^{\star}$ via supervised learning (e.g., maximum likelihood). Importantly, our approach does not involve distributional Bellman equation nor distributional TD update, which are known to be non-contractive under certain metrics [17]. Prior work has shown that fitting $Z^{\star}$ in this manner yields benefits in representation learning [17, 26], lower variance updates [27], and second-order bounds [28, 29].
+
+Applicability to LLMs. Our deterministic MDP framework directly models LLM post-training as a special case [4]. The initial state $x_{1}$ corresponds to the input prompt, each intermediate state $x_{h}$ is the current generation prefix, and the action $y_{h}$ is the next token. The policy thus reflects the LLM's autoregressive decoding process. The transition function is deterministic: $P_{h}(x_{h},y_{h}) = x_{h}y_{h}$ , which simply appends the new token to the prefix. In many post-training settings, the reward is sparse, meaning only $r_{H}$ is nonzero. Under this assumption, Theorem 2.2 simplifies to $Q_{h}^{\star ,\eta}(x_{h},y_{h}) = \eta \ln \mathbb{E}_{\pi^{\mathrm{ref}}}[\exp (\eta^{-1}r(x_{H},y_{H}))\mid x_{h},y_{h}]$ . For example, the reward may indicate solution correctness in math tasks or reflect user preference in dialogue, as determined by a learned reward model.
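Under the sparse-reward simplification above, $Q_h^{\star,\eta}$ is a plug-in functional of Monte-Carlo reward samples drawn under $\pi^{\mathrm{ref}}$; a minimal sketch (the function name and the sample values are hypothetical):

```python
import math

def soft_q_estimate(rewards, eta):
    """Plug-in estimate of eta * ln E[exp(R / eta)] from samples R ~ Z*(x, y)."""
    return eta * math.log(sum(math.exp(r / eta) for r in rewards) / len(rewards))

samples = [0.0, 0.0, 1.0, 1.0]  # e.g. binary correctness rewards under pi_ref

# Large eta recovers the plain mean; small eta approaches the best sample.
assert abs(soft_q_estimate(samples, eta=100.0) - 0.5) < 0.01
assert abs(soft_q_estimate(samples, eta=0.01) - 1.0) < 0.01
```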
+
+Inference with cumulative reward distribution. Let $Z^{\star}$ denote the conditional distribution over cumulative rewards under rollouts from $\pi^{\mathrm{ref}}$ , that is, $Z_h^\star(x,y) \stackrel{D}{=} \sum_{t \geq h} r_t(x_t,y_t) \mid x_h = x, y_h = y$ , where the trajectory $(x_h,y_h,\ldots,x_H,y_H)$ is sampled under $\pi^{\mathrm{ref}}$ , and $\stackrel{D}{=}$ denotes equality in distribution. Combining Theorem 2.2 and Equation (2), the optimal policy can be rewritten in terms
+of $Z^{\star}$ as $\pi_h^{\star, \eta}(y \mid x) \propto \pi_h^{\mathrm{ref}}(y \mid x) \mathbb{E}_{z \sim Z_h^\star(x,y)}[\exp(z/\eta)]$. This motivates defining a general family of policies induced by any distribution $Z: \mathcal{X} \times \mathcal{Y} \to \Delta(\mathbb{R})$ via
+
+$$
+\pi_ {h} ^ {Z, \eta} (y \mid x) \propto \pi_ {h} ^ {\operatorname {r e f}} (y \mid x) \mathbb {E} _ {z \sim Z _ {h} (x, y)} [ \exp (z / \eta) ]. \tag {5}
+$$
+
+Since $\pi^{\star,\eta} = \pi^{Z^{\star},\eta}$, we can approximate the optimal policy by estimating $Z^{\star}$ with $\widehat{Z} \approx Z^{\star}$ using distributional learning techniques such as maximum likelihood estimation (MLE), and then instantiating $\pi^{\widehat{Z},\eta}$. This forms the core of our proposed $Q\sharp$ algorithm.
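Equation (5) is a per-action reweighting of $\pi^{\mathrm{ref}}$; a minimal sketch (the two-action probabilities and weights are hypothetical):

```python
def induced_policy(pi_ref_probs, exp_z_means):
    """pi^{Z,eta}(y|x) ∝ pi_ref(y|x) * E_{z ~ Z(x,y)}[exp(z / eta)]  (Eq. 5).

    exp_z_means[y] holds the estimated E[exp(z / eta)] for action y,
    already computed at the desired temperature eta."""
    w = [p * m for p, m in zip(pi_ref_probs, exp_z_means)]
    total = sum(w)
    return [wi / total for wi in w]

# Action 1 has the higher estimated exp-reward-to-go, so it gains mass.
p = induced_policy([0.5, 0.5], [1.0, 3.0])
assert abs(sum(p) - 1.0) < 1e-12 and p[1] > p[0]
```

Because the exp-moments $\mathbb{E}[\exp(z/\eta)]$ can be recomputed from the same learned $\widehat{Z}$ at any temperature, one trained network supports any choice of $\eta$ at inference time.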
+
+# 2.2 Algorithm $Q\sharp$
+
+We propose Q-Sharp $(Q\sharp)$ , a distributional value-based algorithm for KL-regularized RL in deterministic MDPs. $Q\sharp$ iteratively collects data from progressively improved policies to approximate the target distribution $Z^{\star}$ (Algorithm 1). In this section, we describe $Q\sharp$ in practical terms for deep neural networks and LLMs; in Section 4, we formalize it using online learning oracles and prove convergence under a mild realizability assumption.
+
+# Algorithm 1 $Q\sharp$
+
+1: Input: reference policy $\pi^{\mathrm{ref}}$
+2: Initialize parameters $\theta^1$ of conditional distribution $Z^{\theta}:\mathcal{X}\times \mathcal{Y}\to \Delta (\mathbb{R})$ and dataset $\mathcal{D}_h = \emptyset$ for all $h$ .
+3: for $k = 1,2,\ldots$ until convergence do
+4: Let $\pi^k\gets \pi^{Z_{\theta^k},\eta}$ be policy induced by $Z_{\theta^k}$ (using Equation (5)).
+5: for $i = 1,2,\dots ,N$ do
+6: Sample a switching time $h \sim [H]$ .
+7: Roll-in with $\pi^k$ for $h - 1$ steps.
+8: Resume the trajectory with $\pi^{\mathrm{ref}}$ from $x_{h}$.
+9: Let $R_{t}$ denote the cumulative rewards after time $t$.
+10: Add $(x_{t},y_{t},R_{t})$ to $\mathcal{D}_t,\forall t\geq h$.
+11: end for
+12: Update $\theta^k$ by minimizing the distributional loss on the aggregated data:
+
+$$
+\theta^ {k + 1} \leftarrow \arg \min _ {\theta} \sum_ {h} \mathbb {E} _ {\mathcal {D} _ {h}} \left[ \mathcal {L} \left(R _ {h}, Z ^ {\theta} \left(x _ {h}, y _ {h}\right)\right) \right].
+$$
+
+13: end for
+14: Output: Final $\theta^k$ .
+
+Let $Z_h^\theta : \mathcal{X} \times \mathcal{Y} \to \Delta(\mathbb{R})$ denote a parametric conditional distribution with parameters $\theta$ . Given a sample $R \in \mathbb{R}$ (e.g., drawn from $Z^\star$ ) and a model prediction $\hat{Z}$ , let $L(R, \hat{Z})$ be a distributional loss for training the model. We denote by $\theta^\star$ the parameter that minimizes the distance between $Z^\star$ and $Z^\theta$ . For example, if $Z_h^\star(x, y)$ is $\operatorname{Ber}(p_h^\star(x, y))$ , we can parameterize $Z_h^\theta(x, y)$ by a neural network that outputs a scalar estimate $\hat{p}$ of $p_h^\star(x, y)$ . The natural loss in this case is binary cross-entropy (BCE): $L_{\mathrm{bce}}(r, \hat{p}) = -r \ln \hat{p} - (1 - r) \ln (1 - \hat{p})$ . This binary setup is appropriate for tasks such as math or multiple-choice questions where the reward is binary. If the reward distribution has no known parametric form, one can use a non-parametric model (e.g., a histogram that discretizes the reward space) trained via maximum likelihood (MLE) [17]: $L_{\mathrm{mle}}(r, \hat{z}) = -\ln \hat{z}[\mathrm{idx}(r)]$ , where $\mathrm{idx}(r)$ returns the index of the bin containing $r$ , and $\hat{z}[i]$ denotes the probability estimate for bin $i$ . In general, $Q^\sharp$ can incorporate any distributional RL loss function [30]. Once $Z^\theta$ closely approximates
+$Z^{\star}$, we instantiate a near-optimal policy $\pi^{\theta, \eta}$ via Equation (5). In Section 4, we prove that this procedure converges to the optimal policy under a mild realizability assumption.
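The two distributional losses described above, in minimal form (the bin layout in the MLE example is hypothetical):

```python
import math

def l_bce(r, p_hat):
    """Binary cross-entropy for Ber(p*) reward-to-go; r in {0, 1}."""
    return -r * math.log(p_hat) - (1 - r) * math.log(1 - p_hat)

def l_mle(r, z_hat, left_edges):
    """Histogram MLE: negative log-probability of the bin containing r.

    left_edges[i] is the left edge of bin i; z_hat[i] its predicted mass."""
    idx = max(i for i, e in enumerate(left_edges) if e <= r)
    return -math.log(z_hat[idx])

assert abs(l_bce(1, 0.8) + math.log(0.8)) < 1e-12
# r = 0.55 falls in the bin [0.5, 1.0), which carries predicted mass 0.5.
assert abs(l_mle(0.55, [0.2, 0.5, 0.3], [0.0, 0.5, 1.0]) + math.log(0.5)) < 1e-12
```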
+
+Then, the key idea of $Q\sharp$ is an iterative data-collection and update process. At iteration $k$, with current parameters $\theta^k$, we deploy the induced policy $\pi^k \coloneqq \pi^{Z^{\theta^k},\eta}$ to gather new data. Specifically, we roll in with $\pi^k$ for $h - 1$ steps to reach a state $x_{h}$, then switch to $\pi^{\mathrm{ref}}$ to complete the trajectory. The cumulative reward from step $h$ to the end, denoted $R_{h,k}$, is a sample from $Z_h^\star(x_h, y_h)$. We add these samples to the dataset and update $\theta$ via gradient descent on the distributional loss. This process repeats until convergence.
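Steps 6–10 of Algorithm 1 (the roll-in/roll-out split) can be sketched as follows, with stand-in policies on a toy string-building MDP; all names and the toy reward are hypothetical:

```python
import random

def collect_samples(pi_k, pi_ref, reward, H, x1=""):
    """One data-collection pass: roll in with pi_k for h-1 steps, then roll
    out with pi_ref; return (x_t, y_t, R_t) tuples for all t >= h."""
    h = random.randint(1, H)              # step 6: sample a switching time
    x = x1
    for _ in range(1, h):                 # step 7: roll-in with current policy
        x += pi_k(x)
    traj = []
    for t in range(h, H + 1):             # step 8: resume with pi_ref
        y = pi_ref(x)
        traj.append((x, y))
        x += y
    R = reward(x)                         # sparse reward: only r_H is nonzero,
    return [(xt, yt, R) for xt, yt in traj]  # so every R_t equals R

random.seed(0)
data = collect_samples(pi_k=lambda x: "a", pi_ref=lambda x: "b",
                       reward=lambda x: float(x.endswith("b")), H=4)
assert data and all(r == 1.0 for _, _, r in data)  # rollout always ends in "b"
```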
+
+Our iterative approach is similar in spirit to DAgger [13], AggreVaTe [31, 32], and RLGF [33], which likewise mitigate distribution shift to ensure the learned estimator remains accurate at test time. In contrast, prior value-based methods such as CD [11] and entropy-regularized PRM [34] train their estimators only on data from $\pi^{\mathrm{ref}}$ . While such an estimator may perform well on $\pi^{\mathrm{ref}}$ 's distribution, it offers no guarantee of accuracy when used to steer $\pi^{\mathrm{ref}}$ 's generation at inference time.
+
+Comparison with CD and VAS. The most closely related value-based baselines are CD [11] and VAS [12], yet they exhibit three critical limitations. (i) Incorrect value target. Both methods re-weight $\pi^{\mathrm{ref}}$ using $Q^{\pi^{\mathrm{ref}},0}$ —the unregularized $Q$ -function of $\pi^{\mathrm{ref}}$ —thereby ignoring the KL term. As shown in Section 4, this choice can yield policies that are either sub-optimal in reward or far from $\pi^{\mathrm{ref}}$ . $Q\sharp$
+
+
+Figure 2: (a) The star-graph with degree $d = 5$ and path length $\ell = 5$ . Next-token prediction, the standard pre-training loss, learns a sub-optimal shortcut that selects a random first node and follows it until the end. (b) Accuracies on held-out test graphs for various models. Policy-based post-training methods such as REINFORCE and RPO (a variant of DPO) still exhibit the shortcut and have test accuracy $1/d$ , while our value-based $Q\sharp$ fixes the shortcut and achieves near-perfect accuracy.
+
+instead employs the principled target $Q^{\star,\eta}$ and is guaranteed to converge to $\pi^{\star,\eta}$ under realizability. (ii) Offline training. CD and VAS fit their value functions on a fixed dataset, whereas $Q\sharp$ alternates data collection and updates, improving robustness to distribution shift [13, 31]. (iii) Squared-loss regression. Both baselines learn $Q^{\pi^{\mathrm{ref}},0}$ with an $\ell_2$ loss, implicitly assuming homoskedastic Gaussian rewards. $Q\sharp$ leverages distributional RL losses, which are theoretically more sample-efficient [35, 29] and empirically superior [17, 26].
+
+Relation to actor-critic methods. Although $Q\sharp$ learns a value function, its target $V^{\star ,\eta}$ (or $Q^{\star ,\eta}$ ) is fixed throughout training. Standard actor-critic algorithms (e.g., PPO) continuously update $V^{\pi}$ or $Q^{\pi}$ as $\pi$ evolves, and rely on bootstrap-based TD updates. In contrast, $Q\sharp$ trains the value network via distributional supervised learning (e.g., MLE), thereby avoiding the instability of changing targets.
+
+Relation to DPO [9]. While the form of Equation 5 resembles DPO's policy expression, their derivations and scopes are fundamentally different. DPO begins from the same KL-regularized RL objective but, without exploiting the deterministic transition structure, operates at the sequence level, corresponding to the one-step case ( $H = 1$ ). Its policy is given by $\pi_r(y|x) = \frac{1}{Z(x)}\pi^{\mathrm{ref}}(y|x)\exp \left(\frac{1}{\beta} r(x,y)\right)$ , where $y$ denotes a full completion and $Z(x)$ is the partition function over all possible sequences. When $H = 1$ , our $Q^{\star}$ reduces to the reward $r$ , and the DPO expression naturally follows as a special case. However, the DPO partition function $Z(x)$ is intractable to normalize, and practical implementations must rely on pairwise preference data (e.g., Bradley-Terry modeling) to bypass it.
+
+Inference with multiple $\eta$ . Because the learned distribution $\widehat{Z}^{\theta}$ is independent of $\eta$ , a single trained network can support any choice of $\eta$ at inference time simply by plugging it into Equation (5).
+
+# 3 Experiments
+
+# 3.1 Star-Graph
+
+We begin with the star-graph task from [18], illustrated in Figure 2(a). A star-graph $G(d, \ell)$ has $d$ paths of length $\ell$ from a central node. Given a start/goal node and the graph edges, the LM must generate a valid path. Though seemingly simple, [18] showed that next-token pre-training often learns a faulty shortcut: the model picks the first node at random (correct with probability $1 / d$ ) and follows the path, yielding a test accuracy of only $1 / d$ . This highlights the limitations of next-token prediction on planning tasks. [36] also showed that the task embeds the "sparse parity" problem — determining whether the sum of a binary string is even or odd — which is known to be difficult for gradient-based optimizers and is widely studied in learning theory and optimization [37, 38, 39, 40].
+
+Can this shortcut be fixed during post-training? We evaluate REINFORCE [41], DPO [9], RPO [42], and $Q\sharp$ , reporting test accuracies in Figure 2 (b). $Q\sharp$ consistently corrects the shortcut, achieving near-perfect accuracy, even for long paths ( $G(2,20)$ ) or large degrees ( $G(5,5)$ ). CD [11] achieves similar performance as $Q\sharp$ . In contrast, policy-based methods like REINFORCE and RPO fail to
+
+Table 1: Comparison of $Q\sharp$ with ${\pi }^{\text{ref }}$ and CD baseline on GSM8K (Left) and MATH (Right). For both Llama 3 and Llama 3.1 8B, ${Q}^{\sharp }$ consistently improves both pass@1 and majority voting accuracy upon baselines while incurring minimal KL deviation.
+
+**GSM8K**
+
+| Methods | πref (Llama 3 8B) | CD | Q# | πref (Llama 3.1 8B) | CD | Q# |
+| --- | --- | --- | --- | --- | --- | --- |
+| pass@1 ↑ | 69.1 | 77.8 | 78.4 | 82.9 | 84.5 | 85.1 |
+| maj1@8 ↑ | 85.8 | 87.2 | 88.1 | 90.5 | 90.9 | 91.4 |
+| KL-Divergence ↓ | - | 6.39 | 2.65 | - | 7.43 | 3.67 |
+
+**MATH**
+
+| Methods | πref (Llama 3 8B) | CD | Q# | πref (Llama 3.1 8B) | CD | Q# |
+| --- | --- | --- | --- | --- | --- | --- |
+| pass@1 ↑ | 25.4 | 24.9 | 27.1 | 43.9 | 45.3 | 46.7 |
+| maj1@8 ↑ | 34.3 | 34.3 | 37.9 | 57.0 | 59.0 | 60.1 |
+| KL-Divergence ↓ | - | 15.27 | 7.14 | - | 26.8 | 8.69 |
+
+fix the shortcut, plateauing at $1 / d$ accuracy. DPO performs worst, often collapsing the policy to zero accuracy by suppressing both chosen and rejected paths—a failure mode also noted by RPO. These results suggest that once shortcuts are learned, policy-based methods struggle to unlearn them, reinforcing the effectiveness of value-based approaches like $Q\sharp$ and CD for LLM post-training. Please see Appendix C for a more detailed discussion on why REINFORCE and RPO cannot fix shortcuts and implementation details.
+
+# 3.2 Math Reasoning
+
+Datasets. We evaluate on two mathematical reasoning benchmarks: GSM8K [19], a dataset of grade school arithmetic word problems, and MATH [43], which features more challenging high school competition problems. We split each training set $90\% - 10\%$ for training and validation. Test performance is reported on the full GSM8K test set and a 500-sample subset of MATH (MATH-500), following prior work [44, 45]. In Appendix G, we also evaluate $Q\sharp$ on the AIME-24 dataset.
+
+Models. We experiment with Llama 3 [5] and Qwen 2.5 [46] model families, both of which are competitive on math reasoning tasks and span a wide range of parameter scales. Due to space constraints, we report results for Llama 3 in the main text and defer Qwen 2.5 results to Appendix G. Unless otherwise noted, the $Q^{\star,\eta}$ function in $Q\sharp$ is parameterized and initialized with a Llama 3.2 1B model, and we use $\eta = 0.1$ , which yields consistent and strong performance. We run $Q\sharp$ for two iterations, after which performance converges. Additional details on model configurations and $Q\sharp$ training are provided in Appendices D and E.
+
+Evaluation metrics. We report single sample accuracy (pass@1) and majority voting accuracy (maj1@k). pass@1 evaluates one sampled generation per problem against the ground truth, while maj1@k checks if the most frequent answer among $k$ samples is correct. We use $k = 8$ , temperature $T = 0.8$ , and nucleus sampling $p = 0.9$ . The evaluation prompt template is provided in Appendix F.
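For reference, maj1@k as defined above can be computed as follows (a minimal sketch; the answer strings are hypothetical):

```python
from collections import Counter

def maj1_at_k(answers, ground_truth):
    """Majority-voting accuracy: 1.0 iff the most frequent of the k sampled
    answers matches the ground truth (ties broken by first occurrence)."""
    majority, _ = Counter(answers).most_common(1)[0]
    return float(majority == ground_truth)

assert maj1_at_k(["42", "41", "42", "40"], "42") == 1.0  # majority is correct
assert maj1_at_k(["41", "41", "42"], "42") == 0.0        # majority is wrong
```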
+
+Main results. Table 1 presents $Q^{\sharp}$ performance on GSM8K (Left) and MATH (Right) with $\pi^{\mathrm{ref}}$ as Llama 3 or 3.1 8B. Although both have 8B parameters, Llama 3.1 performs significantly better. Across all settings, $Q^{\sharp}$ consistently improves over $\pi^{\mathrm{ref}}$ , boosting pass@1 by up to $9\%$ on GSM8K with just 1B additional parameters. We also compare against the CD baseline [11], which incorrectly uses $Q^{\pi^{\mathrm{ref}},0}$ to guide $\pi^{\mathrm{ref}}$ . $Q^{\sharp}$ outperforms CD on both accuracy metrics while maintaining lower KL divergence. Overall, $Q^{\sharp}$ Pareto-dominates CD in the KL-regularized RL setting by achieving higher reward and lower KL. We note that CD [11] and VAS [12] are concurrent work and differ only in minor aspects such as sampling strategy. Therefore, we use CD as a canonical baseline for empirical comparison. Since $Q^{\sharp}$ is complementary to policy-based methods, we further evaluate its effectiveness when guiding a PPO-trained model, as shown in Appendix I.
+
+Larger $\pi^{\mathrm{ref}}$ and $Q\sharp$ sizes. We evaluate how performance scales with larger $\pi^{\mathrm{ref}}$ and $Q\sharp$ models on MATH (Table 2). Using 70B Llama 3 and 3.1 as $\pi^{\mathrm{ref}}$ significantly boosts baseline pass@1 (45.6% and 60.6%, respectively). Remarkably, a 1B $Q\sharp$ still improves these large models, e.g., by 2.5% pass@1 and 3.5% maj1@8 for Llama 3.1. Increasing $Q\sharp$ to 3B yields further gains, demonstrating scalability. Compared to Table 1 (right), we note that with 9B total parameters (8B $\pi^{\mathrm{ref}}$ + 1B $Q\sharp$), the maj1@8 accuracy already matches the pass@1 of the 70B $\pi^{\mathrm{ref}}$ in Table 2, suggesting a promising low-resource alternative. For Llama 3, pass@1 improves while maj1@8 slightly drops, likely due to increased generation diversity benefiting harder problems but reducing consistency on easier ones.
+
+$Q^{\sharp}$ as a reward model. Beyond guiding $\pi^{\mathrm{ref}}$'s generation, $Q^{\sharp}$'s token-level $Q$ function can also score a complete generation against its alternatives. We compute $Q(\text{generation}, \mathrm{EOS})$ by applying $Q^{\sharp}$ as a reward model ($Q^{\sharp}$-RM) on GSM8K and MATH, using both $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ generations. Table 3 reports two settings: $Q^{\sharp}$-RM Best of 8 (selects the top-scoring sample) and $Q^{\sharp}$-RM maj1@8 (aggregates
+
+Table 2: Performance of $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ on MATH with larger $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ model sizes. A 1B $Q^{\sharp}$ is capable of guiding a 70B $\pi^{\mathrm{ref}}$ model. Increasing the $Q^{\sharp}$ model size to 3B also leads to noticeably better performance for Llama 3.1 70B.
+
+| πref | Llama 3 70B | | | Llama 3.1 70B | | |
+| --- | --- | --- | --- | --- | --- | --- |
+| Q# Model | None | Llama 3.2 1B | Llama 3.2 3B | None | Llama 3.2 1B | Llama 3.2 3B |
+| pass@1 ↑ | 45.6 | 46.4 | 46.7 | 60.6 | 63.1 | 64.1 |
+| maj1@8 ↑ | 55.6 | 55.5 | 55.3 | 69.0 | 72.5 | 72.7 |
+| KL-Divergence ↓ | - | 3.12 | 5.15 | - | 4.98 | 4.99 |
+
+
+Figure 3: Performance tradeoff between CD and $Q \sharp$ on the GSM8K validation set. (Left) pass@1 vs. KL divergence: $Q \sharp$ outperforms CD with higher accuracy and lower KL. (Right) KL divergence across $\eta$ : CD's KL increases rapidly and deviates from $\pi^{\mathrm{ref}}$ , while $Q \sharp$ remains stable.
+
+
+
+majority voting with scores). $Q^{\sharp}$-RM maj1@8 consistently improves over vanilla maj1@8, and Best of 8 yields more than $10\%$ gains over pass@1 for $\pi^{\mathrm{ref}}$. The reward model can be applied to both $\pi^{\mathrm{ref}}$'s and $Q^{\sharp}$'s own generations to further improve performance, which suggests that the (same) reward model generalizes to evaluating diverse generations.
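+The two selection rules can be sketched as follows (illustrative only: the helper names are ours, and pooling scores per distinct answer is one plausible way to aggregate majority voting with scores):
+
+```python
+from collections import defaultdict
+
+def best_of_k(answers, scores):
+    """Best of k: return the answer of the single highest-scoring sample."""
+    return max(zip(scores, answers))[1]
+
+def rm_weighted_majority(answers, scores):
+    """RM-weighted maj1@k: pool scores per distinct answer, return the top one."""
+    tally = defaultdict(float)
+    for a, s in zip(answers, scores):
+        tally[a] += s
+    return max(tally, key=tally.get)
+
+answers = ["a", "b", "a"]
+scores = [0.40, 0.50, 0.45]
+print(best_of_k(answers, scores))             # "b": highest single score
+print(rm_weighted_majority(answers, scores))  # "a": pooled 0.85 beats 0.50
+```
+
+The contrast in the toy example shows why the two settings can disagree: best-of-k trusts a single high-scoring sample, while the weighted majority rewards answers that score well consistently.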
+
+Effect of $\eta$ . Figure 3 shows the performance-KL tradeoff between CD and $Q\sharp$ on the GSM8K validation set. (Left) Increasing KL can improve pass@1 for both methods, but $Q\sharp$ consistently achieves a better Pareto frontier. (Right) CD is highly sensitive to $\eta$ : as $\eta^{-1}$ increases, its KL grows rapidly and performance degrades below that of $\pi^{\mathrm{ref}}$ . In contrast, $Q\sharp$ remains stable and requires less tuning of $\eta$ .
+
+Ablations. We ablate several design choices in Table 4 on the GSM8K and MATH validation sets using pass@1 accuracy. The "Prefix" column tests training on all $t \geq h$ prefixes after switching to $\pi^{\mathrm{ref}}$ (Algorithm 1, Line 10), as opposed to only $t = h$ . Though this breaks IID assumptions, the increased training data improves $Q\sharp$ performance by up to $4\%$ . We compare two parameterizations of $Q^{\star ,\eta}$ : Q-type, which computes $Q^{\star ,\eta}(x,y)$ for all $y$ , and V-type, which predicts $Q^{\star ,\eta}(x,\hat{y})$ for a specific $\hat{y}$ . V-type outperforms Q-type, likely due to its lower parameter count and per-token computation. Details are in Appendix D. We also compare distributional $Q\sharp$ with MSE-based regression, which underperforms as expected under Bernoulli rewards. Finally, more iterations of Algorithm 1 yield marginal gains, with performance saturating after two iterations, which we adopt by default.
+
+Qualitative comparison. Figure 6 shows side-by-side generations from $\pi^{\mathrm{ref}}$ and $Q\sharp$ on math reasoning tasks. While both models often begin with similar prefixes—consistent with $Q\sharp$ ’s low KL deviation— $Q\sharp$ typically corrects $\pi^{\mathrm{ref}}$ ’s mistakes and produces more coherent reasoning. This behavior reflects $Q\sharp$ ’s ability to assign higher value to correct tokens, thereby steering generation more effectively at critical decision points. Additional examples are provided in Appendix K.
+
+Beyond math reasoning. To further validate the generality of $Q\sharp$ beyond mathematical reasoning tasks, we evaluate its performance on QuALITY [47], a challenging multiple-choice reading comprehension benchmark with long-form passages drawn from Project Gutenberg. As shown in
+
+Table 3: Performance of $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ with $Q^{\sharp}$ as a reward model. The reward model can determine the best generation among all generations for a problem and consistently improves maj1@8 for both $\pi^{\mathrm{ref}}$'s and $Q^{\sharp}$'s own generations.
+
+| Setting | Llama 3 8B GSM8K | | Llama 3.1 8B MATH | |
+| --- | --- | --- | --- | --- |
+| Methods | πref | Q# | πref | Q# |
+| pass@1 | 69.1 | 78.4 | 43.9 | 46.7 |
+| maj1@8 | 85.8 | 88.1 | 57.0 | 60.1 |
+| Q#-RM Best of 8 | 85.9 | 86.0 | 54.0 | 54.0 |
+| Q#-RM maj1@8 | 88.5 | 89.2 | 59.2 | 60.6 |
+
+Table 4: Ablations of $Q \sharp$ (last row) on pass@1 with various configurations on the validation set of GSM8K and MATH. The improvement suggests that our design choices all contribute positively to the final performance.
+
+| Prefix | Type | Opt. | #Iter. | Llama 3 8B GSM8K | Llama 3.1 8B MATH |
+| --- | --- | --- | --- | --- | --- |
+| Single | V | Dist. | 1 | 80.5 | 64.5 |
+| All | Q | Dist. | 1 | 81.4 | 66.4 |
+| All | V | MSE | 1 | 81.4 | 65.4 |
+| All | V | Dist. | 1 | 82.3 | 67.4 |
+| All | V | Dist. | 2 | 83.5 | 68.5 |
+
+Appendix H, Table 6, we compare $Q^{\sharp}$ with $\pi^{\mathrm{ref}}$ and the CD baseline using Qwen 2.5 and Llama 3.1. Specifically, Qwen 2.5 1B guides Qwen 2.5 7B, and Llama 3.2 1B guides Llama 3.1 8B. Across both architectures, $Q^{\sharp}$ consistently improves upon $\pi^{\mathrm{ref}}$ on all evaluation metrics, demonstrating its robustness beyond the mathematical domain.
+
+# 4 Theory
+
+# 4.1 CD & VAS are sub-optimal for KL-regularized RL
+
+First, CD and VAS both propose to reweight $\pi^{\mathrm{ref}}(\cdot \mid x)$ with the unregularized $Q$ -function of $\pi^{\mathrm{ref}}$ :
+
+$$
+\pi^{\mathrm{CD},\eta}(y \mid x) \propto \pi^{\mathrm{ref}}(y \mid x) \exp\left(Q^{\pi^{\mathrm{ref}}}(x, y) / \eta\right), \tag{6}
+$$
+
+where recall that $Q_{h}^{\pi^{\mathrm{ref}}}(x_{h},y_{h}) = \mathbb{E}_{\pi^{\mathrm{ref}}}[\sum_{t\geq h}r_{t}\mid x_{h},y_{h}]$ . Comparing with Equation (2), we can already see that $\pi^{\mathrm{CD},\eta}$ does not match the optimal policy $\pi^{\star ,\eta}$ , as $Q^{\pi^{\mathrm{ref}}}$ can be arbitrarily far from $Q^{\star ,\eta}$ . In particular, $\pi^{\mathrm{CD}}$ may fail to optimize the KL-regularized RL objective and exhibit two failure cases, which we demonstrate with a simple MDP in Figure 4. First, we show that CD fails to maximize expected reward in this MDP, even as the KL-regularizer $\eta$ decays to zero.
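+Both reweighting schemes share the same form and differ only in which $Q$ is plugged in; a minimal, numerically stable sketch (the helper name is ours):
+
+```python
+import math
+
+def reweighted_policy(pi_ref_probs, q_values, eta):
+    """pi(y|x) proportional to pi_ref(y|x) * exp(Q(x,y)/eta).
+    CD/VAS plug in Q^{pi_ref}; the optimal policy uses Q^{*,eta} instead."""
+    logits = [math.log(p) + q / eta for p, q in zip(pi_ref_probs, q_values)]
+    m = max(logits)                           # subtract max before exponentiating
+    weights = [math.exp(l - m) for l in logits]
+    z = sum(weights)
+    return [w / z for w in weights]
+
+probs = reweighted_policy([0.7, 0.3], [0.1, 0.5], eta=0.1)
+print(probs)  # the second action dominates despite its lower prior mass
+```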
+
+Theorem 4.1. Under Figure 4, $CD$ learns to always select the left sub-tree as $\eta \to 0$ , which gives a sub-optimal reward of 0.1, while $\pi^{\star,\eta}$ learns to always select the right sub-tree and chooses the path that gives reward 1.
+
+Proof. First, for CD, we have $Q^{\pi^{\mathrm{ref}}}(x_1, a_L) = 0.1$ and $Q^{\pi^{\mathrm{ref}}}(x_1, a_R) = 0.05$ . Hence, CD's probability of selecting the left sub-tree is $\frac{p_L \exp(0.1 / \eta)}{p_L \exp(0.1 / \eta) + p_R \exp(0.05 / \eta)}$ , which converges to 1 as $\eta \to 0$ . Next, for $Q^{\sharp}$ , we have $Q^{\star, \eta}(x_1, a_L) = 0.1$ and $Q^{\star, \eta}(x_1, a_R) = \eta \ln (0.05 \exp(1 / \eta) + 0.95)$ . Hence, $Q^{\sharp}$ 's probability of selecting the left sub-tree is $\frac{p_L \exp(0.1 / \eta)}{p_L \exp(0.1 / \eta) + p_R (0.05 \exp(1 / \eta) + 0.95)}$ , which converges to 0 as $\eta \to 0$ . Thus, CD learns the sub-optimal path.
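+The two limits in the proof are easy to verify numerically (a sketch of the tree MDP of Figure 4, with an illustrative $p_L = 0.3$):
+
+```python
+import math
+
+def cd_left_prob(p_left, eta):
+    """CD reweights with Q^{pi_ref}: Q(left) = 0.1, Q(right) = 0.05."""
+    w_left = p_left * math.exp(0.1 / eta)
+    w_right = (1 - p_left) * math.exp(0.05 / eta)
+    return w_left / (w_left + w_right)
+
+def qsharp_left_prob(p_left, eta):
+    """Q# reweights with exp(Q^{*,eta}/eta); on the right sub-tree this weight
+    is 0.05 * exp(1/eta) + 0.95."""
+    w_left = p_left * math.exp(0.1 / eta)
+    w_right = (1 - p_left) * (0.05 * math.exp(1.0 / eta) + 0.95)
+    return w_left / (w_left + w_right)
+
+for eta in (0.1, 0.02, 0.005):
+    print(eta, cd_left_prob(0.3, eta), qsharp_left_prob(0.3, eta))
+# As eta -> 0: CD's left-sub-tree probability tends to 1 (sub-optimal reward 0.1),
+# while Q#'s tends to 0 (it takes the right sub-tree toward reward 1).
+```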
+
+
+Figure 4: A tree MDP where edges are labeled with $\pi^{\mathrm{ref}}$ 's action probability. $\pi^{\mathrm{ref}}$ goes to the left sub-tree w.p. $p_L$ and the right sub-tree w.p. $p_R$ , where $p_L, p_R > 0$ . The left sub-tree gives $r = 0.1$ w.p. 1. In the right sub-tree, $\pi^{\mathrm{ref}}$ chooses reward 1 w.p. 0.05 and chooses reward 0 w.p. 0.95.
+
+Next, we show that CD also incurs a higher KL than $Q\sharp$ .
+
+Theorem 4.2. Under Figure 4, CD's KL converges to $\ln(1 / p_L)$ while $Q^{\sharp}$'s KL converges to $\ln(1 / p_R)$ as $\eta \rightarrow 0$. Thus, if $p_L \ll p_R$, CD converges to a higher KL than $Q^{\sharp}$.
+
+Proof. As shown in Theorem 4.1, CD learns to select the left sub-tree while $Q\sharp$ learns to select the right sub-tree as $\eta \rightarrow 0$ . Then, the KLs simply follow by definition.
+
+In sum, we proved that in the MDP of Figure 4, CD both incurs a higher KL and converges to a lower, sub-optimal reward compared to $Q^{\sharp}$. Thus, $Q^{\sharp}$ generally Pareto-dominates CD in the reward-KL trade-off, which matches our empirical findings.
+
+# 4.2 Performance Guarantee for $Q^{\sharp}$
+
+We prove that the policy learned by $Q^{\sharp}$ is guaranteed to converge to the optimal policy given enough samples. This result holds in rich-observation MDPs, where the state space can be exponentially large or even infinite, so long as a mild realizability assumption holds.
+
+To set up, let $\mathcal{F}$ be a distributional function class for modeling $Z^{\star}$, the reward-to-go distribution under $\pi^{\mathrm{ref}}$. Each element of $\mathcal{F}$ has the form $f = (f_{1},\dots,f_{H})$ with $f_{h}:\mathcal{X}\times \mathcal{Y}\mapsto \Delta([0,V^{\max}])$. For the purpose of analysis, we assume access to a no-regret online learning oracle for the maximum likelihood (MLE) loss, which proceeds as follows: for each iteration $k = 1,2,\ldots,K$, given any $\{x_{h,k},y_{h,k},R_{h,k}\}_{h = 1}^{H}$, the oracle outputs $\widehat{Z}_k\in \mathcal{F}$ such that
+
+$$
+\sum_{k=1}^{K} \sum_{h=1}^{H} \left( \log Z_{h}^{\star}\left(R_{h,k} \mid x_{h,k}, y_{h,k}\right) - \log \widehat{Z}_{h,k}\left(R_{h,k} \mid x_{h,k}, y_{h,k}\right) \right) \leq \operatorname{Reg}_{\mathrm{mle}}(K).
+$$
+
+Here, $\operatorname{Reg}_{\mathrm{mle}}(K)$ denotes the cumulative regret of the MLE oracle after $K$ iterations. No-regret online learning is well-studied in the literature [48, 49] and is a standard tool when reducing decision making to supervised learning [13, 50, 35]. For example, if $\mathcal{F}$ is finite and satisfies realizability, then Vovk's aggregating algorithm ensures that $\operatorname{Reg}_{\mathrm{mle}}(K) \lesssim \ln(|\mathcal{F}|)$ [51].
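+The $\ln(|\mathcal{F}|)$ regret is easy to verify in a toy setting. Below, a simplified Bayes-mixture form of the aggregating algorithm, over a finite class of Bernoulli models, incurs cumulative log loss within $\ln|\mathcal{F}|$ of the best model in hindsight (a sketch for intuition, not the paper's oracle):
+
+```python
+import math
+import random
+
+def mixture_log_loss(models, outcomes):
+    """Bayes-mixture (aggregating) predictor for log loss over a finite class
+    of Bernoulli means. Returns (mixture loss, best single-model loss)."""
+    log_w = [0.0] * len(models)          # uniform prior, up to normalization
+    mix_loss = 0.0
+    for r in outcomes:
+        m = max(log_w)                   # normalize in log-space for stability
+        w = [math.exp(lw - m) for lw in log_w]
+        lik = [p if r == 1 else 1.0 - p for p in models]
+        mix_p = sum(wi * li for wi, li in zip(w, lik)) / sum(w)
+        mix_loss += -math.log(mix_p)
+        log_w = [lw + math.log(li) for lw, li in zip(log_w, lik)]
+    best_loss = min(-lw for lw in log_w)  # best expert in hindsight
+    return mix_loss, best_loss
+
+random.seed(0)
+models = [0.1, 0.3, 0.5, 0.7, 0.9]
+outcomes = [int(random.random() < 0.7) for _ in range(500)]
+mix, best = mixture_log_loss(models, outcomes)
+print(mix - best <= math.log(len(models)))  # True: regret is at most ln|F|
+```
+
+The bound holds for every outcome sequence, not just in expectation, which is what makes such oracles usable inside the reduction.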
+
+Assumption 4.3 (Realizability). $Z^{\star} \in \mathcal{F}$.
+
+The following algorithm is a slightly modified version of Algorithm 1, amenable to theoretical analysis. The only differences from Algorithm 1 are: (1) we use the MLE oracle to learn $\widehat{Z}_k$, and (2) for the purpose of local exploration, we play a random action at the switching time $h$ before following $\pi^{\mathrm{ref}}$ to the end of the trajectory [31].
+
+We now state our main PAC bound for $Q \sharp$ .
+
+Theorem 4.4. Fix any $\eta \in (0, V^{\max}]$ and $\delta \in (0, 1)$, and set $\beta = \ln(1 / \delta) + \operatorname{Reg}_{\mathrm{mle}}(K)$. Under Assumptions 2.1 and 4.3, Algorithm 2 ensures that, with probability at least $1 - \delta$,
+
+$$
+\sum_{k=1}^{K} \left( V^{\star,\eta} - V^{\pi^{k},\eta} \right) \lesssim A V^{\max} \left( \sqrt{\sum_{h=1}^{H} \sum_{k=1}^{K} \mathsf{CV}_{h,k}^{2}(x, y) \cdot \beta} + \max_{h \in [H]} E_{h} \cdot \beta \right),
+$$
+
+where $\mathsf{CV}_{h,k}(x,y)\coloneqq \mathbb{E}_{x_h\sim \pi^k,y_h\sim \mathrm{Unif}(\mathcal{A})}\left[\frac{\sqrt{\mathrm{Var}(\exp(Z_h^\star(x_h,y_h) / \eta))}}{\mathbb{E}[\exp(Z_h^\star(x_h,y_h) / \eta)]}\right]$ is the coefficient of variation of $\exp (Z_h^\star (x_h,y_h) / \eta)$ , and $E_{h}\coloneqq \| \exp ((V^{\max} - Q_{h}^{\star ,\eta}(x_{h},y_{h})) / \eta)\|_{L^{\infty}(\pi^{\mathrm{ref}})}$ is the envelope of $\exp ((V^{\max} - Q_h^{\star ,\eta}(x_h,y_h)) / \eta)$ , both under $\pi^{\mathrm{ref}}$ .
+
+# Algorithm 2 $Q^{\sharp}$ (Theory Version)
+
+1: Input: reference $\pi^{\mathrm{ref}}$ , iteration count $K$ , regularizer $\eta$ .
+2: Initialize $\widehat{Z}_1$ randomly.
+3: for $k = 1,2,\ldots ,K$ do
+4: Let $\pi^k\gets \pi^{\widehat{Z}_k,\eta}$
+5: for step $h = 1,2,\dots ,H$ do
+6: Roll-in with $\pi^k$ for $h - 1$ steps and see $x_{h,k}$
+7: Play random action $y_{h,k}$ and transit to $x_{h+1,k}$ .
+8: Resume trajectory with $\pi^{\mathrm{ref}}$ from $x_{h + 1,k}$
+9: Let $R_{h,k}$ be cumulative rewards after time $h$ .
+10: end for
+11: Input $\{x_{h,k},y_{h,k},R_{h,k}\}_{h\in [H]}$ to MLE oracle.
+12: Receive $\widehat{Z}_k$ from MLE oracle.
+13: end for
+14: Output: $\widehat{Z}_1, \ldots, \widehat{Z}_K$ .
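+To make the control flow concrete, here is a toy instantiation on a short deterministic chain. Everything below (the environment, the empirical-mean stand-in for the MLE oracle, and the softmax policy) is our illustrative construction, not the paper's setup:
+
+```python
+import math
+import random
+
+H, A = 3, 2                      # horizon and action count of a toy chain MDP
+random.seed(0)
+
+def pi_ref(_state):
+    return random.randrange(A)   # reference policy: uniform over actions
+
+def rollout_ref(step, ones):
+    """Resume with pi_ref from `step`; terminal reward 1 iff all H actions were 1."""
+    for _ in range(step, H):
+        ones += pi_ref(None)
+    return 1.0 if ones == H else 0.0
+
+def policy_from_Z(Z_hat, eta=0.1):
+    """KL-regularized policy: sample y with weight proportional to exp(Z_hat/eta)."""
+    def pi_k(h):
+        w = [math.exp(Z_hat.get((h, y), 0.0) / eta) for y in range(A)]
+        return random.choices(range(A), weights=w)[0]
+    return pi_k
+
+def mle_oracle(data):
+    """Empirical-mean stand-in for the distributional MLE oracle."""
+    cells = {}
+    for (h, y), R in data:
+        cells.setdefault((h, y), []).append(R)
+    return {k: sum(v) / len(v) for k, v in cells.items()}
+
+def algorithm2(K):
+    Z_hat = {}                                # line 2: arbitrary initialization
+    outputs = []
+    for _ in range(K):                        # line 3
+        pi_k = policy_from_Z(Z_hat)           # line 4
+        data = []
+        for h in range(H):                    # lines 5-6: roll-in with pi_k
+            ones = sum(pi_k(t) for t in range(h))
+            y = random.randrange(A)           # line 7: uniform local exploration
+            R = rollout_ref(h + 1, ones + y)  # lines 8-9: resume with pi_ref
+            data.append(((h, y), R))
+        Z_hat = mle_oracle(data)              # lines 11-12
+        outputs.append(Z_hat)
+    return outputs                            # line 14
+
+print(len(algorithm2(5)))  # 5: one learned estimate per iteration
+```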
+
+We highlight this applies to rich-observation MDPs where our only requirement for $\mathcal{F}$ is realizability. Our bound only scales with the function class's complexity, i.e., $\ln (|\mathcal{F}|)$ , and does not contain structural complexity measures. In contrast, prior bounds in RL theory require stronger assumptions such as Bellman completeness [52, 53, 54, 55, 56, 57, 29], even in deterministic MDPs [58], and/or scale with structural complexity measures such as coverability [59, 60], eluder dimension [61, 55], and certain rank related complexity measures [62, 63, 64].
+
+Computational efficiency. Algorithm 2 is model-free and computationally efficient. In contrast, prior model-free algorithms for rich-observation MDPs perform exploration with version spaces and are computationally hard [62, 65, 55, 59, 29]. Thus, Theorem 4.4 shows that Algorithm 2 achieves both statistical and computational efficiency under mild assumptions by simply operating within the KL-regularized RL framework, which is of great relevance for post-training. We remark that [66] observed similar benefits in offline RL, while we study the harder online setting.
+
+Second-order guarantee. Thanks to the distributional perspective, Theorem 4.4 is a second-order bound [28, 29]. Its leading term $\mathcal{O}(\sqrt{\sum_{h=1}^{H}\sum_{k=1}^{K}\mathsf{CV}_{h,k}^{2}(x,y)})$ aggregates coefficients of variation.
+
+In the worst case this is $\mathcal{O}(\sqrt{\sum_{h=1}^{H} E_h^2 K})$, but when $Z_h^\star$ has small or zero variance the term vanishes, leaving the lower-order $\mathcal{O}(\max_{h \in [H]} E_h \ln(K))$, which is logarithmic in $K$. Thus the bound adaptively improves from $\mathcal{O}(\sqrt{K})$ to $\mathcal{O}(\ln(K))$ on benign instances. Interestingly, the envelope term $E_h$ is also instance-dependent; when $Q^{\star,\eta} = V^{\max}$ it equals 1, eliminating the exponential dependence on $\eta$. In general, we can tolerate an $\eta$ as small as the worst $V^{\max} - Q^{\star,\eta}$ under rollouts from $\pi^{\mathrm{ref}}$, which is reminiscent of the condition required for first-order or small-loss bounds [50, 35, 57].
+
+Bernoulli rewards simplification. For closed-ended tasks (e.g. math or multiple-choice), the reward-to-go $Z_{h}^{\star}(x,y)$ is Bernoulli, $Z_{h}^{\star}(x,y) \sim \mathrm{Ber}\big(p_{h}(x,y)\big)$ . Then the CV term can be bounded by $\mathsf{CV}_{h,k} \leq \mathbb{E}_{\pi^{k} \circ \mathrm{Unif}} \sqrt{(1 - p_{h}) / p_{h}}$ and the envelope term becomes $\| 1 / p_{h} \|_{L_{\infty}(\pi^{\mathrm{ref}})}$ , which notably does not have exponential dependence on $1 / \eta$ . Thus, as long as the reference model $\pi^{\mathrm{ref}}$ has sufficient probability of solving the problem, our bound can be made independent of $\eta$ . Finally, we note that the distributional-realizability condition can also be weakened to mean-realizability, since the only parameter of a Bernoulli distribution is its mean; also the MLE loss reduces to the binary cross-entropy loss [50, 57]. We present the corollary below and the proof in Appendix B.1.
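+The Bernoulli bound on the CV term can be sanity-checked numerically. For $Z \sim \mathrm{Ber}(p)$, $\exp(Z/\eta)$ takes value $e^{1/\eta}$ w.p. $p$ and $1$ otherwise, which gives closed forms for its mean and variance (a quick sketch):
+
+```python
+import math
+
+def cv_exp_bernoulli(p, eta):
+    """Coefficient of variation of exp(Z/eta) for Z ~ Ber(p)."""
+    e = math.exp(1.0 / eta)
+    mean = p * e + (1.0 - p)
+    var = p * (1.0 - p) * (e - 1.0) ** 2
+    return math.sqrt(var) / mean
+
+# Check CV <= sqrt((1-p)/p) over a grid, as used in the simplification
+for eta in (0.1, 0.5, 1.0):
+    for i in range(1, 20):
+        p = i / 20
+        assert cv_exp_bernoulli(p, eta) <= math.sqrt((1 - p) / p) + 1e-12
+print("bound holds on the grid")
+```
+
+Algebraically, the bound follows because $\sqrt{p(1-p)}\,(e-1) \le \sqrt{(1-p)/p}\,(pe + 1 - p)$ reduces to $0 \le 1$.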
+
+Corollary 4.5. Suppose the reward-to-go distributions are Bernoulli: $Z_h^{\star}(x, y) \sim \mathrm{Ber}(p_h(x, y))$. Then, under the setup of Theorem 4.4, the bound simplifies to:
+
+$$
+\sum_{k=1}^{K} \left( V^{\star,\eta} - V^{\pi^{k},\eta} \right) \lesssim A \left( \sqrt{\sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{x_{h} \sim \pi^{k},\, y_{h} \sim \mathrm{Unif}(\mathcal{A})} \left[ \frac{1 - p_{h}(x_{h}, y_{h})}{p_{h}(x_{h}, y_{h})} \right] \cdot \beta} + \max_{h \in [H]} \left\| \frac{1}{p_{h}(x_{h}, y_{h})} \right\|_{L_{\infty}(\pi^{\mathrm{ref}})} \cdot \beta \right).
+$$
+
+Remark: Modification for Regret Bound. It is possible to turn Theorem 4.4 into a regret bound by replacing the random action in Line 7 of Algorithm 2 with a no-regret contextual bandit oracle, where the "context" is $x_{h}$, the action is $y_{h}$, and the "reward" is $R_{h}$. This is analogous to the steps needed to convert AggreVaTe's PAC bound into a regret bound [31]. Our theory can be interpreted as a regret/PAC reduction from KL-regularized RL in deterministic MDPs to no-regret online learning, which mirrors the type of imitation learning guarantees obtained for AggreVaTe [31].
+
+# 5 Limitations & Conclusion
+
+Our results focus on deterministic MDPs, including LLM post-training, where the optimal action-value $Q^{\star,\eta}$ is a simple functional of the reference return distribution $Z^{\pi^{\mathrm{ref}}}$ and Theorem 2.2 applies directly. For domains with stochastic transitions, such as multi-agent game playing where the next state depends on the (potentially unpredictable) behavior of the other player, $Q^{\star,\eta}$ needs to be learned via temporal-difference methods, which typically rely on the stronger Bellman-completeness assumption and may introduce additional training instability. In summary, $Q^{\sharp}$ offers a principled and practical avenue for post-training LLMs. It combines a distributional-RL objective with supervised regression, enjoys provable convergence under mild assumptions, and consistently surpasses prior value-based baselines on synthetic planning and math-reasoning benchmarks, achieving higher accuracy while maintaining a lower KL divergence from the reference policy.
+
+# Acknowledgment
+
+JPZ is supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) (567916). KW is supported by a Google PhD Fellowship. ZG is supported by a LinkedIn-Cornell Grant. Wen Sun is supported by NSF IIS-2154711, NSF CAREER 2339395, and DARPA LANCER: LeArning Network CybERagents. This research is also supported by grants from the National Science Foundation (IIS-1846210, IIS-2107161, IIS-1724282, and HDR-2118310), the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), DARPA, arXiv, LinkedIn, Google, and the New York Presbyterian Hospital.
+
+# References
+
+[1] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
+[2] Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for llm reasoning. arXiv preprint arXiv:2410.08146, 2024.
+[3] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
+[4] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+[5] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
+[6] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
+[7] Wouter Kool, Herke van Hoof, and Max Welling. Buy 4 reinforce samples, get a baseline for free! 2019.
+[8] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+[9] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
+[10] Jing-Cheng Pang, Pengyuan Wang, Kaiyuan Li, Xiong-Hui Chen, Jiacheng Xu, Zongzhang Zhang, and Yang Yu. Language model self-improvement by reinforcement learning contemplation. arXiv preprint arXiv:2305.14483, 2023.
+[11] Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, et al. Controlled decoding from language models. arXiv preprint arXiv:2310.17022, 2023.
+[12] Seungwook Han, Idan Shenfeld, Akash Srivastava, Yoon Kim, and Pulkit Agrawal. Value augmented sampling for language model alignment and personalization. arXiv preprint arXiv:2405.06639, 2024.
+[13] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627-635. JMLR Workshop and Conference Proceedings, 2011.
+[14] Gerald Tesauro. Practical issues in temporal difference learning. Advances in neural information processing systems, 4, 1991.
+[15] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.
+[16] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in neural information processing systems, 33:1179-1191, 2020.
+
+[17] Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International conference on machine learning, pages 449-458. PMLR, 2017.
+[18] Gregor Bachmann and Vaishnavh Nagarajan. The pitfalls of next-token prediction. arXiv preprint arXiv:2403.06963, 2024.
+[19] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
+[20] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In Aaai, volume 8, pages 1433-1438. Chicago, IL, USA, 2008.
+[21] Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, and Ricky TQ Chen. Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. arXiv preprint arXiv:2409.08861, 2024.
+[22] Alexandre Piche, Valentin Thomas, Cyril Ibrahim, Yoshua Bengio, and Chris Pal. Probabilistic planning with sequential monte carlo methods. In International Conference on Learning Representations, 2018.
+[23] Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, et al. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint arXiv:2408.08252, 2024.
+[24] Hado Van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, and Joseph Modayil. Deep reinforcement learning and the deadly triad. arXiv preprint arXiv:1812.02648, 2018.
+[25] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(5), 2008.
+[26] Clare Lyle, Marc G Bellemare, and Pablo Samuel Castro. A comparative analysis of expected and distributional reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4504-4511, 2019.
+[27] Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G Bellemare, and Will Dabney. The statistical benefits of quantile temporal-difference learning for value estimation. In International Conference on Machine Learning, pages 29210-29231. PMLR, 2023.
+[28] Kaiwen Wang, Nathan Kallus, and Wen Sun. The central role of the loss function in reinforcement learning. arXiv preprint arXiv:2409.12799, 2024.
+[29] Kaiwen Wang, Owen Oertell, Alekh Agarwal, Nathan Kallus, and Wen Sun. More benefits of being distributional: Second-order bounds for reinforcement learning. International Conference of Machine Learning, 2024.
+[30] Marc G Bellemare, Will Dabney, and Mark Rowland. Distributional reinforcement learning. MIT Press, 2023.
+[31] Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
+[32] Wen Sun, Arun Venkatraman, Geoffrey J Gordon, Byron Boots, and J Andrew Bagnell. Deeply aggravated: Differentiable imitation learning for sequential prediction. In International conference on machine learning, pages 3309-3318. PMLR, 2017.
+[33] Jonathan D Chang, Kiante Brantley, Rajkumar Ramamurthy, Dipendra Misra, and Wen Sun. Learning to generate better than your llm. arXiv preprint arXiv:2306.11816, 2023.
+[34] Hanning Zhang, Pengcheng Wang, Shizhe Diao, Yong Lin, Rui Pan, Hanze Dong, Dylan Zhang, Pavlo Molchanov, and Tong Zhang. Entropy-regularized process reward model. arXiv preprint arXiv:2412.11006, 2024.
+
+[35] Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, and Wen Sun. The benefits of being distributional: Small-loss bounds for reinforcement learning. Advances in Neural Information Processing Systems, 36, 2023.
+[36] Edward S Hu, Kwangjun Ahn, Qinghua Liu, Haoran Xu, Manan Tomar, Ada Langford, Dinesh Jayaraman, Alex Lamb, and John Langford. Learning to achieve goals with belief state transformers. arXiv preprint arXiv:2410.23506, 2024.
+[37] Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. In International Conference on Machine Learning, pages 3067-3075. PMLR, 2017.
+[38] Boaz Barak, Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden progress in deep learning: Sgd learns parities near the computational limit. Advances in Neural Information Processing Systems, 35:21750-21764, 2022.
+[39] Emmanuel Abbe and Colin Sandon. Polynomial-time universality and limitations of deep learning. Communications on Pure and Applied Mathematics, 76(11):3493-3549, 2023.
+[40] Yiwen Kou, Zixiang Chen, Quanquan Gu, and Sham Kakade. Matching the statistical query lower bound for $k$ -sparse parity problems with sign stochastic gradient descent. Advances in Neural Information Processing Systems, 37:113001-113037, 2024.
+[41] Arash Ahmadian, Chris Cremer, Matthias Galle, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms. arXiv preprint arXiv:2402.14740, 2024.
+[42] Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024.
+[43] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
+[44] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
+[45] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426-9439, 2024.
+[46] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
+[47] Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, et al. Quality: Question answering with long input texts, yes! arXiv preprint arXiv:2112.08608, 2021.
+[48] Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge university press, 2006.
+[49] Francesco Orabona. A modern introduction to online learning. arXiv preprint arXiv:1912.13213, 2019.
+[50] Dylan J Foster and Akshay Krishnamurthy. Efficient first-order contextual bandits: Prediction, allocation, and triangular discrimination. Advances in Neural Information Processing Systems, 34:18907-18919, 2021.
+[51] Vladimir G Vovk. A game of prediction with expert advice. In Proceedings of the eighth annual conference on Computational learning theory, pages 51-60, 1995.
+
+[52] Jinglin Chen and Nan Jiang. Information-theoretic considerations in batch reinforcement learning. In International Conference on Machine Learning, pages 1042-1051. PMLR, 2019.
+[53] Yuanhao Wang, Ruosong Wang, and Sham Kakade. An exponential lower bound for linearly realizable mdp with constant suboptimality gap. Advances in Neural Information Processing Systems, 34:9521-9533, 2021.
+[54] Dylan J Foster, Akshay Krishnamurthy, David Simchi-Levi, and Yunzong Xu. Offline reinforcement learning: Fundamental barriers for value function approximation. arXiv preprint arXiv:2111.10919, 2021.
+[55] Chi Jin, Qinghua Liu, and Sobhan Miryoosefi. Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms. Advances in neural information processing systems, 34:13406-13418, 2021.
+[56] Jonathan Chang, Kaiwen Wang, Nathan Kallus, and Wen Sun. Learning bellman complete representations for offline policy evaluation. In International Conference on Machine Learning, pages 2938-2971. PMLR, 2022.
+[57] Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James McInerney, Dawen Liang, Nathan Kallus, and Csaba Szepesvari. Switching the loss reduces the cost in batch reinforcement learning. In *Forty-first International Conference on Machine Learning*, 2024.
+[58] Runzhe Wu, Ayush Sekhari, Akshay Krishnamurthy, and Wen Sun. Computationally efficient rl under linear bellman completeness for deterministic dynamics. arXiv preprint arXiv:2406.11810, 2024.
+[59] Tengyang Xie, Dylan J Foster, Yu Bai, Nan Jiang, and Sham M Kakade. The role of coverage in online reinforcement learning. arXiv preprint arXiv:2210.04157, 2022.
+[60] Zakaria Mhammedi, Dylan J Foster, and Alexander Rakhlin. The power of resets in online reinforcement learning. In *The Thirty-eighth Annual Conference on Neural Information Processing Systems*, 2024.
+[61] Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. Advances in Neural Information Processing Systems, 26, 2013.
+[62] Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E Schapire. Contextual decision processes with low bellman rank are pac-learnable. In International Conference on Machine Learning, pages 1704–1713. PMLR, 2017.
+[63] Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, and John Langford. Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches. In Conference on learning theory, pages 2898-2933. PMLR, 2019.
+[64] Simon Du, Sham Kakade, Jason Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, and Ruosong Wang. Bilinear classes: A structural framework for provable generalization in rl. In International Conference on Machine Learning, pages 2826-2836. PMLR, 2021.
+[65] Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E Schapire. On oracle-efficient PAC RL with rich observations. Advances in neural information processing systems, 31, 2018.
+[66] Masatoshi Uehara, Nathan Kallus, Jason D Lee, and Wen Sun. Offline minimax soft-q-learning under realizability and partial coverage. Advances in Neural Information Processing Systems, 36, 2023.
+[67] Kevin Yang and Dan Klein. FUDGE: Controlled text generation with future discriminators. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511–3535, Online, June 2021. Association for Computational Linguistics.
+
+[68] Stephen Zhao, Rob Brekelmans, Alireza Makhzani, and Roger Grosse. Probabilistic inference in language models via twisted sequential monte carlo. arXiv preprint arXiv:2404.17546, 2024.
+[69] Will Dabney, Mark Rowland, Marc Bellemare, and Rémi Munos. Distributional reinforcement learning with quantile regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
+[70] Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taiga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, et al. Stop regressing: Training value functions via classification for scalable deep RL. arXiv preprint arXiv:2403.03950, 2024.
+[71] Kaiwen Wang, Dawen Liang, Nathan Kallus, and Wen Sun. Risk-sensitive RL with optimized certainty equivalents via reduction to standard RL. arXiv preprint arXiv:2403.06323, 2024.
+[72] Ke Sun, Bei Jiang, and Linglong Kong. How does return distribution in distributional reinforcement learning help optimization? arXiv preprint arXiv:2209.14513, 2022.
+[73] Ke Sun, Yingnan Zhao, Wulong Liu, Bei Jiang, and Linglong Kong. Distributional reinforcement learning with regularized Wasserstein loss. Advances in Neural Information Processing Systems, 37:63184-63221, 2024.
+[74] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, MA, 1998.
+[75] Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A. Smith. Tuning language models by proxy. In First Conference on Language Modeling, 2024.
+[76] Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274-19286. PMLR, 2023.
+[77] Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, and Tong Zhang. Gibbs sampling from human feedback: A provable KL-constrained framework for RLHF. CoRR, 2023.
+[78] Richard Yuanzhe Pang, Weizhe Yuan, He He, Kyunghyun Cho, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. Advances in Neural Information Processing Systems, 37:116617-116637, 2024.
+[79] Zhaolin Gao, Jonathan Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kianté Brantley, Thorsten Joachims, Drew Bagnell, Jason D Lee, and Wen Sun. Rebel: Reinforcement learning via regressing relative rewards. Advances in Neural Information Processing Systems, 37:52354-52400, 2024.
+[80] Tengyang Xie, Dylan J Foster, Akshay Krishnamurthy, Corby Rosset, Ahmed Awadallah, and Alexander Rakhlin. Exploratory preference optimization: Harnessing implicit $Q^{\star}$-approximation for sample-efficient RLHF. arXiv preprint arXiv:2405.21046, 2024.
+[81] Jae Hyeon Cho, Minkyung Park, and Byung-Jun Lee. VPO: Leveraging the number of votes in preference optimization. arXiv preprint arXiv:2410.22891, 2024.
+[82] Shenao Zhang, Donghan Yu, Hiteshi Sharma, Han Zhong, Zhihan Liu, Ziyi Yang, Shuohang Wang, Hany Hassan, and Zhaoran Wang. Self-exploring language models: Active preference elicitation for online alignment. arXiv preprint arXiv:2405.19332, 2024.
+[83] Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562, 2015.
+[84] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861-1870. PMLR, 2018.
+[85] Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections. Advances in Neural Information Processing Systems, 32, 2019.
+
+[86] Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
+[87] Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.
+[88] Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time. IV. Communications on Pure and Applied Mathematics, 36(2):183-212, 1983.
+[89] Edward S Hu, Kwangjun Ahn, Qinghua Liu, Haoran Xu, Manan Tomar, Ada Langford, Jayden Teoh, Bryon Xu, David Yan, Dinesh Jayaraman, et al. The belief state transformer. arXiv preprint arXiv:2410.23506, 2024.
+[90] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
+[91] Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101, 2017.
+
+# NeurIPS Paper Checklist
+
+# 1. Claims
+
+Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
+
+Answer: [Yes]
+
+Justification: Please see the last paragraph of our introduction, which states our claims and the sections where each claim is supported.
+
+Guidelines:
+
+- The answer NA means that the abstract and introduction do not include the claims made in the paper.
+- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
+- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
+- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
+
+# 2. Limitations
+
+Question: Does the paper discuss the limitations of the work performed by the authors?
+
+Answer: [Yes]
+
+Justification: We discuss the limitations in Section 5.
+
+Guidelines:
+
+- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
+- The authors are encouraged to create a separate "Limitations" section in their paper.
+- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
+- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
+- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
+- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
+- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
+- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
+
+# 3. Theory assumptions and proofs
+
+Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
+
+Answer: [Yes]
+
+Justification: Please see Appendix B for the complete assumptions and proofs.
+
+Guidelines:
+
+- The answer NA means that the paper does not include theoretical results.
+- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
+- All assumptions should be clearly stated or referenced in the statement of any theorems.
+- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
+- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
+- Theorems and Lemmas that the proof relies upon should be properly referenced.
+
+# 4. Experimental result reproducibility
+
+Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+Answer: [Yes]
+
+Justification: Please see Appendix C, D, E for the information needed to reproduce our results.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
+(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+# 5. Open access to data and code
+
+Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+Answer: [Yes]
+
+Justification: We provide our code with instructions in the supplementary materials.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments requiring code.
+- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+# 6. Experimental setting/details
+
+Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+Answer: [Yes]
+
+Justification: All training and test details are provided in Section 3.1, 3.2 and Appendix C, D, E, F.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+- The full details can be provided either with the code, in appendix, or as supplemental material.
+
+# 7. Experiment statistical significance
+
+Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+Answer: [No]
+
+Justification: Error bars are not reported because the experiments are conducted on sufficiently large datasets.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+- The assumptions made should be given (e.g., Normally distributed errors).
+
+- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
+# 8. Experiments compute resources
+
+Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+Answer: [Yes]
+
+Justification: The compute resources used are reported in Appendix C.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
+- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+# 9. Code of ethics
+
+Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+Answer: [Yes]
+
+Justification: The research conducted in the paper conforms with the NeurIPS Code of Ethics.
+
+Guidelines:
+
+- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+# 10. Broader impacts
+
+Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+Answer: [NA]
+
+Justification: This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+Guidelines:
+
+- The answer NA means that there is no societal impact of the work performed.
+- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+
+- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+# 11. Safeguards
+
+Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+Answer: [NA]
+
+Justification: The paper poses no such risks.
+
+Guidelines:
+
+- The answer NA means that the paper poses no such risks.
+- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+# 12. Licenses for existing assets
+
+Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+Answer: [Yes]
+
+Justification: The benchmarks and data splits are publicly available. All licenses are respected.
+
+Guidelines:
+
+- The answer NA means that the paper does not use existing assets.
+- The authors should cite the original paper that produced the code package or dataset.
+- The authors should state which version of the asset is used and, if possible, include a URL.
+- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+
+- If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+# 13. New assets
+
+Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+Answer: [Yes]
+
+Justification: Assets will be released, and all instructions and details are included for reproduction.
+
+Guidelines:
+
+- The answer NA means that the paper does not release new assets.
+- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+- The paper should discuss whether and how consent was obtained from people whose asset is used.
+- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+# 14. Crowdsourcing and research with human subjects
+
+Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+Answer: [NA]
+
+Justification: The paper does not involve crowdsourcing nor research with human subjects.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+Answer: [NA]
+
+Justification: The paper does not involve crowdsourcing nor research with human subjects.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+# 16. Declaration of LLM usage
+
+Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+Answer: [NA]
+
+Justification: The core method development in this research does not involve LLMs as any important, original, or non-standard components.
+
+Guidelines:
+
+- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
+
+# A Related Works
+
+From the empirical side, the most relevant works are controlled decoding (CD; [11]) and value augmented sampling (VAS; [12]). Both works propose to guide the reference policy $\pi^{\mathrm{ref}}$ with $Q^{\pi^{\mathrm{ref}},0}$, the expected reward-to-go under $\pi^{\mathrm{ref}}$ without KL regularization. As discussed in Section 4.1, guiding with $Q^{\pi^{\mathrm{ref}},0}$ is not principled for the KL-regularized RL problem and can lead to both sub-optimal reward and large KL divergence from $\pi^{\mathrm{ref}}$. In contrast, we propose to guide $\pi^{\mathrm{ref}}$ with $Q^{\star,\eta}$, the expected reward-to-go under the optimal policy with KL regularization, which is the quantity that appears in the closed form of the optimal policy. A recent work [34] proposed a process reward model (PRM) of a similar form to our $Q^{\star,\eta}$, but their PRM is applied at the level of steps rather than tokens, and they do not use distributional RL or iterative training (i.e., data aggregation).
+
+In terms of reweighting $\pi^{\mathrm{ref}}$ with classifier scores, FUDGE [67] is another closely related work, but its derivation is based on Bayes' rule and it does not solve KL-regularized RL. Sequential Monte Carlo (SMC) methods [22, 68] also reweight $\pi^{\mathrm{ref}}$'s distribution with a twist function, where the optimal twist function is analogous to our $Q^{\star,\eta}$. One key difference is that SMC performs resampling, while we directly combine the logits of $\pi^{\mathrm{ref}}$ and $\exp(Q^{\star,\eta})$, thereby avoiding importance sampling and its higher variance. Finally, none of these prior works apply distributional RL losses [17, 69, 70, 57] or online data aggregation [13] to learn $Q^{\star,\eta}$, which we showed to be beneficial in our ablations. Indeed, CD and VAS both use square-loss regression over a fixed offline dataset. We also remark that risk-sensitive RL has been an important application of distributional RL [69, 71], and extending $Q^{\sharp}$ along those lines is a promising future direction.
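+Because the optimal policy has the closed form $\pi^{\star}(y|x) \propto \pi^{\mathrm{ref}}(y|x)\exp(Q^{\star,\eta}(x,y)/\eta)$, guidance amounts to an additive logit correction rather than resampling. A minimal NumPy sketch with made-up logits and $Q$-values (the function name and numbers are ours, purely illustrative):

```python
import numpy as np

def guided_next_token_dist(ref_logits, q_values, eta):
    """Next-token distribution pi(y|x) proportional to
    pi_ref(y|x) * exp(Q(x,y)/eta).  In log space this is just an
    additive correction to the reference logits."""
    logits = ref_logits + q_values / eta
    logits = logits - logits.max()      # shift for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy 4-token vocabulary: token 1 has low reference probability
# but high estimated reward-to-go.
ref_logits = np.array([2.0, 1.0, 0.5, -1.0])
q_values = np.array([0.0, 1.5, 0.0, 0.0])
pi = guided_next_token_dist(ref_logits, q_values, eta=1.0)
```

+With a large $\eta$ the correction vanishes and the guided distribution reverts to $\pi^{\mathrm{ref}}$, consistent with $\eta$'s role as the KL-regularization strength.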
+
+We also discuss some recent advances in stable distributional RL. [72] shows that the categorical distributional RL loss, which we employ in both our theory and experiments, enjoys smoothness and optimization stability under a bounded-logit condition. [73] introduces a Sinkhorn distributional RL loss as a computationally efficient alternative to the Wasserstein distance, shown to be more stable for multi-dimensional rewards. [69] proposes a KL-regularized categorical loss that is empirically more stable on Atari games. However, these references all apply TD-learning with function approximation and replay buffers, the "deadly triad" that [74] identified as notoriously difficult to scale, requiring many tricks such as double Q-learning and target networks. In contrast, our work obviates the need for TD-learning or tricks such as target networks by leveraging the special form of $Q^{\star}$ in deterministic KL-regularized MDPs, which perfectly captures the LLM post-training application we focus on.
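+For concreteness, the core of a categorical distributional loss is a mean-preserving ("two-hot") projection of a scalar target onto a fixed grid of atoms, followed by cross-entropy. The NumPy sketch below is a hypothetical illustration of that structure with an arbitrary atom grid, not our exact implementation:

```python
import numpy as np

def two_hot(target, atoms):
    """Project a scalar target onto a fixed grid of atoms, preserving
    its mean (the projection step of categorical distributional losses)."""
    target = float(np.clip(target, atoms[0], atoms[-1]))
    u = int(np.searchsorted(atoms, target))   # first atom >= target
    probs = np.zeros(len(atoms))
    if atoms[u] == target:
        probs[u] = 1.0
    else:
        l = u - 1
        w = (target - atoms[l]) / (atoms[u] - atoms[l])
        probs[l], probs[u] = 1.0 - w, w
    return probs

def categorical_loss(logits, target, atoms):
    """Cross-entropy between the predicted atom distribution
    (softmax of logits) and the two-hot projection of the target."""
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    return -np.sum(two_hot(target, atoms) * log_probs)

atoms = np.linspace(0.0, 1.0, 5)   # illustrative fixed support
```

+The projection is mean-preserving, so the encoded distribution keeps the same expectation as the original scalar target while the loss remains a well-behaved cross-entropy over logits.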
+
+We also cite some tangentially related works. Proxy tuning [75] and speculative decoding [76] both use a small model to guide the logit distribution of a large $\pi^{\mathrm{ref}}$ model. Speculative decoding is focused on maximizing the large model's likelihood, which does not relate to any extrinsic rewards. In our framework, the classifier model can be any size relative to $\pi^{\mathrm{ref}}$ , although deeper investigation into the computational benefits of using a small classifier is a promising direction for future work. We note that the star-graph problem can also be solved during pre-training by also predicting backwards via the belief state transformer [36].
+
+Finally, we discuss previous post-training methods for LLMs. First, online iterative DPO [77, 78], REBEL [79], PPO [8], etc. are based on policy gradients and require a good reset distribution, which only guarantees local optimality. XPO [80], VPO [81], SELM [82], etc. treat post-training as an exploration problem but require solving non-convex optimization oracles and rely on strong structural conditions such as coverability, eluder dimension, or linearity, similar to theoretical works like [55, 59]. Instead, we approach post-training from a fundamentally different angle and solve it via simple, computationally tractable regression and MLE oracles, without any strong structural conditions or reset-distribution assumptions.
+
+From the theoretical side, KL-regularized RL is closely related to soft RL or maximum entropy RL which are well-studied [20, 83, 84, 22]. The optimal policy decomposition in deterministic MDPs is also known in prior works [23, 21]. Our contribution is an algorithm that provably learns $Q^{\star,\eta}$ using distributional RL [17] and data aggregation [13]. This enables us to prove a reduction of KL-regularized RL (in deterministic MDPs) to no-regret online learning, which ensures convergence to the optimal policy with realizability being the only assumption for function approximation. Notably we are able to avoid more stringent conditions such as completeness or structural MDP conditions which are ubiquitous in the current literature [53, 55, 56, 35, 29, 57, 59]. [66] observed similar benefits in offline RL, while we provide guarantees for the harder online RL setting.
+
+Complementary to our online, KL-regularized setting, DualDICE [85] and AlgaeDICE [86] tackle the high-variance "curse of horizon" that arises when one performs importance weighting for long trajectories in offline RL. Both methods replace per-step importance weights with stationary-distribution density ratios, learned through a dual (Lagrangian) formulation, and have shown empirical success on low-dimensional continuous-control benchmarks, although learning is also shown to be difficult in high-dimensional control tasks [56]. Because we continually collect on-policy data and constrain updates via an explicit KL penalty—which already limits distribution shift—we do not need such ratio estimation; nonetheless, density-ratio approaches remain a promising orthogonal direction for variance reduction in purely offline LLM post-training.
+
+We remark that our theoretical guarantees are similar in structure to those of AggreVaTe [31, 32], which is a reduction of imitation learning to no-regret online learning. Besides the obvious difference in problem setting, another improvement in our work is the use of distributional RL theory to prove second-order bounds. Notably, we are able to prove second-order bounds without any of the completeness assumptions that were required in [35, 28, 29].
+
+# B Proofs
+
+In this section, we provide the full proof for Theorem 4.4.
+
+Theorem 4.4. Fix any $\eta \in (0, V^{\max}]$ and $\delta \in (0, 1)$ . Under Assumptions 2.1 and 4.3, setting $\beta = \ln(1 / \delta) + \operatorname{Reg}_{\mathrm{mle}}(K)$ , Algorithm 2 ensures that, w.p. at least $1 - \delta$ , we have
+
+$$
+\sum_{k=1}^{K} \left(V^{\star,\eta} - V^{\pi^k,\eta}\right) \lesssim A V^{\max} \left(\sqrt{\sum_{h=1}^{H} \sum_{k=1}^{K} \mathsf{CV}_{h,k}^2(x,y) \cdot \beta} + \max_{h \in [H]} E_h \cdot \beta\right),
+$$
+
+where $\mathsf{CV}_{h,k}(x,y)\coloneqq \mathbb{E}_{x_h\sim \pi^k,y_h\sim \mathrm{Unif}(\mathcal{A})}\left[\frac{\sqrt{\mathrm{Var}(\exp(Z_h^\star(x_h,y_h) / \eta))}}{\mathbb{E}[\exp(Z_h^\star(x_h,y_h) / \eta)]}\right]$ is the coefficient of variation of $\exp (Z_h^\star (x_h,y_h) / \eta)$ , and $E_{h}\coloneqq \| \exp ((V^{\max} - Q_{h}^{\star ,\eta}(x_{h},y_{h})) / \eta)\|_{L_{\infty}(\pi^{\mathrm{ref}})}$ is the envelope of $\exp ((V^{\max} - Q_h^{\star ,\eta}(x_h,y_h)) / \eta)$ , both under $\pi^{\mathrm{ref}}$ .
+
+Proof. Fix any $\eta \in (0, V^{\max}]$ . Let $Q_{h,k}(x,y) = \eta \ln \mathbb{E}_{z \sim \widehat{Z}_{h,k}(x,y)} \exp(z / \eta)$ denote the soft $Q$ function induced by the distributional estimate $\widehat{Z}_k$ , and let $\pi_h^k(y|x) \propto \pi_h^{\mathrm{ref}}(y|x) \exp(Q_{h,k}(x,y) / \eta)$ denote the policy induced by $Q_{h,k}$ . Then,
+
+$$
+V^{\star,\eta} - V^{\pi^k,\eta}
+$$
+
+$$
+\begin{array}{l}
+\stackrel{(i)}{=} \sum_{h=1}^{H} \mathbb{E}_{\pi^k}\left[ Q_h^{\star,\eta}(x_h, \pi_h^{\star,\eta}) - Q_h^{\pi^k,\eta}(x_h, \pi_h^k) + \eta\,\mathrm{KL}\!\left(\pi_h^k(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) - \eta\,\mathrm{KL}\!\left(\pi_h^{\star,\eta}(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) \right] \\
+= \sum_{h=1}^{H} \mathbb{E}_{\pi^k}\Big[ Q_h^{\star,\eta}(x_h, \pi_h^{\star,\eta}) - \eta\,\mathrm{KL}\!\left(\pi_h^{\star,\eta}(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) - \left(Q_{h,k}(x_h, \pi_h^k) - \eta\,\mathrm{KL}\!\left(\pi_h^k(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right)\right) \\
+\qquad\qquad + Q_{h,k}(x_h, \pi_h^k) - Q_h^{\star,\eta}(x_h, \pi_h^k) \Big]
+\end{array}
+$$
+
+$$
+\begin{array}{l}
+\stackrel{(ii)}{\leq} \sum_{h=1}^{H} \mathbb{E}_{\pi^k}\Big[ Q_h^{\star,\eta}(x_h, \pi_h^{\star,\eta}) - \eta\,\mathrm{KL}\!\left(\pi_h^{\star,\eta}(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) - \left(Q_{h,k}(x_h, \pi_h^{\star,\eta}) - \eta\,\mathrm{KL}\!\left(\pi_h^{\star,\eta}(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right)\right) \\
+\qquad\qquad + Q_{h,k}(x_h, \pi_h^k) - Q_h^{\star,\eta}(x_h, \pi_h^k) \Big]
+\end{array}
+$$
+
+$$
+\begin{array}{l}
+= \sum_{h=1}^{H} \mathbb{E}_{\pi^k}\left[ Q_h^{\star,\eta}(x_h, \pi_h^{\star,\eta}) - Q_{h,k}(x_h, \pi_h^{\star,\eta}) + Q_{h,k}(x_h, \pi_h^k) - Q_h^{\star,\eta}(x_h, \pi_h^k) \right] \\
+\leq 2 \sum_{h=1}^{H} \mathbb{E}_{x_h \sim \pi^k}\big[ \max_{\pi \in \{\pi^\star, \pi^k\}} \big| Q_h^{\star,\eta}(x_h, \pi) - Q_{h,k}(x_h, \pi) \big| \big] \\
+\leq 2A \sum_{h=1}^{H} \mathbb{E}_{x_h \sim \pi^k,\, y_h \sim \mathrm{Unif}(\mathcal{A})} \left| Q_h^{\star,\eta}(x_h, y_h) - Q_{h,k}(x_h, y_h) \right|,
+\end{array}
+$$
+
+where (i) is by the performance difference lemma in the soft MDP (Lemma B.2); (ii) is by Donsker-Varadhan (Lemma B.1), which proves that $\pi_h^k(x_h) = \arg \max_{\pi}\mathbb{E}_{\pi}[Q_{h,k}(x_h,\pi) - \eta\,\mathrm{KL}(\pi (x_h)\parallel \pi_h^{\mathrm{ref}}(x_h))]$ . Now, we bound the difference between the optimal and learned $Q$ functions:
+
+$$
+\begin{array}{l}
+\left| Q_h^{\star,\eta}(x, y) - Q_{h,k}(x, y) \right| \\
+= \eta \Big| \ln \mathbb{E}_{z \sim Z_h^\star(x, y)} \exp(z / \eta) - \ln \mathbb{E}_{z \sim \widehat{Z}_{h,k}(x, y)} \exp(z / \eta) \Big|
+\end{array}
+$$
+
+$$
+\begin{array}{l}
+\stackrel{(i)}{\lesssim} \eta \left(1 + V^{\max} / \eta\right) \left( \mathsf{CV}_{z \sim Z_h^\star(x,y)}(\exp(z / \eta))\, H_{h,k}(x, y) + \frac{\exp(V^{\max} / \eta) - 1}{\mathbb{E}_{z \sim Z_h^\star(x,y)} \exp(z / \eta)}\, H_{h,k}^2(x, y) \right) \\
+\leq \left(\eta + V^{\max}\right) \left( \mathsf{CV}_{z \sim Z_h^\star(x,y)}(\exp(z / \eta))\, H_{h,k}(x, y) + \frac{\exp(V^{\max} / \eta)}{\exp(Q_h^{\star,\eta}(x, y) / \eta)}\, H_{h,k}^2(x, y) \right),
+\end{array}
+$$
+
+where (i) is by Lemma B.4 and the fact that $Z^{\star}, \widehat{Z}_k \in [0, V^{\max}]$ and $H_{h,k}(x,y) := H(Z_h^\star(x,y), \widehat{Z}_{h,k}(x,y))$ is the Hellinger distance between the learned $\widehat{Z}_{h,k}$ and optimal $Z_h^\star$ .
+
+Thus, if we let $x_{h},y_{h}\sim \pi^{k}\circ_{h}$ Unif $(\mathcal{A})$ denote the distribution of rolling in with $\pi^k$ until $x_{h}$ and taking a random $y_{h}\sim \mathrm{Unif}(\mathcal{A})$ , then we have:
+
+$$
+\begin{array}{l}
+\sum_{k=1}^{K} \left(V^{\star,\eta} - V^{\pi^k,\eta}\right) \\
+\leq 2A \sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})} \left| Q_h^{\star,\eta}(x_h, y_h) - Q_{h,k}(x_h, y_h) \right| \\
+\lesssim A V^{\max} \sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})} \left[ \mathsf{CV}_{z \sim Z_h^\star(x_h,y_h)}(\exp(z / \eta))\, H_{h,k}(x_h, y_h) + \frac{\exp(V^{\max} / \eta)}{\exp(Q_h^{\star,\eta}(x_h, y_h) / \eta)}\, H_{h,k}^2(x_h, y_h) \right] \\
+\leq A V^{\max} \sqrt{\sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})}\left[ \mathsf{CV}_{h,k}^2(x_h, y_h) \right]} \sqrt{\sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})}\left[ H_{h,k}^2(x_h, y_h) \right]} \\
+\quad + A V^{\max} \max_{h \in [H]} \left\| \frac{\exp(V^{\max} / \eta)}{\exp(Q_h^{\star,\eta}(x_h, y_h) / \eta)} \right\|_{L_\infty(\pi^k \circ_h \mathrm{Unif}(\mathcal{A}))} \cdot \sum_{h=1}^{H} \sum_{k=1}^{K} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})} \left[ H_{h,k}^2(x_h, y_h) \right].
+\end{array}
+$$
+
+The final step is to bound the summed squared Hellinger terms. This can be done via the multiplicative Azuma inequality and [87, Lemma A.14], which show that for any $\delta \in (0,1)$ , w.p. at least $1 - \delta$ , $\sum_{h,k} \mathbb{E}_{\pi^k \circ_h \mathrm{Unif}(\mathcal{A})}[H_{h,k}^2(x_h, y_h)] \lesssim \sum_{h,k} H_{h,k}^2(x_{h,k}, y_{h,k}) + \ln(1/\delta) \lesssim \operatorname{Reg}_{\mathrm{mle}}(K) + \ln(1/\delta)$ , which is exactly the definition of $\beta$ . This finishes the proof of Theorem 4.4.
+
+Lemma B.1 (Donsker-Varadhan's Variational Formula; [88]). For any prior $p \in \Delta(\Theta)$ , consider the KL-regularized optimization:
+
+$$
+\pi^\star = \arg\max_{\pi \in \Delta(\Theta)} V(\pi) := \mathbb{E}_\pi\left[ Q(\theta) - \eta\,\mathrm{KL}(\pi(\theta) \,\|\, p(\theta)) \right].
+$$
+
+The optimal policy $\pi^{\star}$ is given by $\pi^{\star}(\theta)\propto p(\theta)\exp (Q(\theta) / \eta)$ and it has value $V(\pi^{\star}) = \eta \ln \mathbb{E}_{\theta \sim p}\exp (Q(\theta) / \eta)$ .
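As a numerical sanity check (our own illustrative addition, not part of the original statement), the following snippet verifies Lemma B.1 on a random discrete instance: the tilted policy attains the closed-form value $\eta \ln \mathbb{E}_{\theta \sim p} \exp(Q(\theta)/\eta)$, and another policy does not exceed it.

```python
import math
import random

random.seed(0)
n, eta = 5, 0.7
raw = [random.random() for _ in range(n)]
p = [w / sum(raw) for w in raw]                   # prior over n atoms
Q = [random.uniform(0.0, 1.0) for _ in range(n)]  # arbitrary bounded rewards

# Tilted (optimal) policy: pi_star(theta) ∝ p(theta) * exp(Q(theta) / eta)
w = [pi * math.exp(q / eta) for pi, q in zip(p, Q)]
Z = sum(w)  # equals E_{theta ~ p} exp(Q(theta) / eta)
pi_star = [wi / Z for wi in w]

def value(pi):
    """KL-regularized objective E_pi[Q(theta)] - eta * KL(pi || p)."""
    return sum(pi_i * (q - eta * math.log(pi_i / p_i))
               for pi_i, q, p_i in zip(pi, Q, p))

v_closed = eta * math.log(Z)                      # closed-form optimum
assert abs(value(pi_star) - v_closed) < 1e-9
assert value([1.0 / n] * n) <= v_closed + 1e-9    # uniform policy does worse
```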
+
+Lemma B.2 (Soft Performance Difference Lemma (PDL)). For any $f$ and $\pi$ ,
+
+$$
+V^\pi - f_1(x_1, \pi) = \sum_{h=1}^{H} \mathbb{E}_\pi\left[ \left(\mathcal{T}_h^\pi f_{h+1} - f_h\right)(x_h, y_h) \right] - \eta\,\mathrm{KL}\!\left(\pi_1(x_1) \,\|\, \pi_1^{\mathrm{ref}}(x_1)\right).
+$$
+
+For any $\pi, \pi'$ ,
+
+$$
+V^\pi - V^{\pi'} = \sum_{h=1}^{H} \mathbb{E}_\pi\left[ Q_h^{\pi'}(x_h, y_h) - Q_h^{\pi'}(x_h, \pi') + \eta\,\mathrm{KL}\!\left(\pi_h'(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) - \eta\,\mathrm{KL}\!\left(\pi_h(x_h) \,\|\, \pi_h^{\mathrm{ref}}(x_h)\right) \right].
+$$
+
+Proof. Let $\mathrm{KL}(\pi_h(x_h))\coloneqq \mathrm{KL}(\pi_h(x_h)\parallel \pi_h^{\mathrm{ref}}(x_h))$ denote KL-divergence w.r.t. $\pi^{\mathrm{ref}}$ . Then,
+
+$$
+\begin{array}{l}
+V^\pi - V^{\pi'} \\
+= \sum_{h=1}^{H} \mathbb{E}_\pi\left[ r_h - \eta\,\mathrm{KL}(\pi_h(x_h)) \right] - \left( Q_1^{\pi'}(x_1, \pi') - \eta\,\mathrm{KL}(\pi_1'(x_1)) \right) \\
+= \sum_{h=1}^{H} \mathbb{E}_\pi\left[ r_h - \eta\,\mathrm{KL}(\pi_{h+1}'(x_{h+1})) + \eta\,\mathrm{KL}(\pi_{h+1}'(x_{h+1})) + Q_{h+1}^{\pi'}(x_{h+1}, \pi') - Q_h^{\pi'}(x_h, \pi') - \eta\,\mathrm{KL}(\pi_h(x_h)) \right] + \eta\,\mathrm{KL}(\pi_1'(x_1)) \\
+= \sum_{h=1}^{H} \mathbb{E}_\pi\left[ r_h - \eta\,\mathrm{KL}(\pi_{h+1}'(x_{h+1})) + Q_{h+1}^{\pi'}(x_{h+1}, \pi') - Q_h^{\pi'}(x_h, \pi') + \eta\,\mathrm{KL}(\pi_h'(x_h)) - \eta\,\mathrm{KL}(\pi_h(x_h)) \right] \\
+= \sum_{h=1}^{H} \mathbb{E}_\pi\left[ \mathcal{T}_h^{\pi'} Q_{h+1}^{\pi'}(x_h, y_h) - Q_h^{\pi'}(x_h, \pi') + \eta\,\mathrm{KL}(\pi_h'(x_h)) - \eta\,\mathrm{KL}(\pi_h(x_h)) \right] \\
+= \sum_{h=1}^{H} \mathbb{E}_\pi\left[ Q_h^{\pi'}(x_h, y_h) - Q_h^{\pi'}(x_h, \pi') + \eta\,\mathrm{KL}(\pi_h'(x_h)) - \eta\,\mathrm{KL}(\pi_h(x_h)) \right].
+\end{array}
+$$
+
+
+
+Lemma B.3. For any two numbers $x, y \in [\exp(a), \exp(b)]$ , we have
+
+$$
+\left| \ln (x) - \ln (y) \right| \leq (1 + b - a) \left| \frac {x - y}{y} \right|.
+$$
+
+If $b - a \geq \frac{1}{2}$ , then $\max \left(1, \frac{b - a}{1 - \exp(a - b)}\right) \leq 3(b - a)$ .
+
+Proof. If $x \geq y$ , then $\ln(x) - \ln(y) = \ln(1 + (x - y)/y) \leq (x - y)/y$ . If $x < y$ , then $\ln(y) - \ln(x) = -\ln(1 + (x - y)/y)$ . By premise, we have $0 \geq \frac{x - y}{y} \geq \exp(a - b) - 1$ . Note that $-\ln(1 + z)$ is convex and is thus upper bounded by the line connecting $(0,0)$ and $(\exp(a - b) - 1, b - a)$ , i.e., $-\ln(1 + z) \leq \frac{b - a}{1 - \exp(a - b)} |z|$ for $0 \geq z \geq \exp(a - b) - 1$ . Thus, $-\ln(1 + (x - y)/y) \leq \frac{b - a}{1 - \exp(a - b)} \left| \frac{x - y}{y} \right|$ . Thus, we've shown that $|\ln(x) - \ln(y)| \leq \max\left(1, \frac{b - a}{1 - \exp(a - b)}\right) \left| \frac{x - y}{y} \right|$ . Finally, since $\frac{x}{1 - \exp(-x)} \leq 1 + x$ when $x \geq 0$ , we have $\max(1, \frac{b - a}{1 - \exp(a - b)}) \leq \max(1, 1 + b - a) = 1 + b - a$ .
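Lemma B.3 is easy to stress-test numerically. The sketch below (an illustrative addition of ours, with arbitrary endpoints $a$, $b$) checks the inequality on a grid of points in $[\exp(a), \exp(b)]^2$:

```python
import math

def log_ratio_bound_holds(a, b, n=50):
    """Check |ln x - ln y| <= (1 + b - a) |x - y| / y (Lemma B.3)
    on an (n+1) x (n+1) grid of points x, y in [exp(a), exp(b)]."""
    lo, hi = math.exp(a), math.exp(b)
    grid = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return all(
        abs(math.log(x) - math.log(y)) <= (1 + b - a) * abs(x - y) / y + 1e-12
        for x in grid
        for y in grid
    )

assert log_ratio_bound_holds(-1.0, 2.0)
assert log_ratio_bound_holds(0.0, 0.5)
```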
+
+Lemma B.4. For any distributions $p, q$ on $[a, b]$ , we have
+
+$$
+\left| \ln \sum_z p(z) e^z - \ln \sum_z q(z) e^z \right| \lesssim (1 + b - a) \left( \frac{\sqrt{\mathrm{Var}_q(e^z)}}{\mathbb{E}_q e^z}\, H(p, q) + \frac{\exp(b) - \exp(a)}{\mathbb{E}_q e^z}\, H^2(p, q) \right),
+$$
+
+where $H^2(p, q) = \frac{1}{2} \sum_z (\sqrt{p(z)} - \sqrt{q(z)})^2$ is the squared Hellinger distance.
+
+Proof. By Lemma B.3, we have $|\ln \sum_{z}p(z)e^{z} - \ln \sum_{z}q(z)e^{z}|\leq (1 + b - a)\Big|\frac{\sum_{z}(p(z) - q(z))e^{z}}{\sum_{z}q(z)e^{z}}\Big|$ . By Lemma B.5, we have that the numerator is bounded by $\sqrt{\operatorname{Var}_q(e^z)} H(p,q) + (\exp (b) - \exp (a))H^2 (p,q)$ .
+
+Lemma B.5 (Second-Order Lemma). Suppose $p, q$ are distributions on the interval $[a, b]$ . Then, we have
+
+$$
+| \bar{p} - \bar{q} | \lesssim \sqrt{\mathrm{Var}(p)}\, H(p, q) + (b - a)\, H^2(p, q).
+$$
+
+Proof. Define $p', q'$ as the normalized distributions on $[0,1]$ , i.e., $p'$ is the law of $X' = (X - a) / (b - a)$ where $X \sim p$ . Then, we have
+
+$$
+\begin{array}{l}
+| \bar{p} - \bar{q} | = (b - a)\, | \bar{p}' - \bar{q}' | \\
+\lesssim (b - a) \left( \sqrt{\mathrm{Var}(p')}\, H(p', q') + H^2(p', q') \right) \\
+= \sqrt{\mathrm{Var}(p)}\, H(p, q) + (b - a)\, H^2(p, q),
+\end{array}
+$$
+
+where the $\lesssim$ step is due to the second-order lemma of [28].
+
+
+
+# B.1 Case of Bernoulli reward-to-go
+
+In this section, we focus on problems where $Z_h^\star(x,y) = \operatorname{Ber}(p_h(x,y))$ is a Bernoulli distribution, which is common for closed-ended problems such as math or multiple choice. Here, the envelope term can be bounded as follows:
+
+Lemma B.6. If $Z_h^\star(x, y) = \operatorname{Ber}(p_h(x, y))$ , then we have $V^{\max} = 1$ and for all $\eta > 0$ , we have
+
+$$
+\exp\!\left( \left(1 - Q_h^{\star,\eta}(x, y)\right) / \eta \right) \leq 1 / p_h(x, y).
+$$
+
+Proof. Fix $x, y$ and let $p = p_h(x, y)$ . Then, it suffices to show that
+
+$$
+1 / \eta - \ln (p \exp (1 / \eta) + 1 - p) \leq \ln (1 / p).
+$$
+
+This is indeed true because
+
+$$
+\begin{array}{l}
+1/\eta - \ln\left(p \exp(1/\eta) + 1 - p\right) = \ln\left( \frac{\exp(1/\eta)}{p \exp(1/\eta) + 1 - p} \right) \\
+= \ln\left( \frac{1}{p + (1 - p) \exp(-1/\eta)} \right) \leq \ln(1/p).
+\end{array}
+$$
+
+
+
+We can also bound the coefficient of variation in terms of the Bernoulli parameter.
+
+Lemma B.7. If $Z_h^\star(x, y) = \operatorname{Ber}(p_h(x, y))$ , then for all $\eta > 0$ , we have
+
+$$
+\frac{\sqrt{\mathrm{Var}\left(\exp\left(Z_h^\star(x, y) / \eta\right)\right)}}{\mathbb{E}\left[ \exp\left(Z_h^\star(x, y) / \eta\right) \right]} \leq \sqrt{(1 - p) / p}.
+$$
+
+Proof. Fix $x, y$ and let $p = p_h(x, y)$ . Then, the variance term is:
+
+$$
+\begin{array}{l}
+\mathrm{Var}\left(\exp\left(Z_h^\star(x, y) / \eta\right)\right) = \mathbb{E}\left[ \exp\left(2 Z_h^\star(x, y) / \eta\right) \right] - \left(\mathbb{E}\left[ \exp\left(Z_h^\star(x, y) / \eta\right) \right]\right)^2 \\
+= p \exp(2/\eta) + (1 - p) - \left(p \exp(1/\eta) + (1 - p)\right)^2 \\
+= p \exp(2/\eta) + (1 - p) - p^2 \exp(2/\eta) - 2p \exp(1/\eta)(1 - p) - (1 - p)^2 \\
+= p(1 - p) \exp(2/\eta) + (1 - p)p - 2p(1 - p) \exp(1/\eta) \\
+= p(1 - p)\left( \exp(2/\eta) + 1 - 2\exp(1/\eta) \right) \\
+= p(1 - p)\left( \exp(1/\eta) - 1 \right)^2.
+\end{array}
+$$
+
+Thus, the CV is:
+
+$$
+\frac{\sqrt{p(1 - p)\left(\exp(1/\eta) - 1\right)^2}}{p \exp(1/\eta) + 1 - p} = \frac{\sqrt{p(1 - p)}\left(\exp(1/\eta) - 1\right)}{p\left(\exp(1/\eta) - 1\right) + 1} \leq \frac{\sqrt{p(1 - p)}}{p} = \sqrt{(1 - p) / p}.
+$$
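The closed-form variance and the resulting CV bound can be verified numerically; the following script is an illustrative addition of ours that checks both for a range of $p$ and $\eta$:

```python
import math

def cv_bernoulli(p, eta):
    """Coefficient of variation of exp(Z / eta) for Z ~ Ber(p)."""
    e = math.exp(1.0 / eta)
    mean = p * e + (1 - p)
    var = p * e**2 + (1 - p) - mean**2
    # Closed form derived in Lemma B.7: p(1-p)(exp(1/eta) - 1)^2
    assert abs(var - p * (1 - p) * (e - 1) ** 2) < 1e-6 * max(1.0, var)
    return math.sqrt(var) / mean

for p in (0.05, 0.3, 0.9):
    for eta in (0.2, 1.0, 5.0):
        # Lemma B.7: CV is at most sqrt((1 - p) / p) for every eta > 0.
        assert cv_bernoulli(p, eta) <= math.sqrt((1 - p) / p) + 1e-9
```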
+
+
+Figure 5: Full results for star-graph experiments. The empty bars for $G(5,5)$ and $G(3,8)$ are for DPO, which pushed down both the chosen and rejected paths, resulting in 0 accuracy.
+
+# C Additional Discussion and Implementation Details for Star-Graph
+
+The shortcut behavior in the star-graph task, also known as the Clever Hans Trick [18], arises directly from the auto-regressive next-token prediction objective. Specifically, the model minimizes loss by memorizing the first token seen during training and following the corresponding edge, achieving low training error but generalizing poorly at test time when the initial token is not provided. This leads to a brittle, shortcut-based policy.
+
+Policy-based methods such as REINFORCE and RPO attempt to correct this by upweighting high-reward trajectories. However, because their loss is still based on the product of next-token prediction probabilities, the same as in pretraining, they are vulnerable to the same shortcut and require exponentially many samples via gradient descent on the policy to correct it once it is learned (Theorem 1 of [89]).
+
+In contrast, $Q\sharp$ does not depend on myopic next-token supervision. Instead, it learns to predict the cumulative reward-to-go from each (prefix, token) pair under the reference policy, and uses this to guide generation toward optimal completions. This token-level value modeling allows $Q\sharp$ to predict future outcomes and assign higher value to early tokens that lead to long-term reward. In other words, $Q\sharp$ 's loss function directly trains it to plan, making it robust to the Clever Hans Trick [18] that undermines next-token-based methods. As shown in Figure 5, both $Q\sharp$ and CD solve the star-graph task near-perfectly, while policy-based methods perform at random-guess level.
+
+We follow the setup of [18] and reuse their official code for producing the star-graph results. We used the GPT-2 small model for graphs $G(2,5)$ , $G(5,5)$ and the GPT-2 medium model for $G(3,8)$ [90].3 We first pretrain these models with next-token prediction on a pretraining set of $200k$ random graphs and correct paths. We call the resulting model the "pre-trained" model; as observed by [18], these models have the Clever Hans shortcut and thus do not generalize well on unseen test graphs. We highlight that this is a failure in generalization, since the pre-trained model achieves near-perfect accuracy on the training set but only $1/d$ accuracy on the test set.
+
+In order to fix the Clever Hans shortcut, we perform post-training with common baselines (REINFORCE [41], DPO [9], and RPO [42]) as well as our algorithm $Q\sharp$ . The post-training is done on another set of $200k$ random graphs. For REINFORCE, the reward function we use is 1 if the response is correct and -0.1 if incorrect; we noticed that if the incorrect reward is too negative, the model collapses to an accuracy of 0. For DPO and RPO, we sampled pairwise responses $(y_{\mathrm{chosen}}, y_{\mathrm{reject}})$ where $y_{\mathrm{chosen}}$ is the correct path and $y_{\mathrm{reject}}$ is an incorrect shortcut path sampled from the pretrained model. For $Q\sharp$ , we also trained the classifier on the same dataset of pairwise responses, where correct paths are marked with reward 1 and incorrect responses are marked with reward 0. Throughout, we used the AdamW optimizer with weight decay 0.1 and batch size 256, and trained for 10 epochs. The learning rates were $2.5e-4$ for pre-training; $1e-5$ for REINFORCE; $1e-4$ for DPO and RPO; and $1e-4$ for classifier-based CD and $Q\sharp$ . All models were trained on a single A100 or H100 GPU and evaluated on a separate test set of $20k$ graphs, using top-k 10 and temperature 1.0. For $Q\sharp$ and CD, we use $\eta = 0.1$ . We found that DPO often pushed down the probabilities of both the chosen and rejected paths, leading to poor performance even on the training set; RPO fixed this issue and so we report the RPO numbers.
+
+# D Additional Model Details
+
+$\pi^{\mathrm{ref}}$ models. All models we use in the experiments are the "Instruct" versions. That is, Llama 3 8B refers to meta-llama/Meta-Llama-3-8B-Instruct and we use the default chat template and system message from Meta to interact with them.
+
+$Q^{\sharp}$ models. Two variants of $Q^{\sharp}$ are implemented and evaluated: Q-type and V-type. Specifically, Q-type takes a partial generation $x$ as input and computes $Q^{\star,\eta}(x,y)$ for all $y$ in the vocabulary of the $\pi^{\mathrm{ref}}$ model, whereas V-type takes the concatenation of $x$ and a specific token $\hat{y}$ as input and outputs a single value that represents $Q^{\star,\eta}(x,\hat{y})$ . Because of this key difference, Q-type can efficiently calculate $Q^{\star,\eta}$ with just one forward pass, and its model architecture can be identical to the original LLM. V-type, however, has a prohibitive inference cost with a naive implementation, since it requires making $|V|$ forward passes at every decoding step to calculate the full $Q$ function. In the paragraph below, we discuss our efficient implementation to address this issue. For Q-type, we initialize the model directly from Llama 3.2 1B; for V-type, we replace the last layer of Llama 3.2 1B with a randomly initialized fully connected layer with output size 1. V-type $Q^{\sharp}$ therefore also has slightly fewer parameters than Q-type. By default, we use V-type $Q^{\sharp}$ in our experiments.
+
+Efficient inference with V-type. To speed up inference for V-type, we note that not all tokens in the vocabulary are worth scoring: for any partial generation $x$ , most tokens have extremely low probability under $\pi^{\mathrm{ref}}$ as the next-token candidate. In our preliminary experiments, we found that computing values for only the top 20 tokens ranked by $\pi^{\mathrm{ref}}$ gives performance similar to computing them for all tokens. Additionally, the values for these tokens can be computed in one forward pass. To accomplish this, we input the partial generation $x$ together with the top 20 candidate next tokens, and modify the attention mask so that the candidate tokens do not attend to each other but still attend to $x$ . This allows us to compute the values for the top tokens in just one additional forward pass without any approximation.
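To illustrate the masking trick, the sketch below (a simplified illustration of our own with toy dimensions, not the production code) constructs the boolean attention mask for a prefix of length $L$ followed by $k$ candidate tokens, so that each candidate attends to the prefix and to itself but not to the other candidates:

```python
def vtype_attention_mask(L, k):
    """Attention mask for scoring k candidate next tokens in one forward pass.

    Positions 0..L-1 hold the prefix x; positions L..L+k-1 hold the k
    candidate tokens. mask[i][j] == True means position i may attend to j.
    """
    n = L + k
    mask = [[False] * n for _ in range(n)]
    for i in range(L):                 # prefix: ordinary causal attention
        for j in range(i + 1):
            mask[i][j] = True
    for c in range(L, n):              # candidates: attend to prefix + self
        for j in range(L):
            mask[c][j] = True
        mask[c][c] = True
    return mask

m = vtype_attention_mask(L=4, k=3)
assert m[5][3] and m[5][5]             # a candidate sees the prefix and itself
assert not m[5][4] and not m[5][6]     # but not the other candidates
```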
+
+# E $Q\sharp$ Training Settings
+
+We collect 16 samples for each question in the training set and label every sample as either correct (1) or incorrect (0) based on the final answer. The first round of training data is collected with just $\pi^{\mathrm{ref}}$ . For training the $Q\sharp$ model, we filter out samples from questions where all samples are either correct or incorrect. We use a learning rate of $2e - 5$ and weight decay of 0.01 with the AdamW optimizer [91]. The model is trained for 5 epochs. We train $Q\sharp$ for two iterations, as we observe performance converges. In the second iteration, we repeat the above data collection procedure and concatenate the result with the training data from the first round. The model is always trained from scratch between iterations.
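The data collection and filtering step can be sketched as follows (illustrative pseudocode with hypothetical field names such as `gold_answer`; the actual pipeline may differ):

```python
from dataclasses import dataclass
import random

@dataclass
class Question:
    text: str
    gold_answer: str

@dataclass
class Sample:
    answer: str

def build_training_set(questions, sampler, n_samples=16):
    """Collect rollouts per question, label each 1 (correct) / 0 (incorrect)
    by final answer, and drop questions whose samples are all correct or
    all incorrect (they carry no contrastive signal for the classifier)."""
    data = []
    for q in questions:
        samples = [sampler(q) for _ in range(n_samples)]
        labels = [int(s.answer == q.gold_answer) for s in samples]
        if 0 < sum(labels) < len(labels):   # keep mixed-outcome questions only
            data.extend(zip(samples, labels))
    return data

# Toy sampler standing in for pi_ref: answers correctly with probability 0.5.
random.seed(0)
qs = [Question("q1", "42"), Question("q2", "7")]
sampler = lambda q: Sample(q.gold_answer if random.random() < 0.5 else "wrong")
data = build_training_set(qs, sampler)
assert all(lbl in (0, 1) for _, lbl in data)
```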
+
+# F Additional Evaluation Details
+
+We evaluate all methods and models with zero-shot prompting. The prompt template is 'Problem:\n\n{0} Write your answer inside \boxed{}.\n\nSolution:' where {0} is replaced by the actual question from the dataset. The MATH-500 dataset can also be found at Huggingface 4.
+
+# G Math Reasoning Results on Qwen 2.5
+
+We conduct experiments using Qwen 2.5 [46], where a 1.5B model guides the 7B version on GSM8K, MATH and AIME-24 (Table 5). All other configurations mirror those used with Llama 3. We find that $Q\sharp$ consistently outperforms both $\pi^{\mathrm{ref}}$ and CD across all datasets, achieving higher accuracy with lower KL divergence. Compared to Table 1, Qwen 2.5 yields stronger overall performance, likely due to its stronger base model, demonstrating that $Q\sharp$ generalizes well across model families.
+
+Table 5: Comparison of $Q \sharp$ with $\pi^{\mathrm{ref}}$ and CD baseline on GSM8K (left), MATH (middle) and AIME-24 (right) with Qwen 2.5.
+
+| Dataset | GSM8K | | | MATH | | | AIME-24 | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Methods | πref | CD | Q# | πref | CD | Q# | πref | CD | Q# |
+| pass@1 ↑ | 76.1 | 79.0 | 83.5 | 58.6 | 60.7 | 61.9 | 9.3 | 13.5 | 14.1 |
+| maj1@8 ↑ | 92.9 | 93.1 | 93.8 | 72.8 | 74.2 | 74.8 | 16.7 | 16.7 | 20.0 |
+| KL-Divergence ↓ | - | 5.37 | 4.10 | - | 7.07 | 6.46 | - | 9.95 | 9.23 |
+
+# H Results on QuALITY
+
+In Table 6, we show the results of $Q \sharp$ on QuALITY [47], a challenging multiple-choice reading comprehension benchmark with long-form passages drawn from Project Gutenberg. $Q \sharp$ consistently performs better than baselines.
+
+Table 6: Comparison of $Q\sharp$ with $\pi^{\mathrm{ref}}$ and CD baseline on QuALITY with Qwen 2.5 and Llama 3.1.
+
+| Model | Qwen 2.5 7B | | | Llama 3.1 8B | | |
+| --- | --- | --- | --- | --- | --- | --- |
+| Methods | πref | CD | Q# | πref | CD | Q# |
+| pass@1 ↑ | 64.5 | 64.2 | 68.1 | 73.5 | 75.1 | 75.9 |
+| maj1@8 ↑ | 72.0 | 66.3 | 73.3 | 79.3 | 79.3 | 81.1 |
+| KL-Divergence ↓ | - | 12.32 | 7.90 | - | 9.23 | 8.88 |
+
+# I Comparison with Policy-based Methods
+
+$Q\sharp$ can serve as a lightweight complement to policy-based approaches. Specifically, $Q\sharp$ can guide both the base reference policy and policies trained via reinforcement learning such as PPO. To empirically assess this, we present results on the MATH dataset where $Q\sharp$ is instantiated with a Qwen 2.5 1.5B model and used to guide: (1) the base Qwen 2.5 7B reference model and (2) a PPO-trained version of the same model. As shown in Table 7, $Q\sharp$ consistently improves both pass@1 and maj1@8 for each policy. In particular, when applied to the PPO-trained policy, $Q\sharp$ reduces the KL divergence from $\pi^{\mathrm{ref}}$ while further boosting accuracy. We also note a qualitative distinction: PPO improves pass@1 but slightly reduces maj1@8, indicating that its generations tend to be lower entropy and less diverse. $Q\sharp$ , in contrast, improves both metrics while maintaining closer alignment with $\pi^{\mathrm{ref}}$ .
+
+In terms of efficiency, $Q\sharp$ is significantly lighter to train. PPO requires approximately 20 hours on 4 H100 GPUs, whereas $Q\sharp$ training completes in roughly 5 hours on a single H100 GPU, thanks to its supervised learning objective and the use of a much smaller model. These findings suggest that $Q\sharp$ can effectively enhance performance while maintaining closer alignment with the reference policy, demonstrating its practical advantage as a complementary lightweight module.
+
+Table 7: Comparison of $Q\sharp$ with PPO-trained models and their guided variants on the MATH dataset.
+
+| Methods | πref | πref + Q# | PPO | PPO + Q# |
+| --- | --- | --- | --- | --- |
+| pass@1 ↑ | 58.6 | 61.9 | 68.4 | 71.1 |
+| maj1@8 ↑ | 72.8 | 74.8 | 72.4 | 73.4 |
+| KL-Divergence ↓ | - | 6.46 | 69.52 | 60.53 |
+
+# J Computational Complexity and Runtime Comparison of $Q\sharp$
+
+$Q\sharp$ and other value-based baselines such as CD [11] have the same computational complexity. Compared to generating responses solely with $\pi^{\mathrm{ref}}$ , value-based approaches additionally use the guidance model to compute a $Q$ function at every decoding step. That is, it increases complexity by the ratio of the guidance model's size to that of $\pi^{\mathrm{ref}}$ . Since the guidance model can be much smaller in size compared to $\pi^{\mathrm{ref}}$ , the overhead is mild. For instance, guiding a Llama 8B with Llama 1B increases complexity by $12.5\%$ .
+
+Additionally, we efficiently implemented value-based guidance for $Q \sharp$ in Hugging Face using a LogitsProcessor and key-value caches. On an Nvidia A6000, generating one response on the MATH test set takes 4.10s for $\pi^{\mathrm{ref}}$ and 5.18s for $Q \sharp$ , slightly exceeding the expected $12.5\%$ overhead, possibly due to the sequential $Q$ -function computation in the LogitsProcessor. The code for our implementation can be found in the supplementary materials.
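As a minimal sketch of the guidance step (a pure-Python simplification of our own, not the implementation in the supplementary materials), the next-token distribution $\pi(y|x) \propto \pi^{\mathrm{ref}}(y|x)\exp(Q(x,y)/\eta)$ restricted to the top-$k$ tokens of $\pi^{\mathrm{ref}}$ can be computed as:

```python
import math

def guided_next_token_dist(ref_logits, q_values, eta=0.1, top_k=20):
    """Distribution ∝ pi_ref(y) * exp(Q(y)/eta) over the top_k tokens of pi_ref.

    ref_logits and q_values are lists indexed by token id; tokens outside
    the top_k of ref_logits receive probability 0 (matching the V-type
    restriction to the highest-probability candidates).
    """
    top = sorted(range(len(ref_logits)), key=lambda y: ref_logits[y])[-top_k:]
    scores = {y: ref_logits[y] + q_values[y] / eta for y in top}
    m = max(scores.values())                     # stabilize the softmax
    w = {y: math.exp(s - m) for y, s in scores.items()}
    Z = sum(w.values())
    return {y: wi / Z for y, wi in w.items()}

dist = guided_next_token_dist([0.1 * i for i in range(100)],
                              [0.0] * 99 + [1.0])
assert len(dist) == 20                           # only top-20 tokens kept
assert abs(sum(dist.values()) - 1.0) < 1e-9      # valid distribution
```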
+
+# K Qualitative Examples
+
+In Figure 6 and the ones below it, we qualitatively visualize, side by side, generation results from $\pi^{\mathrm{ref}}$ and $Q\sharp$ in the Llama 3 8B GSM8K and Llama 3.1 8B MATH settings. In the first example of Figure 6, we observe that $\pi^{\mathrm{ref}}$ and $Q\sharp$ start off similarly by calculating the total number of cookies Shannon eats, but $\pi^{\mathrm{ref}}$ then roughly guesses the answer (highlighted in red) without calculating it precisely, whereas $Q\sharp$ calculates the answer step by step (in blue). In the second MATH example, $\pi^{\mathrm{ref}}$ first states a confusing claim (highlighted in orange) and then mistakenly adds $(5 + 5)$ instead of multiplying when counting all the possibilities. $Q\sharp$ , however, correctly solves the problem by multiplying the options $(5*5)$ for Paul and Jesse and then adding the single case of using the number 2, arriving at the correct answer.
+
+More examples with various levels of difficulty can be found on subsequent pages. We highlight that $Q^{\sharp}$ can still make confusing statements similar to $\pi^{\mathrm{ref}}$ , even in solutions that arrive at the correct final answer. For example, in the Cecilia puppy example (the first after Figure 6), similar to $\pi^{\mathrm{ref}}$ , $Q^{\sharp}$ also performs a division by 1 where a multiplication is intended.
+
+Problem from GSM8K: Shannon makes her own madeleine cookies and eats 2 a night as a treat. She wants to make enough cookies to last her for 30 days by storing them in the freezer. Her recipe makes 1 dozen madeleine cookies. How many dozens of cookies will she need to make so she has enough for 30 days?
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : Since Shannon eats 2 cookies per night, she will eat $2 \cdot 30 = 60$ cookies over 30 days. She will need $\boxed{7}$ dozen cookies, since $7 \cdot 12 = 84 > 60$ .
+
+Final Answer: The final answer is 7. I hope it is correct.
+
+Correct Solution from $Q^{\sharp}$ : Shannon eats 2 cookies per night, so she will eat $2 * 30 = 60$ cookies in 30 days. Since 1 dozen cookies is 12, we can divide 60 by 12 to find out how many dozens she needs. $60 \div 12 = 5$ . The answer is $\boxed{5}$ dozens.
+
+Counting & Probability Problem from MATH: Paul and Jesse each choose a number at random from the first six primes. What is the probability that the sum of the numbers they choose is even?
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : To find the probability that the sum of the numbers Paul and Jesse choose is even, we need to consider the possible combinations of even and odd numbers. The first six primes are 2, 3, 5, 7, 11, and 13. Since 2 is the only even prime number, we will always have one even number and one or more odd numbers. For the sum to be even, both numbers must be either even or odd. If Paul chooses the even number (2), Jesse can choose any of the 5 odd numbers (3, 5, 7, 11, 13), resulting in 5 possibilities. If Paul chooses any of the 5 odd numbers, Jesse can choose the even number (2), resulting in 5 more possibilities. Therefore, there are $5 + 5 = 10$ possibilities where the sum is even. The total number of possible combinations is $6 * 6 = 36$ (6 choices for Paul and 6 choices for Jesse). The probability that the sum of the numbers Paul and Jesse choose is even is $10/36$ , which simplifies to $5/18$ . The answer is: $\frac{5}{18}$
+
+Correct Solution from $Q^{\sharp}$ : The first six primes are 2, 3, 5, 7, 11, and 13. For the sum of the numbers Paul and Jesse choose to be even, they must either both choose even numbers or both choose odd numbers. There is only one even prime number, which is 2. Paul has 6 options, and Jesse also has 6 options. The total number of possibilities is $6 * 6 = 36$ . For the case where both choose the even number, there is only one possibility (2, 2). For the case where both choose odd numbers, there are 5 options for Paul and 5 options for Jesse. The total number of possibilities is $5 * 5 = 25$ . So, the total number of possibilities where the sum is even is $1 + 25 = 26$ . The probability that the sum of the numbers they choose is even is $26 / 36 = 13 / 18$ . The answer is $\boxed{\frac{13}{18}}$
+
+Figure 6: Example generations from $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ on GSM8K and MATH questions. $\pi^{\mathrm{ref}}$ consistently fails to solve the problems compared to $Q^{\sharp}$ . The generations from $\pi^{\mathrm{ref}}$ and $Q^{\sharp}$ usually start off with similar chain-of-thought reasoning, but $\pi^{\mathrm{ref}}$ gradually makes unjustified reasoning leaps or simply guesses the answer, whereas $Q^{\sharp}$ tends to reason through the problems more logically. Colors are manually added for ease of visualization: blue represents correct reasoning steps or answers, red represents clearly incorrect ones, and orange represents ambiguous or minor mistakes that could potentially lead to clearly incorrect steps or answers.
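The arithmetic in both quoted generations above is easy to verify mechanically. A short Python check (ours, added for verification; not part of the quoted model outputs):

```python
from fractions import Fraction
from itertools import product
from math import ceil

# Shannon (GSM8K): 2 cookies a night for 30 days, 12 cookies per dozen.
cookies_needed = 2 * 30
dozens = ceil(cookies_needed / 12)
print(dozens)  # 5

# Paul and Jesse (MATH): probability that the sum of two independently
# chosen numbers from the first six primes is even.
primes = [2, 3, 5, 7, 11, 13]
even_sums = sum((a + b) % 2 == 0 for a, b in product(primes, repeat=2))
prob = Fraction(even_sums, len(primes) ** 2)
print(prob)  # 13/18
```

Both checks agree with the $Q^{\sharp}$ answers (5 dozens, $13/18$) and contradict the $\pi^{\mathrm{ref}}$ answers (7, $5/18$).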
+
+Problem from GSM8K: Cecilia just bought a new puppy. According to her veterinarian, she has to feed the puppy 1 cup of dog food every day for the first 180 days. Then she has to feed the puppy 2 cups of dog food every day for the rest of its life. If one bag of dog food contains 110 cups, how many bags of dog food will Cecilia use in the first year?
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : For the first 180 days, Cecilia needs $\frac{180}{1} = 180$ cups of dog food. For the rest of the year, she needs $\frac{365 - 180}{2} = 92.5$ cups of dog food. So in total she needs $180 + 92.5 = 272.5$ cups of dog food. If one bag of dog food contains 110 cups, then she will need $\frac{272.5}{110} = \boxed{2.5}$ bags of dog food. Final Answer: The final answer is 2.5. I hope it is correct.
+
+Correct Solution from $Q^{\sharp}$ : The first 180 days, she will need to feed her puppy $\frac{180}{1} = 180$ cups of dog food. After that, she will need to feed her puppy 365-180=185 days of 2 cups each, for a total of $2 \cdot 185 = 370$ cups of dog food. So, the total amount of dog food needed in the first year is $180 + 370 = 550$ cups. Since each bag of dog food contains 110 cups, the number of bags needed is
+
+$$
+\boxed{\frac{550}{110} = 5}.
+$$
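The cup counts in the $Q^{\sharp}$ solution can be checked directly (a verification sketch of ours, not model output):

```python
# Cecilia (GSM8K): 1 cup/day for the first 180 days, then 2 cups/day for
# the remaining 365 - 180 = 185 days of the year; 110 cups per bag.
total_cups = 180 * 1 + (365 - 180) * 2
bags = total_cups / 110
print(total_cups, bags)  # 550 5.0
```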
+
+Problem from GSM8K: According to its nutritional info, a bag of chips has 250 calories per serving. If a $300\mathrm{g}$ bag has 5 servings, how many grams can you eat if your daily calorie target is 2000 and you have already consumed 1800 calories?
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : Let's break this down step by step!
+
+1. The bag of chips has 5 servings, and each serving has 250 calories. To find the total calories in the bag, multiply the number of servings by the calories per serving:
+5 servings $\times$ 250 calories/serving $= 1250$ calories
+2. You have already consumed 1800 calories, and your daily calorie target is 2000. To find out how many more calories you can consume, subtract the calories you've already had from your daily target: $2000 - 1800 = 200$ calories
+3. Since each serving of chips has 250 calories, you can consume: 200 calories ÷ 250 calories/serving = 0.8 servings
+4. Since there are 5 servings in the bag, you can eat:
+
+0.8 servings $\times 300\mathrm{g / bag} = 240\mathrm{g}$
+
+So, you can eat up to $240\mathrm{g}$ of the bag of chips.
+
+Here's the answer:
+
+240 g
+
+Correct Solution from $Q^{\sharp}$ : A delicious problem!
+
+Let's break it down step by step:
+
+1. The bag of chips has 250 calories per serving, and there are 5 servings in the bag.
+2. The total calories in the bag are: 250 calories/serving $\times$ 5 servings $= 1250$ calories.
+3. You have already consumed 1800 calories, so you have $2000 - 1800 = 200$ calories left for the day.
+4. Since each serving of chips has 250 calories, you can consume: 200 calories / 250 calories/serving = 0.8 servings.
+5. Since there are 5 servings in the bag, you can eat: $0.8\mathrm{servings} \times 300\mathrm{g}/5$ servings $= 0.8 \times 60\mathrm{g} = 48\mathrm{g}$ .
+
+So, you can eat up to $\boxed{48g}$ of the bag of chips.
+
+Enjoy your snacking, but don't overdo it!
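The key difference between the two generations is step 4/5: $\pi^{\mathrm{ref}}$ multiplies servings by the whole bag's weight, while $Q^{\sharp}$ uses the per-serving weight. A quick check of ours:

```python
# Chips (GSM8K): 250 cal/serving, 5 servings in a 300 g bag,
# 2000 - 1800 = 200 calories remaining in the daily budget.
grams_per_serving = 300 / 5              # 60 g per serving
servings_allowed = (2000 - 1800) / 250   # 0.8 servings
grams = servings_allowed * grams_per_serving
print(grams)  # 48.0
```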
+
+Problem from GSM8K: Nate's dog can dig six holes a day. He digs for 14 days while Nate is on vacation. When Nate gets home, he starts filling in 9 holes a day, but the dog keeps digging 6 new holes every night. How many weeks does it take him to fill in all the holes?
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : Nate's dog digs 6 holes a day for 14 days, so the total number of holes dug is $6 \times 14 = 84$ . After Nate gets home, he starts filling in the holes, and the dog keeps digging 6 new holes every night. So the number of holes to fill is always $84 + 6$ . Nate fills 9 holes a day, so the number of days it takes to fill all the holes is:
+
+$$
+\boxed{\left\lceil \frac{84 + 6}{9} \right\rceil = 11}
+$$
+
+Correct Solution from $Q^{\sharp}$ : The dog digs 6 holes a day, and Nate is away for 14 days, so the dog digs $6 * 14 = 84$ holes. Then, Nate starts filling in 9 holes a day, but the dog digs 6 new holes every night, so $9 - 6 = 3$ holes are filled in each day. So, each day 3 more holes are filled in. The number of holes filled in after x days is $84 - 3x$ . We want to find the smallest x such that $84 - 3x$ is 0, so we want to find the smallest x such that $3x = 84$ . $3x = 84$ is true when $x = 28$ , so it takes 28 days to fill in all the holes. Since there are 7 days in a week, it takes $28 / 7 = \boxed{4}$ weeks to fill in all the holes.
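The net-rate arithmetic in the $Q^{\sharp}$ solution can be reproduced directly (our verification sketch, not model output):

```python
# Nate (GSM8K): the dog digs 6 holes/day for 14 days; afterwards Nate
# fills 9 holes/day while the dog digs 6 back each night, a net of 3/day.
holes = 6 * 14          # 84 holes to fill
net_per_day = 9 - 6     # 3 holes filled per day, net
days = holes // net_per_day
weeks = days // 7
print(days, weeks)  # 28 4
```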
+
+Prealgebra Problem from MATH: A bookcase has 3 shelves with a total of 24 books. The top shelf has 8 mystery books. The middle shelf has 10 math books. The bottom shelf has 6 science books. Two books are now taken off each shelf. What fraction of the books remaining on the three shelves are math books? Express your answer as a common fraction.
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : To find the fraction of the books remaining on the three shelves that are math books, we first need to find the total number of books remaining on the three shelves.
+
+Initially, the bookcase had 24 books. Two books are taken off each shelf. Since there are 3 shelves, a total of 6 books are taken off.
+
+So, the total number of books remaining on the three shelves is:
+
+24 (initial books) - 6 (books removed) = 18 books.
+
+The middle shelf had 10 math books. Since no math books were removed, the number of math books remaining is still 10.
+
+Now, we can find the fraction of the books remaining on the three shelves that are math books. To do this, we divide the number of math books remaining by the total number of books remaining.
+
+Fraction of math books = Number of math books / Total number of books = 10 / 18
+
+We can simplify this fraction by dividing both the numerator and denominator by their greatest common divisor, which is 2.
+
+Simplified fraction $= 5 / 9$
+
+So, the fraction of the books remaining on the three shelves that are math books is: $\boxed{5 / 9}$
+
+Correct Solution from $Q^{\sharp}$ : To find the fraction of the remaining books that are math books, we first need to find the total number of books remaining on the shelves after two books are taken off each shelf.
+
+Top shelf initially had 8 books. After removing 2 books, it has $8 - 2 = 6$ books.
+
+Middle shelf initially had 10 books. After removing 2 books, it has $10 - 2 = 8$ books.
+
+Bottom shelf initially had 6 books. After removing 2 books, it has $6 - 2 = 4$ books.
+
+So the total number of books remaining is $6 + 8 + 4 = 18$ books.
+
+Initially, there were 10 math books. Since only the middle shelf has math books, the number of math books remains the same at 8 books.
+
+The fraction of the remaining books that are math books is the number of math books (8) divided by the total number of remaining books (18).
+
+Fraction $= 8 / 18 = 4 / 9$
+
+So the answer is: $\frac{4}{9}$ .
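The per-shelf bookkeeping that separates the two generations can be checked with a few lines of Python (ours, for verification only):

```python
from fractions import Fraction

# Bookcase (MATH): 8 mystery, 10 math, 6 science; 2 removed per shelf.
shelves = {"mystery": 8, "math": 10, "science": 6}
remaining = {k: v - 2 for k, v in shelves.items()}
frac = Fraction(remaining["math"], sum(remaining.values()))
print(frac)  # 4/9
```

The math shelf ends with 8 books, not the 10 that $\pi^{\mathrm{ref}}$ assumes, which is exactly where its $5/9$ goes wrong.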
+
+Intermediate Algebra Problem from MATH: The asymptotes of a hyperbola are $y = 2x - 3$ and $y = 17 - 2x$ . Also, the hyperbola passes through the point (4, 7). Find the distance between the foci of the hyperbola.
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : To find the distance between the foci of the hyperbola, we first need to determine the center and the standard form of the hyperbola. The asymptotes are given by $y = 2x - 3$ and $y = 17 - 2x$ . Their point of intersection will be the center of the hyperbola. To find the intersection point, we set the two equations equal to each other: $2x - 3 = 17 - 2x$ , $4x = 20$ , $x = 5$ . Then substitute the value of $x$ into one of the asymptote equations to find the corresponding $y$ value: $y = 2(5) - 3 = 10 - 3 = 7$ . So, the center of the hyperbola is (5,7). Next, we need to find the standard form of the hyperbola. The slopes of the asymptotes are $\pm 2$ , so the standard form of the hyperbola is given by:
+
+$$
+\frac{(y - 7)^2}{a^2} - \frac{(x - 5)^2}{b^2} = 1
+$$
+
+Since the slopes of the asymptotes are $\pm 2$ , we know that $b / a = 2$ . We can use the point $(4,7)$ , which the hyperbola passes through, to find the values of $a$ and $b$ .
+
+$$
+\begin{array}{l} \frac{(7 - 7)^2}{a^2} - \frac{(4 - 5)^2}{b^2} = 1 \\ \frac{(-1)^2}{b^2} = 1 \\ \frac{1}{b^2} = 1 \\ b^2 = 1 \\ b = 1 \end{array}
+$$
+
+Since $b / a = 2$ , we have $a = b / 2 = 1 / 2$ . Now we can find the values of $a^2$ and $b^2$ .
+
+$$
+\begin{array}{l} a^2 = (1/2)^2 = 1/4 \\ b^2 = 1^2 = 1 \end{array}
+$$
+
+Now that we have the values of $a^2$ and $b^2$ , we can find the distance between the foci of the hyperbola using the equation:
+
+$$
+\begin{array}{l} c^2 = a^2 + b^2 \\ c^2 = 1/4 + 1 \\ c^2 = 5/4 \\ c = \sqrt{5/4} = \sqrt{5}/2 \end{array}
+$$
+
+The distance between the foci of the hyperbola is $2c = 2\sqrt{5}/2 = \sqrt{5}$ . Therefore, the distance between the foci of the hyperbola is $\boxed{\sqrt{5}}$ .
+
+Correct Solution from $Q^{\sharp}$ : To find the equation of the hyperbola, we first need to find the center and the standard form of the hyperbola. The center is the point of intersection of the two asymptotes. To find it, we can set the two equations equal to each other and solve for $x$ : $2x - 3 = 17 - 2x$ . Combine like terms: $4x = 20$ . Divide by 4: $x = 5$ . Now substitute this value of $x$ into one of the asymptote equations to find $y$ : $y = 2x - 3 = 2(5) - 3 = 7$ . So, the center of the hyperbola is at (5,7). Since the slopes of the asymptotes are 2 and -2, the standard form of the hyperbola is: $\frac{(y - k)^2}{a^2} - \frac{(x - h)^2}{b^2} = 1$ where $(h,k)$ is the center of the hyperbola. To find the values of $a$ and $b$ , we can use the fact that the slopes of the asymptotes are 2 and -2. The slopes of the asymptotes are given by $\pm \frac{a}{b}$ . Therefore: $\frac{a}{b} = 2$ . We also know that the point (4,7) is on the hyperbola. We can substitute this point into the equation to get: $\frac{(7 - 7)^2}{a^2} - \frac{(4 - 5)^2}{b^2} = 1$ . This simplifies to: $\frac{1}{b^2} = 1$ . Now we can substitute this value of $b^2$ into the equation $\frac{a}{b} = 2$ to get: $\frac{a}{1} = 2$ . Now we can substitute the values of $a$ and $b$ into the standard form of the hyperbola to get: $\frac{(y - 7)^2}{4} - \frac{(x - 5)^2}{1} = 1$ . The distance between the foci of a hyperbola is given by $2c$ , where $c$ is the distance from the center to each focus. The relationship between $a$ , $b$ , and $c$ is given by: $c^2 = a^2 + b^2$ . So, the distance between the foci of the hyperbola is $2\sqrt{5}$ .
+
+Algebra Problem from MATH: Simplify the following expression to a simplified fraction:
+
+$$
+\sqrt{\frac{\frac{5}{\sqrt{80}} + \frac{\sqrt{845}}{9} + \sqrt{45}}{\sqrt{5}}}.
+$$
+
+Incorrect Solution from $\pi^{\mathrm{ref}}$ : To simplify the expression $\sqrt{\frac{\frac{5}{\sqrt{80}} + \frac{\sqrt{845}}{9} + \sqrt{45}}{\sqrt{5}}}$ , we need to follow the order of operations. First, we simplify the terms inside the square root:
+
+$$
+\frac{5}{\sqrt{80}} = \frac{5}{\sqrt{16 \times 5}} = \frac{5}{4\sqrt{5}} = \frac{5}{4\sqrt{5}} \times \frac{\sqrt{5}}{\sqrt{5}} = \frac{5\sqrt{5}}{4 \times 5} = \frac{\sqrt{5}}{4}
+$$
+
+$$
+\sqrt{845} = \sqrt{13 \times 65} = \sqrt{13 \times 13 \times 5} = 13\sqrt{5}
+$$
+
+$$
+\sqrt{45} = \sqrt{9 \times 5} = \sqrt{9} \times \sqrt{5} = 3\sqrt{5}
+$$
+
+Now, we can rewrite the expression as: $\sqrt{\frac{\frac{\sqrt{5}}{4} + \frac{13\sqrt{5}}{9} + 3\sqrt{5}}{\sqrt{5}}}$ . We can simplify the numerator by finding a common denominator: $\frac{\sqrt{5}}{4} + \frac{13\sqrt{5}}{9} + 3\sqrt{5} = \frac{9\sqrt{5}}{36} + \frac{52\sqrt{5}}{36} + \frac{108\sqrt{5}}{36} = \frac{169\sqrt{5}}{36}$ . Now, we can rewrite the expression as: $\sqrt{\frac{\frac{169\sqrt{5}}{36}}{\sqrt{5}}}$ . To simplify the expression, we can multiply the numerator and denominator by $\sqrt{5}$ : $\frac{\frac{169\sqrt{5}}{36}}{\sqrt{5}} = \frac{169\sqrt{5}}{36} \times \frac{\sqrt{5}}{\sqrt{5}} = \frac{169 \times 5}{36} = \frac{845}{36}$ . Now, we can rewrite the expression as: $\sqrt{\frac{845}{36}}$ . We can simplify the square root by finding the square root of the numerator and denominator: $\sqrt{\frac{845}{36}} = \sqrt{\frac{5 \times 13^2}{6^2}} = \frac{\sqrt{5} \times 13}{\sqrt{6^2}} = \frac{\sqrt{5} \times 13}{6}$ So, the simplified fraction is $\boxed{\frac{13\sqrt{5}}{6}}$ .
+
+Correct Solution from $Q^{\sharp}$ : To simplify the given expression, we first simplify the terms within the square root in the numerator.
+
+$$
+\frac{5}{\sqrt{80}} = \frac{5}{\sqrt{16 \cdot 5}} = \frac{5}{4\sqrt{5}} = \frac{\sqrt{5}}{4}
+$$
+
+$$
+\sqrt{845} = \sqrt{13 \cdot 65} = \sqrt{13 \cdot 13 \cdot 5} = 13\sqrt{5}
+$$
+
+$$
+\sqrt{45} = \sqrt{9 \cdot 5} = 3\sqrt{5}
+$$
+
+Now, we substitute these simplified expressions back into the original expression:
+
+$$
+\sqrt{\frac{\frac{5}{\sqrt{80}} + \frac{\sqrt{845}}{9} + \sqrt{45}}{\sqrt{5}}} = \sqrt{\frac{\frac{\sqrt{5}}{4} + \frac{13\sqrt{5}}{9} + 3\sqrt{5}}{\sqrt{5}}}
+$$
+
+We can simplify the expression further by combining like terms in the numerator:
+
+$$
+\frac{\frac{\sqrt{5}}{4} + \frac{13\sqrt{5}}{9} + 3\sqrt{5}}{\sqrt{5}} = \frac{\frac{9\sqrt{5} + 52\sqrt{5} + 108\sqrt{5}}{36}}{\sqrt{5}} = \frac{\frac{169\sqrt{5}}{36}}{\sqrt{5}} = \frac{169}{36}
+$$
+
+Finally, we take the square root of the fraction:
+
+$$
+\sqrt{\frac{169}{36}} = \frac{\sqrt{169}}{\sqrt{36}} = \frac{13}{6}
+$$
+
+The answer is $\boxed{\frac{13}{6}}$
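The final value can be confirmed numerically (our check, not part of either generation): the $Q^{\sharp}$ answer $13/6$ matches the original expression, while the $\pi^{\mathrm{ref}}$ answer $13\sqrt{5}/6$ does not.

```python
from math import isclose, sqrt

# Evaluate the original nested radical numerically.
value = sqrt((5 / sqrt(80) + sqrt(845) / 9 + sqrt(45)) / sqrt(5))
print(isclose(value, 13 / 6))               # True
print(isclose(value, 13 * sqrt(5) / 6))     # False
```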
\ No newline at end of file
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/images.zip b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8536f7d13de01033dab50c678f92ba69b002858c
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:793cf038d35c7485677a3c07b4ab90392656accf8b842c33dd1aa389b3355eb0
+size 828860
diff --git a/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/layout.json b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..dd0a21833a528c2350d7ddac370cda2d6840408f
--- /dev/null
+++ b/NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8a330036bfbb5933e02b68e0244886bbaaf4f825d3c5cde0436fd067442bdeb
+size 1653987
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_content_list.json b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..504cebb3b539d11ab2102ef3cdd33b6c78f32aac
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ba0191b6b0050207873b9ebfa605a54f5a0e503a50cec6d0210f9e9d6f9c77c
+size 230855
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_model.json b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f73e27e863935faa5b66d0a674c11643a392db52
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bed83b2a9a4711dec7c998a22faa6e397a625624aecb6a1deffaa71d13347bf7
+size 288228
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_origin.pdf b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2f81d189c4e828c7ee9674e63e82020c4a133826
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f46f45036ace6bf2ad7aed3db04f470b5ce200cf69151cbb5e7124285b65ca7e
+size 21083094
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/full.md b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3c5e5061573fe0bf97863d14f363ef45dc77c05f
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/full.md
@@ -0,0 +1,963 @@
+# $\Delta$ Energy: Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization
+
+Lin Zhu $^{1}$ , Yifeng Yang $^{1}$ , Xinbing Wang $^{1}$ , Qinying Gu $^{2}$ , Nanyang Ye $^{1}$ $^{1}$ Shanghai Jiao Tong University, $^{2}$ Shanghai Artificial Intelligence Laboratory {zhulin_sjtu, xwang8, ynylincoln}@sjtu.edu.cn, maxwellquadyang@gmail.com guqinying@pjlab.org.cn
+
+# Abstract
+
+Recent approaches for vision-language models (VLMs) have shown remarkable success in achieving fast downstream adaptation. When applied to real-world downstream tasks, VLMs inevitably encounter both the in-distribution (ID) data and out-of-distribution (OOD) data. The OOD datasets often include both covariate shifts (e.g., known classes with changes in image styles) and semantic shifts (e.g., test-time unseen classes). This highlights the importance of improving VLMs' generalization ability to covariate-shifted OOD data, while effectively detecting open-set semantic-shifted OOD classes. In this paper, inspired by the substantial energy change observed in closed-set data when re-aligning vision-language modalities—specifically by directly reducing the maximum cosine similarity to a low value—we introduce a novel OOD score, named $\Delta$ Energy. $\Delta$ Energy significantly outperforms the vanilla energy-based OOD score and provides a more reliable approach for OOD detection. Furthermore, $\Delta$ Energy can simultaneously improve OOD generalization under covariate shifts, which is achieved by lower-bound maximization for $\Delta$ Energy (termed EBM). EBM is theoretically proven to not only enhance OOD detection but also yields a domain-consistent Hessian, which serves as a strong indicator for OOD generalization. Based on this finding, we developed a unified fine-tuning framework that allows for improving VLMs' robustness in both OOD generalization and OOD detection. Extensive experiments on challenging OOD detection and generalization benchmarks demonstrate the superiority of our method, outperforming recent approaches by $10\% - 25\%$ in AUROC.
+
+# 1 Introduction
+
+Recent advances in pre-trained vision-language models (VLMs), such as CLIP (Radford et al., 2021), VLMo (Bao et al., 2022), MiniGPT-4 (Zhu et al., 2023a), etc., have shown promising results in visual-semantic learning. However, downstream use cases often involve further fine-tuning of VLMs. When applied to real-world downstream tasks, VLMs inevitably face challenges related to out-of-distribution (OOD) data, stemming from differences in data distributions between the training and test sets (Meinshausen and Bühlmann, 2015; Koh et al., 2020). As illustrated in Figure 1, these OOD datasets often involve closed-set OOD data that exhibit covariate shifts (i.e., changes in environments, while class labels remain the same as the in-distribution data), as well as open-set OOD data with semantic shifts (i.e., test-time new categories that were unseen during fine-tuning). It is crucial to distinguish these unknown categories from known ones, rather than blindly predicting them as known
+
+(A) Illustration of $\Delta$ Energy for OOD Detection
+
+
+(B) Illustration of $\Delta$ Energy for OOD Generalization
+
+
+
+Figure 1: (A) Illustration of $\Delta$ Energy for OOD detection. Significant differences in $\Delta$ Energy are observed between closed-set data and open-set OOD data when the maximum cosine similarity is cropped to zero. (B) Illustration of the $\Delta$ Energy for OOD generalization. We introduce the EBM method to achieve domain-consistent Hessians, which simultaneously triggers bound optimization for $\Delta$ Energy. More details are in Section 3.2. (C) Comparison between our $\Delta$ Energy and EBM with state-of-the-art methods. In the radar plots, all values are normalized to the range [0, 1]. It is observed that recent methods aimed at improving VLMs' OOD detection may not scale well to handling different types of distribution shifts in challenging ImageNet-1k OOD datasets.
+
+classes (Wang et al., 2023b,a). Therefore, it is essential to develop robust models that enhance VLMs' generalization ability to closed-set OOD data, while also effectively detecting open-set OOD classes during fine-tuning.
+
+However, most previous studies (Wortsman et al., 2022; Chen et al., 2024; Jiang et al., 2023; Goyal et al., 2023; Wang et al., 2023a; Ming et al., 2022a; Bai et al., 2024; Li et al., 2024a) have primarily focused on improving VLMs' robustness to training classes or developing OOD detection methods for unseen classes independently. Consequently, existing approaches are often highly specialized for a single task and are not capable of simultaneously addressing both aspects. Recent works (Yang et al., 2023; Zhang et al., 2023a) have taken both shift types into account and introduced full-spectrum OOD (FS-OOD) detection, which considers both detecting semantic shifts and being tolerant to covariate shifts. While the FS-OOD benchmark evaluates OOD detection performance across various distribution types, it may not focus on improving VLMs' classification accuracy on covariate-shifted data. Several studies (Lafon et al., 2024; Zhu et al., 2024, 2025a) have also attempted to tackle this issue using multiple diverse prompts or through energy optimization techniques. However, these approaches (Lafon et al., 2024; Zhu et al., 2024) often require significantly more computational resources to train additional local prompts or have been evaluated on a narrow set of post-hoc functions for OOD detection. Thus, when fine-tuning VLMs for downstream tasks, the challenge of improving the VLMs' generalization ability to closed-set OOD data while simultaneously detecting open-set OOD classes that were unseen during fine-tuning remains largely underexplored.
+
+In this paper, we develop novel zero-shot and few-shot fine-tuning paradigms to go beyond the limitations of previous studies. We begin by proposing a new post-hoc OOD detection method, inspired by the following heuristic observation: Given the text prompts corresponding to ID data and input images, we compute the cosine similarities between the image features and text features.
+
+As shown in Figure 1 (A), when we crop the maximum cosine similarity to a low value (such as by resetting to zero), the resulting change in energy score (Liu et al., 2020) is substantially different between closed-set data and open-set semantic-shifted classes.
+
+# Takeaways for $\Delta$ Energy when aligning vision language modalities
+
+When re-aligning vision-language modalities by setting the maximum cosine similarity to zero, we define the resulting change in energy score as $\Delta$ Energy. As demonstrated in Theorem 3.2, the $\Delta$ Energy for ID data is consistently larger than that for OOD data, indicating that it provides a discriminative and effective method for OOD detection. Meanwhile, compared to the MCM method and raw energy scores, $\Delta$ Energy amplifies the difference between ID and OOD data—a property supported by both theoretical analysis and empirical evidence. Extensive experiments further demonstrate that our method outperforms state-of-the-art zero-shot OOD detection approaches on hard OOD detection benchmarks.
+
+Building on this insight, we propose leveraging the energy change to distinguish closed-set classes from open-set OOD classes. We introduce a zero-shot OOD detection method, termed $\Delta$ Energy, which quantifies the energy change resulting from modifying vision-language alignment (i.e., the cosine similarities). As demonstrated in Section 4, $\Delta$ Energy significantly outperforms recent methods in detecting hard OOD classes, providing a more reliable approach for OOD detection.
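A minimal numerical sketch of the score (our illustration, assuming the energy score $E = -T\log\sum_j e^{s_j/T}$ of Liu et al. (2020) with $T = 1$; the paper's temperature, similarity scaling, and exact cropping rule may differ):

```python
import numpy as np

def energy(sims, T=1.0):
    # Energy score over cosine similarities: E = -T * log sum_j exp(s_j / T).
    return -T * np.log(np.sum(np.exp(np.asarray(sims) / T)))

def delta_energy(sims, T=1.0):
    # Crop the maximum similarity to zero and measure the change E1 - E0.
    cropped = np.asarray(sims, dtype=float).copy()
    cropped[np.argmax(cropped)] = 0.0
    return energy(cropped, T) - energy(sims, T)

# ID-like inputs have one dominant similarity, so cropping it shifts the
# energy a lot; OOD-like inputs have flat similarities and shift little.
print(delta_energy([0.9, 0.1, 0.1]) > delta_energy([0.3, 0.3, 0.3]))  # True
```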
+
+Moreover, $\Delta$ Energy can be further optimized to enhance OOD detection while simultaneously improving OOD generalization. This is achieved through $\Delta$ Energy-based bound maximization (termed EBM) during few-shot adaptation of VLMs. As depicted in Figure 1 (B), we modify the vision-language alignment by retaining the $p\%$ of the image feature elements (with $p$ as a hyperparameter) and masking the remaining elements. The resulting masked features are then used to compute a new energy change between the original and masked models, which we refer to as EBM. As demonstrated in Theorem 3.4, minimizing EBM is theoretically shown to maximize the lower bound of $\Delta$ Energy. Moreover, the EBM method not only theoretically enhances the discrimination between closed-set known classes and open-set OOD classes based on the newly introduced OOD score (See Theorem 3.4), but also leads to stronger OOD generalization under covariate shifts (See Theorem 3.5). This allows us to fine-tune VLMs in a unified framework, enhancing both OOD generalization for closed-set OOD data and OOD detection for open-set OOD data.
+
+# 2 Preliminary
+
+In this section, we first provide the data setting in Notation 2.1 and formally define the target tasks. Based on the widely-used vision-language model CLIP (Radford et al., 2021), we then present the motivation for modifying the vision-language alignment through a masking operation.
+
+Notation 2.1. Given the in-distribution (ID) samples from the downstream task, $\{\mathbf{x_i},\mathbf{y_i}\}_{i = 1}^N$ , we define the classes of these samples as closed-set classes, while the other classes are considered as open-set OOD classes. The text prompts for the closed-set classes are defined as $\mathcal{T}_{\mathrm{in}} = \{t_1,t_2,\dots ,t_K\}$ , where $K$ represents the number of closed-set classes. Each text prompt $t_i$ can be formulated as "a photo of a {CLASS NAME}" . Based on a pre-trained VLM, we can obtain the zero-shot image features and text features, denoted as $\{\mathbf{z_I}(\mathbf{x_i})\}_{i = 1}^N$ and $\{\mathbf{z_T}(t_i)\}_{i = 1}^K$ , respectively. Both $\mathbf{z_I}(\mathbf{x_i})$ and $\mathbf{z_T}(t_i)$ are $D$ -dimensional features.
+
+Task definition Given a set of closed-set ID samples $\{\mathbf{x_i},\mathbf{y_i}\}_{i = 1}^N$ , drawn from a source domain $S$ , the model is tasked with learning a robust predictor $f:\mathcal{X}\rightarrow \mathcal{Y}$ , which maps inputs $\mathbf{x}\in \mathcal{X} = \mathbb{R}^{D_0}$ to outputs $\mathbf{y}\in \mathcal{Y} = \mathbb{R}^K$ , where $D_0$ is the dimension of $\mathbf{x}$ and $K$ is the class number. Here, $N$ is the total number of few-shot ID samples. To effectively address both closed-set OOD data (covariate shifts) and open-set OOD data (semantic shifts), we aim to enhance the robustness of predictor $f$ from two perspectives: 1) OOD generalization, which requires the model to generalize on closed-set classes from new domains $\mathcal{T}$ that exhibit covariate shifts; and 2) OOD detection, which enables the model to detect open-set OOD classes during test time.
+
+Effect of vision-language re-alignment through masking We illustrate how vision-language re-alignment is achieved through a specific masking strategy during fine-tuning. Given an image input $\mathbf{x}_i$ and the text prompt $t$ that yields the maximum cosine similarity, we denote the corresponding
+
+
+Figure 2: Overview of the proposed method. Based on the prompt-tuning approach, we freeze both the image encoder and the text encoder, making only the context vectors $(\theta = [\theta_{1}, \dots, \theta_{n}])$ learnable under the proposed objective function, as shown in Equation 8. During fine-tuning, we apply a masking operation to each ID image feature based on the top-1 similarity, as defined in Equation 6. We then compute the resulting energy change after modifying the vision-language alignment via masking, which allows us to perform bound optimization on $\Delta \mathrm{Energy}$ . In the inference phase, following Equation 1, we reset the top- $c$ cosine similarities and then compute $\Delta \mathrm{Energy}$ for OOD detection. Simultaneously, we use the fine-tuned text feature and unmasked image feature for classification at test time. The complete algorithm can be seen in Appendix G.
+
+zero-shot image feature and text feature as $\mathbf{z}_{\mathbf{I}}(\mathbf{x_i})$ and $\mathbf{z_T}(t)$ , respectively. We then compute their element-wise product, represented as $\mathbf{z_P}(\mathbf{x_i}) \coloneqq \mathbf{z_I}(\mathbf{x_i}) \odot \mathbf{z_T}(t)$ . Let $I_j, T_j$ , and $P_j$ denote the $j$ -th element of $\mathbf{z_I}(\mathbf{x_i})$ , $\mathbf{z_T}(t)$ , and $\mathbf{z_P}(\mathbf{x_i})$ , respectively, such that $I_j \cdot T_j = P_j$ . Based on the product vector $\mathbf{z_P}(\mathbf{x_i})$ , we mask (zero-out) elements in $\mathbf{z_I}(\mathbf{x_i})$ where $P_j > 0$ . This masking operation thus effectively reduces the maximum cosine similarity to a low value, achieving modified vision-language alignment. Moreover, from the attention visualization in Figure 1 (B), the pre-trained VLM initially focuses on the foreground object. However, after masking the elements of the image feature where $P_j > 0$ , the model's attention shifts and becomes more reliant on background information. In contrast, masking the elements where $P_j < 0$ preserves the model's original attention, which motivates us to leverage this consistency between the original and masked domains to improve OOD generalization. Additional visualizations are provided in Figure 3 in Appendix G.
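As a concrete sketch, the masking step described above can be written in a few lines of numpy (the toy features and function names here are ours, not the paper's implementation; features are assumed unit-normalized so that dot products equal cosine similarities):

```python
import numpy as np

def mask_positive_alignment(z_img, z_txt):
    """Zero out the image-feature elements whose element-wise product with
    the top-1 text feature is positive (P_j > 0), i.e. the elements that
    contribute positively to the cosine similarity."""
    prod = z_img * z_txt                      # z_P = z_I (element-wise) z_T
    return np.where(prod > 0, 0.0, z_img)     # mask where P_j > 0

# Toy example: masking removes every positive contribution, so the
# dot product (cosine similarity for normalized features) can only drop.
z_img = np.array([0.6, -0.2, 0.5, -0.3])
z_txt = np.array([0.5, 0.4, -0.1, -0.7])
masked = mask_positive_alignment(z_img, z_txt)
```

Conversely, masking where $P_j < 0$ would only remove negative contributions, leaving the alignment (and, per Figure 1 (B), the model's attention) largely intact.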
+
+# 3 Methodology
+
+Building upon the heuristic observations in Figure 1, we propose a novel OOD score, named $\Delta$ Energy, which measures the energy change when re-aligning vision-language modalities. We theoretically demonstrate that $\Delta$ Energy outperforms the widely-used MCM score (Ming et al., 2022a) (see Theorems 3.2-3.3). Moreover, we introduce a $\Delta$ Energy-based bound maximization, which is proven to not only enhance OOD detection (see Theorem 3.4) but also lead to stronger OOD generalization (see Theorem 3.5). Before delving into the details, we introduce the following notation:
+
+Notation 3.1. We define the cosine similarity${}^{2}$ between the image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and the text feature $\mathbf{z}_{\mathbf{T}}(t_{j})$ as $s_{j}(\mathbf{x}_{\mathbf{i}}) = \mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})\cdot \mathbf{z}_{\mathbf{T}}(t_{j})$ . Let $\hat{y}_1 \coloneqq \operatorname{argmax}_{k\in [K]}s_k(\mathbf{x}_{\mathbf{i}})$ denote the index of the maximum cosine similarity and $\hat{y}_j \coloneqq \operatorname{argmax}_{k\in [K]\setminus \{\hat{y}_1,\dots ,\hat{y}_{j - 1}\}}s_k(\mathbf{x}_{\mathbf{i}})$ denote the index of the $j$ -th largest cosine similarity. We denote the text feature with the $j$ -th largest similarity to $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ as $\mathbf{h_j}(\mathbf{x_i}) \coloneqq \mathbf{z_T}(\hat{t}_j (\mathbf{x_i}))$ , where $\hat{t}_j(\mathbf{x_i})$ is the text prompt corresponding to the $j$ -th largest cosine similarity.
+
+# 3.1 $\Delta$ Energy for OOD detection
+
+The proposed $\Delta$ Energy, which measures the energy change after modifying the top- $c$ maximum cosine similarities${}^{3}$, unfolds as follows:
+
+- Based on a pre-trained VLM, for each image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x_i})$ , we first select the text feature sets $\{\mathbf{h_j}(\mathbf{x_i})\}_{j = 1}^c$ that have the top $c$ similarity with $\mathbf{z_I}(\mathbf{x_i})$ .
+- We then compute the product between each image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and the selected text feature $\mathbf{h}_{\mathbf{j}}(\mathbf{x}_{\mathbf{i}})$ . The product feature is represented as $\mathbf{z}_{\mathbf{P}}(\mathbf{x}_{\mathbf{i}}, \hat{t}_j) = \mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) \odot \mathbf{h}_{\mathbf{j}}(\mathbf{x}_{\mathbf{i}})$ .
+- For each text feature $\mathbf{h_j}(\mathbf{x_i})$ ( $j\in \{1,\dots ,c\}$ ), we denote the $j$ -th largest cosine similarity between the image feature and the text feature as $s_{\hat{y}_j}(\mathbf{x_i}) = \mathbf{z_I}(\mathbf{x_i})\cdot \mathbf{h_j}(\mathbf{x_i})$ . Let $\tilde{s}_{\hat{y}_j}(\mathbf{x_i})$ denote the new cosine similarity after re-alignment, which is achieved by:
+
+$$
+\tilde {s} _ {\hat {y} _ {j}} \left(\mathbf {x} _ {\mathbf {i}}\right) = 0 \tag {1}
+$$
+
+- Finally, we can compute the new OOD score as: $\Delta \mathrm{Energy}(\mathbf{x_i}) = E_1(\mathbf{x_i}) - E_0(\mathbf{x_i})$ . Based on the scaling temperature $\tau$ , $E_0(\mathbf{x_i})$ is the energy score before the re-alignment:
+
+$$
+E _ {0} \left(\mathbf {x} _ {\mathrm {i}}\right) = - \log \sum_ {j = 1} ^ {K} e ^ {s _ {j} \left(\mathbf {x} _ {\mathrm {i}}\right) / \tau} \tag {2}
+$$
+
+$E_{1}(\mathbf{x}_{\mathrm{i}})$ is the energy score after the re-alignment:
+
+$$
+E _ {1} \left(\mathbf {x} _ {\mathbf {i}}\right) = - \frac {1}{c} \sum_ {j = 1} ^ {c} \log \left[ e ^ {\tilde {s} _ {\hat {y} _ {j}} \left(\mathbf {x} _ {\mathbf {i}}\right) / \tau} + \sum_ {p \neq \hat {y} _ {j}} e ^ {s _ {p} \left(\mathbf {x} _ {\mathbf {i}}\right) / \tau} \right] \tag {3}
+$$
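Putting Equations 1-3 together, the score for a single image can be sketched as follows (a numpy illustration with toy similarity values; variable names are ours and the paper's implementation may differ):

```python
import numpy as np

def delta_energy(sims, c=2, tau=0.01):
    """Delta Energy = E1 - E0 for one image, given its cosine similarities
    sims[j] = s_j(x_i) to the K class text features."""
    sims = np.asarray(sims, dtype=float)
    e0 = -np.log(np.sum(np.exp(sims / tau)))           # Eq. 2
    top_c = np.argsort(sims)[::-1][:c]                 # indices of top-c sims
    terms = []
    for j in top_c:
        s = sims.copy()
        s[j] = 0.0                                     # Eq. 1: reset to 0
        terms.append(-np.log(np.sum(np.exp(s / tau))))
    e1 = float(np.mean(terms))                         # Eq. 3
    return e1 - e0

# A peaked, "ID-like" similarity profile loses more energy when its top
# similarity is reset than a flat, "OOD-like" profile does.
score_id = delta_energy([0.35, 0.20, 0.10])
score_ood = delta_energy([0.22, 0.20, 0.10])
```

With these toy numbers, the peaked profile yields a noticeably larger $\Delta$Energy than the flat one, consistent with the separation property in Theorem 3.2 below.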
+
+We provide formal guarantees that the proposed $\Delta$ Energy surpasses the widely-used VLM-based OOD detection method MCM (Ming et al., 2022a).
+
+Theorem 3.2. [OOD Detection Ability of $\Delta$ Energy] Suppose that the maximum cosine similarity for an ID sample $\mathbf{x}_{ID}$ is greater than that of an open-set OOD sample $\mathbf{x}_{OOD}$ , i.e., $s_{\hat{y}_1}(\mathbf{x}_{ID}) > s_{\hat{y}_1}(\mathbf{x}_{OOD})$ . Let $S_{\mathrm{Method}}(\mathbf{x})$ denote the score assigned to sample $\mathbf{x}$ under a given method. We have the following properties: 1) $S_{\Delta \mathrm{Energy}}(\mathbf{x}_{ID}) > S_{\Delta \mathrm{Energy}}(\mathbf{x}_{OOD})$ for ID $(\mathbf{x}_{ID})$ and open-set OOD $(\mathbf{x}_{OOD})$ samples. 2) Compared to the MCM method, $\Delta$ Energy amplifies the difference between ID and OOD data, i.e., $d_{\Delta \mathrm{Energy}} > d_{MCM}$ , where $d_{\mathrm{Method}} = S_{\mathrm{Method}}(\mathbf{x}_{ID}) - S_{\mathrm{Method}}(\mathbf{x}_{OOD})$ .
+
+Theorem 3.3. [The proposed OOD Score $\Delta$ Energy gets lower FPR than MCM] Given a task with closed-set ID label set $\mathcal{Y}_{\mathrm{in}} = \{y_1, y_2, \dots, y_K\}$ and a pre-trained VLM, for any test input $\mathbf{x}'$ , based on the scaling temperature $\tau$ , the maximum concept matching (MCM) score is computed as follows:
+
+$$
+S _ {\mathbf {M C M}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {i n}}) = \max _ {i} \frac {e ^ {s _ {i} (\mathbf {x} ^ {\prime}) / \tau}}{\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} ^ {\prime}) / \tau}}.
+$$
+
+For any $c\in \{1,2,\dots ,K\}$ , if $s_{\hat{y}_1}(\mathbf{x}')\leq \tau \ln 2$ , we have
+
+$$
+\mathrm {F P R} ^ {\Delta \mathrm {E n e r g y}} (\tau , \lambda) \leq \mathrm {F P R} ^ {\mathrm {M C M}} (\tau , \lambda),
+$$
+
+where $\mathrm{FPR}^{\Delta \mathrm{Energy}}(\tau, \lambda)$ and $\mathrm{FPR}^{\mathrm{MCM}}(\tau, \lambda)$ are the false positive rates of $\Delta \mathrm{Energy}$ and MCM, respectively, given the temperature $\tau$ and detection threshold $\lambda$ .
+
+# 3.2 The $\Delta$ Energy-based bound maximization enhances OOD detection and generalization
+
+Furthermore, we introduce a $\Delta$ Energy-based bound maximization (EBM) loss during the fine-tuning process (Equation 4), which aims at increasing the lower bound of the $\Delta$ Energy score for closed-set classes, as demonstrated in Theorem 3.4. As shown in Theorem 3.5, the proposed objective is theoretically proven to not only improve OOD detection but also lead to a domain-consistent Hessian, which serves as a strong indicator of OOD generalization.
+
+Specifically, motivated by further enlarging $\Delta$ Energy, we propose to minimize the following term:
+
+$$
+\mathcal {L} _ {\Delta E} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ E _ {2} \left(\mathbf {x} _ {\mathbf {i}}\right) - E _ {0} \left(\mathbf {x} _ {\mathbf {i}}\right) \right] \tag {4}
+$$
+
+where $N$ is the number of few-shot ID samples during fine-tuning and $E_{2}(\mathbf{x}_{\mathrm{i}})$ is the energy score for $\mathbf{x}_{\mathrm{i}}$ after masking the image feature, formally calculated as:
+
+$$
+E _ {2} \left(\mathbf {x} _ {\mathrm {i}}\right) = - \log \sum_ {j = 1} ^ {K} e ^ {s _ {j} ^ {\prime} \left(\mathbf {x} _ {\mathrm {i}}\right) / \tau} \tag {5}
+$$
+
+$$
+s _ {j} ^ {\prime} \left(\mathbf {x} _ {\mathbf {i}}\right) = \left(\mathbf {z} _ {\mathbf {I}} \left(\mathbf {x} _ {\mathbf {i}}\right) \odot \mathbf {m} ^ {\prime} \left(\mathbf {x} _ {\mathbf {i}}\right)\right) \cdot \mathbf {z} _ {\mathbf {T}} \left(t _ {j}\right) \tag {6}
+$$
+
+Here, $\mathbf{m}'(\mathbf{x_i})$ is the mask that retains the top $p$ -proportion elements in $\mathbf{z_I}(\mathbf{x_i}) \odot \mathbf{h_1}(\mathbf{x_i})$ and $\mathbf{h_1}(\mathbf{x_i})$ is the text feature corresponding to the top-1 cosine similarity.
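Equations 4-6 can be sketched for a small batch as follows (a numpy illustration; the array shapes, names, and toy features are our assumptions, not the paper's code):

```python
import numpy as np

def l_delta_e(z_imgs, z_txts, p=0.5, tau=0.01):
    """L_DeltaE (Eq. 4): mean energy change after masking each image feature
    with m'(x_i), the mask that retains the top p-proportion of elements
    of z_I(x_i) (element-wise) h_1(x_i)."""
    losses = []
    for z_i in z_imgs:
        sims = z_txts @ z_i                               # s_j(x_i)
        e0 = -np.log(np.sum(np.exp(sims / tau)))          # Eq. 2
        h1 = z_txts[np.argmax(sims)]                      # top-1 text feature
        prod = z_i * h1
        k = max(1, int(round(p * prod.size)))             # elements to retain
        thresh = np.sort(prod)[::-1][k - 1]
        mask = (prod >= thresh).astype(float)             # m'(x_i)
        sims_masked = z_txts @ (z_i * mask)               # Eq. 6
        e2 = -np.log(np.sum(np.exp(sims_masked / tau)))   # Eq. 5
        losses.append(e2 - e0)
    return float(np.mean(losses))

# Toy batch: one image feature, two class text features.
z_imgs = np.array([[0.6, -0.2, 0.5, -0.3]])
z_txts = np.array([[0.5, 0.4, -0.1, -0.7],
                   [0.1, -0.6, 0.7, 0.2]])
loss = l_delta_e(z_imgs, z_txts, p=0.5)
```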
+
+Theorem 3.4. [EBM increases the lower bound of $\Delta$ Energy] Let $\mathbf{h}_{\mathbf{1}}(\mathbf{x}_{\mathbf{i}})$ denote the text feature that has the top-1 similarity with the image feature $\mathbf{z_I}(\mathbf{x_i})$ . The corresponding similarity is computed as $s_{\hat{y}_1}(\mathbf{x_i}) = \mathbf{z_I}(\mathbf{x_i})\cdot \mathbf{h_1}(\mathbf{x_i})$ . Suppose that $\mathcal{L}_{\Delta E}\leq \varepsilon_{E}$ with $c = 1$ ; then, under the condition that:
+
+$$
+\sum_ {i = 1} ^ {N} \left( e ^ {s _ {\hat {y} _ {1}} \left(\mathbf {x} _ {\mathbf {i}}\right) / \tau} - e ^ {\tilde {s} _ {\hat {y} _ {1}} \left(\mathbf {x} _ {\mathbf {i}}\right) / \tau} \right) \geq \sum_ {i = 1} ^ {N} \left(e ^ {\varepsilon_ {E}} - 1\right) e ^ {- E _ {2} \left(\mathbf {x} _ {\mathbf {i}}\right)} \tag {7}
+$$
+
+we have $\frac{1}{N}\sum_{i = 1}^{N}\Delta \mathrm{Energy}(\mathbf{x_i})\geq -\mathcal{L}_{\Delta E}.$
+
+Theorem 3.4 implies that, provided the change in the VLM's predictions after re-alignment is not too small and satisfies the condition in Equation 7, minimizing $\mathcal{L}_{\Delta E}$ increases the lower bound of $\Delta$ Energy for closed-set classes. Moreover, we theoretically demonstrate that the proposed EBM loss in Equation 4 also leads to domain-consistent Hessians of the classification loss, which serve as a strong indicator of OOD generalization (Rame et al., 2022; Hemati et al., 2023).
+
+Theorem 3.5. [EBM leads to domain-consistent Hessian] Given the ID training data sampled from domain $\mathcal{S}$ and the learnable parameter $\theta$ in VLM, we denote the masked domain as $\mathcal{S}'$ . We represent the empirical classification loss on the domain $\mathcal{D}$ as $\widehat{\mathcal{E}}_{\mathcal{D}}(\theta)$ . Let $\widehat{\mathbf{G}}_{\mathcal{D}}(\theta)$ and $\widehat{\mathbf{H}}_{\mathcal{D}}(\theta)$ be the gradient vector and Hessian matrix of empirical risk $\widehat{\mathcal{E}}_{\mathcal{D}}(\theta)$ over parameter $\theta$ , respectively. In this paper, we propose to minimize $\mathcal{L}_{\Delta E}$ . The distance between the unmasked and masked image feature is assumed to satisfy: $\| \mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) - (\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) \odot \mathbf{m}^{\prime}(\mathbf{x}_{\mathbf{i}})) \|_2 \leq \varepsilon$ . Then the local optimum $\theta$ of $\min \mathcal{L}_{\Delta E}$ satisfies:
+
+$$
+| \boldsymbol {\theta} ^ {\top} (\widehat {\mathbf {H}} _ {\mathcal {S}} (\boldsymbol {\theta}) - \widehat {\mathbf {H}} _ {\mathcal {S} ^ {\prime}} (\boldsymbol {\theta})) \boldsymbol {\theta} | \leq \frac {\varepsilon}{N} \sum_ {i = 1} ^ {N} | \boldsymbol {\theta} ^ {\top} \nabla_ {\boldsymbol {\theta}} ^ {2} \mathbf {z} _ {\mathbf {T}} (\mathbf {x} _ {\mathbf {i}}) \boldsymbol {\theta} |
+$$
+
+Proposition 3.6. [EBM bounds OOD generalization] Let $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and $\tilde{\mathbf{z}}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ denote the image features from the source domain $(\mathcal{S})$ and target domain $(\mathcal{T})$ , respectively. We assume that $||\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) - \tilde{\mathbf{z}}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})||_2 \leq \varepsilon_1$ . By applying a second-order Taylor expansion and utilizing the domain-consistent Hessians of Theorem 3.5, the OOD generalization gap between the source domain $(\mathcal{S})$ and target domain $(\mathcal{T})$ is upper bounded by the following inequality:
+
+$$
+\max _ {\{\theta : | \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | \leq \epsilon \}} | \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | \lesssim | \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta^ {*}) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | + \max \frac {1}{2} | \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) \theta | + O (\varepsilon_ {1})
+$$
+
+where $\theta^{*}$ is a local minimum across all domains, i.e., $\nabla_{\theta}\widehat{\mathcal{E}}_{\mathcal{D}}(\theta^{*}) = \mathbf{0}$ .
+
+Therefore, by connecting the EBM loss with the Hessians of the empirical classification loss, we show that the EBM loss yields a bound on the performance gap between closed-set ID data and closed-set OOD data. This implies that optimizing $\Delta$ Energy with the EBM loss also optimizes for OOD generalization.
+
+Table 1: OOD detection between closed-set data and open-set OOD data based on ImageNet-1k: OOD detection measured by AUROC and FPR95 over the mixture of closed-set test sets and open-set OOD test sets.
+
+| DATA | Metric | Energy | ODIN | ReAct | CLIPN | MSP | MaxLogit | MCM | NegLabel | CSP | ΔEnergy (Ours) |
| ID vs. | FPR95↓ | 76.72 | 51.71 | 80.38 | 64.37 | 51.72 | 69.12 | 53.34 | 77.31 | 68.49 | 46.40 (2.58) |
| Semantic-shifted OOD | AUROC↑ | 76.94 | 85.61 | 74.06 | 81.44 | 85.64 | 80.28 | 85.81 | 75.73 | 78.84 | 87.10 (0.75) |
| Covariate-shifted OOD vs. | FPR95↓ | 83.94 | 61.25 | 85.05 | 84.44 | 69.20 | 79.80 | 70.30 | 82.81 | 79.76 | 67.16 (0.38) |
| Semantic-shifted OOD | AUROC↑ | 67.21 | 77.10 | 64.66 | 64.64 | 78.64 | 70.52 | 75.66 | 67.38 | 67.89 | 78.68 (0.57) |
+
+Table 2: OOD detection between closed-set data and open-set OOD data based on PACS and VLCS: OOD detection measured by AUROC and FPR95 over the mixture of closed-set OOD and open-set OOD test sets.
+
+| DATA | PACS vs. Open-Set (AUC↑ / FPR95↓) | VLCS vs. Open-Set (AUC↑ / FPR95↓) | AVG |
| Method | DTD | Food101 | Caltech101 | DTD | Food101 | Caltech101 | AUC↑ / FPR95↓ |
| Energy | 82.4 / 67.6 | 95.9 / 26.0 | 86.7 / 52.2 | 55.3 / 88.8 | 85.8 / 48.3 | 53.3 / 86.3 | 76.6 / 61.5 |
| ReAct | 89.5 / 44.9 | 98.1 / 9.9 | 89.8 / 43.2 | 52.8 / 89.7 | 86.7 / 47.8 | 61.4 / 83.2 | 79.7 / 53.1 |
| CLIPN | 93.6 / 40.3 | 96.2 / 25.6 | 88.1 / 56.1 | 80.4 / 62.7 | 88.9 / 46.6 | 72.5 / 74.1 | 86.6 / 50.9 |
| MaxLogit | 89.5 / 45.1 | 97.9 / 13.1 | 88.9 / 46.8 | 60.8 / 87.6 | 88.9 / 45.4 | 69.4 / 81.6 | 82.6 / 53.3 |
| MSP | 97.9 / 9.8 | 98.9 / 4.6 | 95.8 / 20.9 | 84.4 / 55.3 | 93.7 / 35.1 | 88.9 / 48.8 | 93.3 / 29.1 |
| ODIN | 99.1 / 1.8 | 99.3 / 1.0 | 97.4 / 6.7 | 83.1 / 46.5 | 91.3 / 28.2 | 86.8 / 40.9 | 92.8 / 17.9 |
| MCM | 98.9 / 4.3 | 99.2 / 3.4 | 97.0 / 13.7 | 84.2 / 55.1 | 93.3 / 36.4 | 88.5 / 50.4 | 93.5 / 27.2 |
| NegLabel | 99.3 / 4.2 | 97.7 / 15.8 | 95.7 / 31.6 | 84.8 / 54.3 | 79.4 / 74.6 | 62.7 / 84.3 | 86.6 / 49.1 |
| CSP | 99.6 / 1.8 | 99.2 / 2.6 | 97.8 / 12.4 | 88.9 / 38.7 | 78.3 / 67.4 | 67.3 / 73.0 | 88.5 / 32.7 |
| ΔEnergy (Ours) | 98.1 / 6.5 | 99.2 / 2.4 | 96.1 / 14.3 | 85.3 / 53.2 | 94.1 / 31.9 | 89.5 / 47.3 | 93.7 / 25.9 |
+
+# 3.3 Overview of the proposed method
+
+Our theoretical analysis thus leads to the design of a new fine-tuning framework with concurrent optimization for both tasks. As illustrated in Figure 2, we prioritize computational efficiency by adopting prompt-tuning techniques. During fine-tuning, both the image encoder and the text encoder are frozen, and only the context vectors $\theta$ are learnable. Let $\mathcal{L}_{\mathrm{CE}}$ denote the cross-entropy loss and $\lambda_0$ denote a hyperparameter chosen via validation. The final optimization objective of the EBM method is then:
+
+$$
+\mathcal {L} _ {\mathrm {E B M}} = \mathcal {L} _ {\mathrm {C E}} + \lambda_ {0} e ^ {\mathcal {L} _ {\Delta E}} \tag {8}
+$$
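At the level of loss values, Equation 8 amounts to the following (a minimal sketch; in practice both terms are differentiable functions of the context vectors $\theta$, and $\lambda_0$ is chosen by validation):

```python
import math

def ebm_objective(ce_loss, delta_e_loss, lam0=1.0):
    """Eq. 8: cross-entropy plus an exponentiated Delta-Energy term."""
    return ce_loss + lam0 * math.exp(delta_e_loss)
```

One plausible reading of the exponential is that it keeps the regularizer positive and penalizes large values of $\mathcal{L}_{\Delta E}$ more sharply than a linear term would.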
+
+# 4 Experiments
+
+In this section, motivated by the remarkable success of the vision-language model CLIP (Radford et al., 2021) in learning general visual knowledge, we conduct experiments based on CLIP. First, we conduct extensive experiments to validate the effectiveness of the proposed $\Delta$ Energy in zero-shot OOD detection across various datasets. Furthermore, we evaluate the effect of the proposed EBM method in enhancing both OOD generalization and OOD detection. Due to space limitations, we provide ablation studies in Appendix G to validate our theoretical findings.
+
+# 4.1 Effectiveness of $\Delta$ Energy for OOD detection
+
+Datasets We evaluate OOD detection performance on 4 benchmarks: 1) discrimination between closed-set OOD data and open-set OOD data based on ImageNet-1k, 2) discrimination between closed-set OOD data and open-set OOD data based on cross-dataset images, 3) hard OOD detection on different splits of ImageNet-1k, and 4) the conventional OOD detection benchmark. For the first two benchmarks, we evaluate the models' OOD detection capabilities under more challenging scenarios, where the datasets exhibit both covariate and semantic shifts: models are required to distinguish between various types of closed-set OOD data (covariate shifts) and open-set OOD data (semantic shifts). The two settings are as follows:
+
+1) Setup-I: open-set discrimination on the large-scale ImageNet-1k dataset. Following the prior work (Zhu et al., 2024), we split ImageNet-1k (Krizhevsky et al., 2017) into open and closed sets w.r.t. class labels: we randomly designate $40\%$ of the ImageNet classes as the closed set and the remaining $60\%$ as the open set. The samples from ImageNet-A (Hendrycks et al., 2021b), ImageNet-R (Hendrycks et al., 2021a), ImageNet-Sketch (Wang et al., 2019), and ImageNet-V2 (Recht et al., 2019), which share the same class labels as the closed-set ID data, are considered closed-set OOD data.
+2) Setup-II: open-set discrimination on cross-dataset images. Using cross-dataset examples as the open-set is another established protocol (Shafaei et al., 2018; Kong and Ramanan, 2021). Following
+
+Table 3: Tuning-based results on Setup-I: Comparison with competitive fine-tuning methods based on CLIP ViT-B/16 using 16 samples per class. In the testing phase of LoCoOp, NegPrompt and GalLoP, we use the GL-MCM score (Miyai et al., 2023) to compute OOD detection results.
+
+| Algorithm OOD Score | CoOp MCM | CoCoOp MCM | CLIP-Adapter MCM | Bayes-CAL MCM | DPLCLIP MCM | CRoFT MCM | LoCoOp GL | NegPrompt GL | GalLoP GL | EBM (Ours) ΔEnergy |
| ID ACC ↑ | 82.11 | 81.59 | 79.91 | 82.31 | 82.46 | 82.03 | 82.14 | 81.46 | 84.51 | 81.52 (0.4) |
| OOD ACC ↑ | 61.36 | 62.58 | 60.58 | 61.95 | 61.53 | 62.83 | 61.18 | 60.39 | 61.75 | 63.28 (0.2) |
| AUROC ↑ | 72.94 | 76.38 | 74.86 | 74.44 | 72.81 | 76.30 | 70.03 | 60.86 | 56.97 | 81.90 (1.9) |
| FPR95 ↓ | 73.15 | 70.30 | 70.92 | 72.34 | 73.07 | 69.78 | 74.33 | 86.66 | 91.17 | 65.90 (1.7) |
+
+the prior work (Gulrajani and Lopez-Paz, 2021; Cha et al., 2022; Ye et al., 2021), we use the popular domain generalization datasets PACS (Li et al., 2017) and VLCS (Li et al., 2017) as the closed-set data. We evaluate the models' ability to distinguish between closed-set OOD data and cross-dataset images by using datasets of different styles, namely Caltech101 (Bansal et al., 2021), DTD (Sharan et al., 2014), and Food101 (Bossard et al., 2014), as open-set OOD examples. All overlapping classes are removed from the three open-set OOD datasets.
+
+In addressing hard OOD detection scenarios, we follow the prior works (Ming et al., 2022a; Li et al., 2024a; Chen et al., 2024) and partition the ImageNet-1k dataset into two parts: one part serves as ID data, while the other serves as OOD data. For conventional OOD detection, we use a popular benchmark in which ImageNet-1k (Krizhevsky et al., 2017) with 1,000 classes is the ID dataset, and the OOD datasets include subsets of Texture (Cimpoi et al., 2014), iNaturalist (Van Horn et al., 2018), Places (Zhou et al., 2017), and SUN (Xiao et al., 2010).
+
+Comparison methods To substantiate the effectiveness of the proposed OOD score, we conduct an empirical analysis of two distinct categories of VLM-based OOD detection methods: zero-shot approaches and methods that combine the CLIP image encoder with classical detection scores. For zero-shot methods, we select four recent methods: MCM (Ming et al., 2022a), CLIPN (Wang et al., 2023a), NegLabel (Jiang et al., 2024), and CSP (Chen et al., 2024). MCM employs the original CLIP, applying the maximum softmax probability operation to the similarities for detection, while CLIPN involves an additional training phase during pre-training, specifically training a negative text encoder on large external data. Both NegLabel and CSP introduce additional negative labels, enabling more accurate OOD detection. For the second group of methods, we adapt previous logits-based methodologies to the CLIP image encoder, including MSP (Hendrycks and Gimpel, 2016), Energy (Liu et al., 2020), MaxLogit (Hendrycks et al., 2019), ReAct (Sun et al., 2021), and ODIN (Liang et al., 2017a). Following previous studies (Wang et al., 2023a), we use CLIP based on ViT-B/16, pre-trained from OpenCLIP.
+
+Metrics Two OOD detection metrics are used. The first is the False Positive Rate at a $95\%$ True Negative Rate (FPR95), which denotes the rate of falsely identified OOD instances when the true negative rate is maintained at $95\%$ . The second is the Area Under the Receiver Operating Characteristic curve (AUROC), representing the measure of OOD ranking across various classification thresholds.
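For concreteness, both metrics can be computed from raw detection scores as follows (a standard sketch, not the paper's evaluation code; we assume a higher score means more ID-like):

```python
import numpy as np

def auroc_and_fpr95(id_scores, ood_scores):
    """AUROC and FPR95 for a detector whose score is higher on ID data."""
    id_s = np.asarray(id_scores, dtype=float)
    ood_s = np.asarray(ood_scores, dtype=float)
    # AUROC = P(random ID score > random OOD score), ties counted as 1/2.
    diff = id_s[:, None] - ood_s[None, :]
    auroc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    # FPR95: fraction of OOD passing the threshold that admits 95% of ID.
    thresh = np.percentile(id_s, 5)
    fpr95 = np.mean(ood_s >= thresh)
    return float(auroc), float(fpr95)

# Perfectly separated scores give AUROC 1.0 and FPR95 0.0.
a, f = auroc_and_fpr95([3, 4, 5, 6], [0, 1, 2])
```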
+
+Experimental results For zero-shot OOD detection, we set $c = 2$ and $\tau = 0.01$ in $\Delta$ Energy. We present the results of $\Delta$ Energy and its competitors in discriminating between closed-set data and open-set OOD data in Tables 1-2. Due to space limitations, we provide the results on the conventional and hard OOD detection datasets in Tables 5-7 in Appendix G. The proposed $\Delta$ Energy obtains the top-1 AUROC performance on all benchmarks. Compared with the vanilla energy-based OOD detection method (Liu et al., 2020), our $\Delta$ Energy method consistently achieves better OOD detection performance across all 4 benchmarks. Notably, as shown in Tables 1-2, our approach surpasses the competitive NegLabel and CSP methods by a large margin, demonstrating the superiority of the proposed method in distinguishing different semantics in open-world scenarios. The inferior performance of NegLabel and CSP may stem from their underlying assumption that OOD samples possess a variety of distinct visual properties, which may not hold in hard OOD detection scenarios where OOD samples are distributed close to closed-set data. Furthermore, the presence of covariate shifts reduces the similarity between closed-set OOD and ID data, thereby making it more difficult to distinguish closed-set OOD data from open-set OOD data.
+
+# 4.2 Effectiveness of the EBM loss for both OOD generalization and OOD detection
+
+Datasets To substantiate the effectiveness of the proposed EBM method for both tasks, we evaluate our method under two data settings (Setup-I and Setup-II), each incorporating both covariate shifts and semantic shifts, as introduced in Section 4.1. For evaluating OOD generalization performance on
+
+Table 4: Tuning-based results on Setup-II: Comparison with competitive fine-tuning methods based on CLIP ViT-B/16 using 16 samples per class.
+
+| DATA | PACS OOD ACC↑ | PACS vs. Open-Set (AUROC↑ / FPR95↓) | VLCS OOD ACC↑ | VLCS vs. Open-Set (AUROC↑ / FPR95↓) | AVG FPR↓ |
| Algorithm | DTD | Food101 | Caltech101 | DTD | Food101 | Caltech101 |
| ZS CLIP+ MCM | 96.1 (0.0) | 98.9 / 4.3 | 99.2 / 3.4 | 97.0 / 13.7 | 75.1 (0.0) | 84.2 / 55.1 | 93.3 / 36.4 | 88.5 / 50.4 | 27.2 |
| ZS CLIP + ΔEnergy | 96.1 (0.0) | 98.1 / 6.5 | 99.2 / 2.4 | 96.1 / 14.3 | 75.1 (0.0) | 85.3 / 53.2 | 94.1 / 29.7 | 89.5 / 47.3 | 25.5 |
| CoOp | 96.3 (0.7) | 98.9 / 4.6 | 99.2 / 3.1 | 97.4 / 11.2 | 78.3 (1.7) | 89.3 / 37.6 | 91.4 / 40.1 | 85.9 / 47.4 | 24.0 |
| CoCoOp | 96.8 (0.5) | 98.8 / 4.0 | 98.7 / 5.8 | 97.5 / 10.9 | 78.9 (0.8) | 88.3 / 44.9 | 89.2 / 44.6 | 86.8 / 48.6 | 26.5 |
| CLIP-Adapter | 96.1 (0.0) | 99.0 / 4.1 | 99.2 / 3.6 | 97.4 / 12.1 | 77.3 (0.7) | 85.9 / 52.4 | 93.8 / 35.5 | 89.3 / 48.5 | 26.0 |
| Bayes-CAL | 96.6 (0.5) | 98.5 / 7.2 | 98.3 / 8.8 | 95.9 / 16.6 | 79.6 (0.9) | 88.0 / 47.7 | 84.9 / 61.1 | 84.8 / 56.7 | 33.0 |
| DPLCLIP | 95.6 (0.2) | 96.6 / 21.6 | 97.1 / 18.1 | 92.2 / 35.1 | 76.5 (1.3) | 88.9 / 37.1 | 86.8 / 43.2 | 84.2 / 50.2 | 34.2 |
| LoCoOp | 96.5 (0.3) | 98.1 / 9.7 | 98.4 / 8.6 | 95.9 / 19.4 | 76.3 (0.7) | 86.1 / 50.9 | 84.3 / 62.8 | 83.3 / 59.7 | 35.2 |
| GalLop | 96.9 (0.2) | 98.6 / 5.6 | 98.3 / 8.9 | 95.4 / 18.2 | 81.3 (0.9) | 87.4 / 39.0 | 91.0 / 43.4 | 80.7 / 59.1 | 29.0 |
| CROFT | 97.3 (0.1) | 94.0 / 33.0 | 89.9 / 57.7 | 84.0 / 71.2 | 80.2 (1.0) | 90.2 / 40.6 | 80.1 / 70.3 | 79.0 / 66.4 | 56.5 |
| NegPrompt | 97.1(0.4) | 88.9 / 42.2 | 97.5 / 15.2 | 94.4 / 22.6 | 77.8 (0.6) | 93.4 / 29.6 | 89.8 / 39.6 | 75.2 / 72.4 | 36.9 |
| EBM + MCM | 97.2 (0.1) | 98.6 / 5.6 | 99.0 / 4.3 | 96.8 / 13.7 | 81.7 (0.6) | 91.6 / 32.6 | 93.3 / 32.7 | 86.5 / 44.1 | 22.2 |
| EBM + ΔEnergy | 97.2 (0.1) | 98.2 / 6.3 | 98.7 / 3.8 | 96.6 / 14.8 | 81.7 (0.6) | 91.7 / 30.2 | 93.6 / 28.6 | 86.4 / 42.5 | 21.0 |
+
+Setup-II, we utilize the leave-one-domain-out validation protocol (Gulrajani and Lopez-Paz, 2020) that uses three domains as closed-set ID data and the remaining one as closed-set OOD data.
+
+Comparison methods We conduct an empirical analysis of distinct categories of CLIP-based lightweight fine-tuning methods. In addition to comparing our approach with popular fine-tuning techniques, such as the widely-used CoOp (Zhou et al., 2021), CoCoOp (Zhou et al., 2022) and CLIP-Adapter (Gao et al., 2023), we also evaluate it against CLIP-based methods designed specifically for OOD generalization or OOD detection. For OOD generalization, we compare our EBM with methods like DPLCLIP (Zhang et al., 2021b) and Bayes-CAL (Zhu et al., 2023b, 2025b). For OOD detection, we consider approaches such as LoCoOp (Miyai et al., 2024b), NegPrompt (Li et al., 2024a), CRoFT (Zhu et al., 2024), and GalLoP (Lafon et al., 2024), with CRoFT and GalLoP explicitly designed to optimize both OOD generalization and detection.
+
+Experiment details We conduct experiments on the CLIP ViT-B/16 model using 16 samples per ID class. For the prompt learning methods, we randomly initialize the context vectors and set the number of context tokens to 16. Unless otherwise specified, methods are trained with the SGD optimizer, a learning rate of 0.002, and a batch size of 32 for fair comparison. We set the maximum number of training epochs to 30 for all models. For all baseline methods, we follow the hyperparameter search protocols recommended in their original papers. For our EBM method, we search for $\lambda_0$ in [0.1, 0.5, 1.0] for Setup-I and set $\lambda_0 = 2$ for Setup-II. We set $\tau = 0.01$ and $c = 2$ in $\Delta$ Energy, and vary the masking proportion $p$ within the range [0.4, 0.5, 0.6] in $\mathcal{L}_{\Delta E}$ . For each method, we repeat the experiments 3 times with different random splits to eliminate the effects of randomness. Finally, we report the average classification accuracy on closed-set test sets, as well as the average FPR95 and AUROC results for distinguishing between open-set OOD data and closed-set OOD data based on MCM (GL-MCM) (Ming et al., 2022a; Miyai et al., 2023) or our $\Delta$ Energy.
+
+Experiment results We present the results of Setup-I in Table 3, where the proposed EBM method establishes the overall best performance in both OOD generalization and OOD detection. Notably, our method outperforms the competitive CRoFT (Zhu et al., 2024) method, which was introduced to achieve concurrent optimization of both tasks: we obtain a $0.45\%$ improvement in OOD accuracy on closed-set OOD data and more than $5\%$ improvement in AUROC when discriminating between closed-set OOD datasets and open-set OOD classes. In contrast, although the competitors can achieve higher ID test accuracy, they often struggle with the OOD generalization or OOD detection task, sometimes performing even worse than the zero-shot CLIP model. Moreover, recent methods aimed at improving VLMs' OOD detection, such as GalLoP (Lafon et al., 2024) and NegPrompt (Li et al., 2024a), may not scale well to handling different types of distribution shifts on the large-scale ImageNet-1k dataset. The results of Setup-II are shown in Table 4, where the EBM method also demonstrates the best overall performance on both tasks. Consistent with Setup-I, its competitors, especially those designed for OOD generalization (Zhu et al., 2023b; Zhang et al., 2021b; Zhu et al., 2024), achieve even higher FPR95 scores than the zero-shot CLIP model. Notably, on the more challenging VLCS dataset, our EBM method shows significant improvements over CRoFT and GalLoP on both tasks. This is likely because CRoFT performs adapter-tuning on both the image and text inputs, which can lead to forgetting the general knowledge encoded in the pre-trained CLIP model. In contrast, by fine-tuning only the language side, our method optimizes the energy change when re-aligning vision-language modalities, resulting in more robust performance across various datasets. 
These results highlight the EBM method's ability to more effectively enhance both OOD generalization and OOD detection.
+
+# 5 Conclusions
+
+Different from the vanilla energy-based OOD score, we propose a novel zero-shot OOD detection method, $\Delta$ Energy, which measures the energy change when re-aligning vision-language modalities. Both theoretical and experimental results demonstrate that $\Delta$ Energy provides more reliable OOD detection than previous methods. Furthermore, we introduce a $\Delta$ Energy-based bound maximization during fine-tuning VLMs. The proposed bound maximization is theoretically proven to not only improve OOD detection but also lead to optimization for OOD generalization. Building on this insight, we have developed a unified fine-tuning framework that enables the concurrent optimization of both tasks. Extensive experiments on challenging OOD detection and generalization benchmarks demonstrate the superiority of our method.
+
+# Acknowledgements
+
+This work is supported by the National Natural Science Foundation of China (No. 62572313, No. 62106139).
+
+# References
+
+Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert D Nowak, and Yixuan Li. Feed two birds with one scone: Exploiting wild data for both out-of-distribution generalization and detection. In International Conference on Machine Learning, pages 1454-1471. PMLR, 2023.
+Yichen Bai, Zongbo Han, Bing Cao, Xiaoheng Jiang, Qinghua Hu, and Changqing Zhang. Id-like prompt learning for few-shot out-of-distribution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17480-17489, 2024.
+Monika Bansal, Munish Kumar, Monika Sachdeva, and Ajay Mittal. Transfer learning for image classification using vgg19: Caltech-101 image data set. Journal of ambient intelligence and humanized computing, pages 1-12, 2021.
+Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Songhao Piao, and Furu Wei. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. Advances in Neural Information Processing Systems, 35:32897-32912, 2022.
+Petra Bevandić, Ivan Krešo, Marin Oršić, and Siniša Šegvić. Dense outlier detection and open-set recognition based on training with noisy negative images. arXiv preprint arXiv:2101.09193, 2021.
+Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101-mining discriminative components with random forests. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI 13, pages 446-461. Springer, 2014.
+Junbum Cha, Kyungjae Lee, Sungrae Park, and Sanghyuk Chun. Domain generalization by mutual-information regularization with pre-trained models. In European Conference on Computer Vision, pages 440-457. Springer, 2022.
+Mengyuan Chen, Junyu Gao, and Changsheng Xu. Conjugated semantic pool improves ood detection with pre-trained vision-language models. arXiv preprint arXiv:2410.08611, 2024.
+Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3606-3613, 2014.
+Xuefeng Du, Gabriel Gozum, Yifei Ming, and Yixuan Li. Siren: Shaping representations for detecting out-of-distribution objects. Advances in Neural Information Processing Systems, 35:20434-20449, 2022a.
+Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. Vos: Learning what you don't know by virtual outlier synthesis. arXiv preprint arXiv:2202.01197, 2022b.
+
+Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050-1059. PMLR, 2016.
+Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, pages 1-15, 2023.
+Chuanxing Geng, Sheng-jun Huang, and Songcan Chen. Recent advances in open set recognition: A survey. IEEE transactions on pattern analysis and machine intelligence, 43(10):3614-3631, 2020.
+Muhammad Waleed Gondal, Jochen Gast, Inigo Alonso Ruiz, Richard Droste, Tommaso Macri, Suren Kumar, and Luitpold Staudigl. Domain aligned clip for few-shot classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5721-5730, 2024.
+Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, KJ Joseph, and Balaji Vasan Srinivasan. Copl: Contextual prompt learning for vision-language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 18090-18098, 2024.
+Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan. Finetune like you pretrain: Improved finetuning of zero-shot vision models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19338-19347, 2023.
+Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. arXiv preprint arXiv:2007.01434, 2020.
+Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In ICLR, 2021.
+Sobhan Hemati, Guojun Zhang, Amir Estiri, and Xi Chen. Understanding hessian alignment for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19004-19014, 2023.
+Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
+Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling out-of-distribution detection for real-world settings. arXiv preprint arXiv:1911.11132, 2019.
+Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340-8349, 2021a.
+Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021b.
+Chen Huang, Skyler Seto, Samira Abnar, David Grangier, Navdeep Jaitly, and Josh Susskind. Aggregate-and-adapt natural language prompts for downstream generalization of clip. arXiv preprint arXiv:2410.23698, 2024.
+Tony Huang, Jack Chu, and Fangyun Wei. Unsupervised prompt learning for vision-language models. arXiv preprint arXiv:2204.03649, 2022.
+Jingjing Jiang, Ziyi Liu, and Nanning Zheng. Correlation information bottleneck: Towards adapting pretrained multimodal models for robust visual question answering, 2023.
+Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, and Bo Han. Negative label guided ood detection with pretrained vision-language models. arXiv preprint arXiv:2403.20078, 2024.
+Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Maple: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19113-19122, 2023a.
+
+Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Self-regulating prompts: Foundational model adaptation without forgetting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15190-15200, 2023b.
+Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, et al. Wilds: A benchmark of in-the-wild distribution shifts. arXiv:2012.07421, 2020.
+Shu Kong and Deva Ramanan. Opengan: Open-set recognition via open data generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 813-822, 2021.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017.
+Marc Lafon, Elias Ramzi, Clément Rambour, Nicolas Audebert, and Nicolas Thome. Gallop: Learning global and local prompts for vision-language models. In European Conference on Computer Vision, pages 264-282. Springer, 2024.
+Chaohua Li, Enhao Zhang, Chuanxing Geng, and Songcan Chen. Recent advances in out-of-distribution detection with clip-like models: A survey. arXiv preprint arXiv:2505.02448, 2025.
+Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE international conference on computer vision, pages 5542-5550, 2017.
+Tianqi Li, Guansong Pang, Xiao Bai, Wenjun Miao, and Jin Zheng. Learning transferable negative prompts for out-of-distribution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17584-17594, 2024a.
+Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, and Jian Yang. Promptkd: Unsupervised prompt distillation for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26617-26626, 2024b.
+Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690, 2017a.
+Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690, 2017b.
+Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. Advances in neural information processing systems, 33:21464-21475, 2020.
+Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. Advances in neural information processing systems, 31, 2018.
+Nicolai Meinshausen and Peter Bühlmann. Maximin effects in inhomogeneous large-scale data. The Annals of Statistics, 43(4), 2015.
+Yifei Ming and Yixuan Li. How does fine-tuning impact out-of-distribution detection for vision-language models? International Journal of Computer Vision, 132(2):596-609, 2024.
+Yifei Ming, Ziyang Cai, Jiuming Gu, Yiyou Sun, Wei Li, and Yixuan Li. Delving into out-of-distribution detection with vision-language representations. Advances in neural information processing systems, 35:35087-35102, 2022a.
+Yifei Ming, Ying Fan, and Yixuan Li. Poem: Out-of-distribution detection with posterior sampling. In International Conference on Machine Learning, pages 15650-15665. PMLR, 2022b.
+Atsuyuki Miyai, Qing Yu, Go Irie, and Kiyoharu Aizawa. Zero-shot in-distribution detection in multi-object settings using vision-language foundation models. arXiv preprint arXiv:2304.04521, 2023.
+
+Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Yueqian Lin, Qing Yu, Go Irie, Shafiq Joty, Yixuan Li, Hai Li, et al. Generalized out-of-distribution detection and beyond in vision language model era: A survey. arXiv preprint arXiv:2407.21794, 2024a.
+Atsuyuki Miyai, Qing Yu, Go Irie, and Kiyoharu Aizawa. Locoop: Few-shot out-of-distribution detection via prompt learning. Advances in Neural Information Processing Systems, 36, 2024b.
+Vivek Narayanaswamy, Yemen Mubarka, Rushil Anirudh, Deepta Rajan, and Jayaraman J. Thiagarajan. Exploring inlier and outlier specification for improved medical ood detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pages 4589-4598, 2023.
+Jun Nie, Yonggang Zhang, Zhen Fang, Tongliang Liu, Bo Han, and Xinmei Tian. Out-of-distribution detection with negative prompts. In The twelfth international conference on learning representations, 2024.
+Stanislav Pidhorskyi, Ranya Almohsen, and Gianfranco Doretto. Generative probabilistic novelty detection with adversarial autoencoders. Advances in neural information processing systems, 31, 2018.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021.
+Alexandre Rame, Corentin Dancette, and Matthieu Cord. Fishr: Invariant gradient variances for out-of-distribution generalization. In International Conference on Machine Learning, pages 18347-18377. PMLR, 2022.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In International conference on machine learning, pages 5389-5400. PMLR, 2019.
+Kuniaki Saito, Donghyun Kim, and Kate Saenko. Openmatch: Open-set semi-supervised learning with open-set consistency regularization. Advances in Neural Information Processing Systems, 34:25956-25967, 2021.
+Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. Breeds: Benchmarks for subpopulation shift. 2020.
+Walter J Scheirer, Lalit P Jain, and Terrance E Boult. Probability models for open set recognition. IEEE transactions on pattern analysis and machine intelligence, 36(11):2317-2324, 2014.
+Alireza Shafaei, Mark Schmidt, and James J Little. A less biased evaluation of out-of-distribution sample detectors. arXiv preprint arXiv:1809.04729, 2018.
+Lavanya Sharan, Ruth Rosenholtz, and Edward H Adelson. Accuracy and speed of material categorization in real-world images. Journal of vision, 14(9):12-12, 2014.
+Mainak Singha, Harsh Pal, Ankit Jha, and Biplab Banerjee. Ad-clip: Adapting domains in prompt space using clip. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4355-4364, 2023.
+Lin Song, Ruoyi Xue, Hang Wang, Hongbin Sun, Yixiao Ge, Ying Shan, et al. Meta-adapter: An online few-shot learner for vision-language model. Advances in Neural Information Processing Systems, 36:55361-55374, 2023.
+Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon Ling, and Guohao Peng. Conditional gaussian distribution learning for open set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13480-13489, 2020.
+Yiyou Sun, Chuan Guo, and Yixuan Li. React: Out-of-distribution detection with rectified activations. Advances in Neural Information Processing Systems, 34:144-157, 2021.
+
+Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pages 20827-20840. PMLR, 2022.
+Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769-8778, 2018.
+Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. Advances in Neural Information Processing Systems, 32, 2019.
+Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang. Vim: Out-of-distribution with virtual-logit matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4921-4930, 2022.
+Hualiang Wang, Yi Li, Huifeng Yao, and Xiaomeng Li. Clipn for zero-shot ood detection: Teaching clip to say no. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1802-1812, 2023a.
+Haoliang Wang, Chen Zhao, Yunhui Guo, Kai Jiang, and Feng Chen. Towards effective semantic ood detection in unseen domains: A domain generalization perspective. arXiv preprint arXiv:2309.10209, 2023b.
+Zhengbo Wang, Jian Liang, Ran He, Nan Xu, Zilei Wang, and Tieniu Tan. Improving zero-shot generalization for clip with synthesized prompts. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3032-3042, 2023c.
+Syed Talal Wasim, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan, and Mubarak Shah. Vita-clip: Video and text adaptive clip via multimodal prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23034-23044, 2023.
+Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning, pages 23965-23998. PMLR, 2022.
+Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pages 3485-3492. IEEE, 2010.
+Zehao Xiao, Jiayi Shen, Mohammad Mahdi Derakhshani, Shengcai Liao, and Cees G. M. Snoek. Any-shift prompting for generalization over distributions. 2024.
+Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, and Ziwei Liu. Openood: Benchmarking generalized out-of-distribution detection. 2022.
+Jingkang Yang, Kaiyang Zhou, and Ziwei Liu. Full-spectrum out-of-distribution detection. International Journal of Computer Vision, 131(10):2607-2622, 2023.
+Nanyang Ye, Kaican Li, Lanqing Hong, Haoyue Bai, Yiting Chen, Fengwei Zhou, and Zhenguo Li. Ood-bench: Benchmarking and understanding out-of-distribution generalization datasets and algorithms. CoRR, abs/2106.03721, 2021.
+Hongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. Hybrid models for open set recognition. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part III 16, pages 102-117. Springer, 2020.
+Jingyang Zhang, Jingkang Yang, Pengyun Wang, Haoqi Wang, Yueqian Lin, Haoran Zhang, Yiyou Sun, Xuefeng Du, Yixuan Li, Ziwei Liu, Yiran Chen, and Hai Li. Openood v1.5: Enhanced benchmark for out-of-distribution detection. arXiv preprint arXiv:2306.09301, 2023a.
+
+Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930, 2021a.
+Xin Zhang, Yusuke Iwasawa, Yutaka Matsuo, and Shixiang Shane Gu. Amortized prompt: Guide clip to domain transfer learning. arXiv preprint arXiv:2111.12853, 2021b.
+Yabin Zhang and Lei Zhang. Adaneg: Adaptive negative proxy guided ood detection with vision-language models. arXiv preprint arXiv:2410.20149, 2024.
+Yi Zhang, Ce Zhang, Xueting Hu, and Zhihai He. Unsupervised prototype adapter for vision-language models. In Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pages 197-209. Springer, 2023b.
+Yabin Zhang, Wenjie Zhu, Chenhang He, and Lei Zhang. Lapt: Label-driven automated prompt tuning for ood detection with vision-language models. In European Conference on Computer Vision, pages 271-288. Springer, 2024.
+Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1452-1464, 2017.
+Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. arXiv preprint arXiv:2109.01134, 2021.
+Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
+Zhi Zhou, Ming Yang, Jiang-Xin Shi, Lan-Zhe Guo, and Yu-Feng Li. Decoop: Robust prompt tuning with out-of-distribution detection. arXiv preprint arXiv:2406.00345, 2024.
+Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023a.
+Lin Zhu, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. Bayesian cross-modal alignment learning for few-shot out-of-distribution generalization. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 11461-11469, 2023b.
+Lin Zhu, Yifeng Yang, Qinying Gu, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. Croft: Robust fine-tuning with concurrent optimization for ood generalization and open-set ood detection. arXiv preprint arXiv:2405.16417, 2024.
+Lin Zhu, Yifeng Yang, Zichao Nie, Yuan Gao, Jiarui Li, Qinying Gu, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. Infobound: A provable information-bounds inspired framework for both ood generalization and ood detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025a.
+Lin Zhu, Weihan Yin, Fan Wu, Qinying Gu, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. Bayes-cal: Robust cross-modal alignment by bayesian approach for few-shot ood generalization. International Journal of Computer Vision, pages 1-34, 2025b.
+Yao Zhu, YueFeng Chen, Chuanlong Xie, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Yaowu Chen, et al. Boosting out-of-distribution detection with typical features. Advances in Neural Information Processing Systems, 35:20758-20769, 2022.
+Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International conference on learning representations, 2018.
+
+# NeurIPS Paper Checklist
+
+# 1. Claims
+
+Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
+
+Answer: [Yes]
+
+Justification: The abstract and introduction clearly state the paper's contributions and scope.
+
+Guidelines:
+
+- The answer NA means that the abstract and introduction do not include the claims made in the paper.
+- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
+- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
+- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
+
+# 2. Limitations
+
+Question: Does the paper discuss the limitations of the work performed by the authors?
+
+Answer: [Yes]
+
+Justification: The limitations are discussed in Appendix H.
+
+Guidelines:
+
+- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
+- The authors are encouraged to create a separate "Limitations" section in their paper.
+- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
+- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
+- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
+- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
+- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
+- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
+
+# 3. Theory assumptions and proofs
+
+Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
+
+Answer: [Yes]
+
+Justification: The full set of assumptions is provided in our theorems and the proofs are provided in Appendix.
+
+Guidelines:
+
+- The answer NA means that the paper does not include theoretical results.
+- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
+- All assumptions should be clearly stated or referenced in the statement of any theorems.
+- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
+- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
+- Theorems and Lemmas that the proof relies upon should be properly referenced.
+
+# 4. Experimental result reproducibility
+
+Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+Answer: [Yes]
+
+Justification: We have provided necessary information for reproduction in Section 4.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+
+- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+
+- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+
+- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+
+- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
+
+(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+# 5. Open access to data and code
+
+Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+Answer: [Yes]
+
+Justification: All utilized data are sourced from open-access platforms. The source code will be made publicly available upon the acceptance of this paper.
+
+Guidelines:
+
+- The answer NA means that paper does not include experiments requiring code.
+- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+# 6. Experimental setting/details
+
+Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+Answer: [Yes]
+
+Justification: All the training and test details necessary to understand the results have been provided in Section 4 and Appendix G.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+- The full details can be provided either with the code, in appendix, or as supplemental material.
+
+# 7. Experiment statistical significance
+
+Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+Answer: [Yes]
+
+Justification: We report standard deviations over different random seeds as error bars for the main experimental results.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+
+- The assumptions made should be given (e.g., Normally distributed errors).
+- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
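As a concrete illustration of the standard-deviation vs. standard-error distinction raised in the guidelines above, the following sketch (with hypothetical accuracy values) computes both for a set of runs over random seeds:

```python
import numpy as np

# Hypothetical accuracies (%) from runs with different random seeds.
runs = np.array([71.2, 70.8, 71.5, 70.9, 71.1])

mean = runs.mean()
std = runs.std(ddof=1)           # sample standard deviation (1-sigma error bar)
sem = std / np.sqrt(len(runs))   # standard error of the mean (always smaller)

print(f"{mean:.2f} +/- {std:.2f} (std over {len(runs)} seeds)")
```

Reporting which of the two quantities the `+/-` denotes matters: the standard error shrinks with the number of runs while the standard deviation does not, so the two answer different questions about variability.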
+
+# 8. Experiments compute resources
+
+Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+Answer: [Yes]
+
+Justification: The computational cost is analyzed in Appendix I.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
+- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+# 9. Code of ethics
+
+Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+Answer: [Yes]
+
+Justification: The research conforms with the NeurIPS Code of Ethics in every respect.
+
+Guidelines:
+
+- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+# 10. Broader impacts
+
+Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+Answer: [NA]
+
+Justification: Given the focus on improving VLMs in both OOD generalization and OOD detection, the paper prioritizes discussing technical methodologies and results rather than societal impacts.
+
+Guidelines:
+
+- The answer NA means that there is no societal impact of the work performed.
+- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+
+- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+# 11. Safeguards
+
+Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+Answer: [NA]
+
+Justification: The paper, which aims to improve VLMs in both OOD generalization and OOD detection, poses no such risks.
+
+Guidelines:
+
+- The answer NA means that the paper poses no such risks.
+- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+# 12. Licenses for existing assets
+
+Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+Answer: [Yes]
+
+Justification: We have properly credited the used assets. All our code will be made freely available to researchers for non-commercial use.
+
+Guidelines:
+
+- The answer NA means that the paper does not use existing assets.
+- The authors should cite the original paper that produced the code package or dataset.
+- The authors should state which version of the asset is used and, if possible, include a URL.
+- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+
+- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+- If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+# 13. New assets
+
+Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+Answer: [NA]
+
+Justification: The paper does not release new assets.
+
+Guidelines:
+
+- The answer NA means that the paper does not release new assets.
+- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+- The paper should discuss whether and how consent was obtained from people whose asset is used.
+- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+# 14. Crowdsourcing and research with human subjects
+
+Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+Answer: [NA]
+
+Justification: The paper does not involve crowdsourcing nor research with human subjects.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+Answer: [NA]
+
+Justification: The paper does not involve crowdsourcing nor research with human subjects.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+# 16. Declaration of LLM usage
+
+Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+Answer: [NA]
+
+Justification: The core method development in this research does not involve LLMs as any important, original, or non-standard components.
+
+Guidelines:
+
+- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
+
+# A Related Works
+
+Robust fine-tuning methods for VLM For training efficiency, many lightweight CLIP-based fine-tuning methods have been proposed to enhance generalization performance via prompt tuning (Singha et al., 2023; Huang et al., 2022; Khattak et al., 2023a; Wang et al., 2023c; Wasim et al., 2023; Goswami et al., 2024; Huang et al., 2024) or adapter tuning (Gondal et al., 2024; Zhang et al., 2023b; Song et al., 2023). Prompt tuning methods aim to achieve better vision-language alignment by fine-tuning only the input prompts. For example, using only few-shot samples for learning, CoOp (Zhou et al., 2021) significantly improved generalization over intensively tuned manual prompts via prompt learning. Motivated by learning generalizable prompts, CoCoOp (Zhou et al., 2022) was proposed to achieve generalization to unseen classes via conditional prompt learning. Adapter tuning is another popular lightweight fine-tuning approach, exemplified by CLIP-Adapter (Gao et al., 2023) and Tip-Adapter (Zhang et al., 2021a); both inject a lightweight bottleneck architecture after the image encoder or text encoder and perform residual-style feature blending with the original pre-trained embeddings. However, most previous studies have focused on improving models' robustness to covariate shifts, without effectively addressing OOD detection.
+
+Note that several studies, such as CoCoOp (Zhou et al., 2022), PromptSRC (Khattak et al., 2023b), SHIP (Wang et al., 2023c), and PromptKD (Li et al., 2024b), have explored the base-to-new generalization ability of VLMs, aiming to improve classification performance on unseen classes. However, we emphasize that our approach differs from these methods in its handling of classes that were unseen during training. While these methods focus on classifying unseen classes, our method is designed specifically to detect these unseen classes without performing classification, ensuring safety in real-world applications. Although several studies (Bai et al., 2023; Zhu et al., 2024) have handled OOD generalization and OOD detection simultaneously, these approaches are typically limited to traditional vision models or have been evaluated with a narrow set of post-hoc functions for OOD detection. Thus, when fine-tuning VLMs for downstream tasks, the challenge of improving the models' generalization ability to closed-set OOD data while simultaneously detecting open-set OOD classes that were unseen during fine-tuning remains largely underexplored.
+
+OOD detection methods There are multiple lines of work addressing OOD detection, such as anomaly detection (Zong et al., 2018; Liang et al., 2017b), outlier detection (Bevandić et al., 2021; Saito et al., 2021), and open-set OOD recognition (Kong and Ramanan, 2021; Geng et al., 2020; Scheirer et al., 2014). These methods can be categorized into two main groups: post hoc methods (Zhu et al., 2022; Liu et al., 2020; Sun et al., 2021; Hendrycks and Gimpel, 2016; Liang et al., 2017b; Wang et al., 2022; Sun et al., 2022) and training-time regularization (Narayanaswamy et al., 2023; Bai et al., 2023; Malinin and Gales, 2018; Du et al., 2022b,a; Ming et al., 2022b). The former typically resort to post-hoc functions to recognize open-set without altering the DNN training process, like density estimation (Zhang et al., 2020), uncertainty modeling (Gal and Ghahramani, 2016), and input image reconstruction (Pidhorskyi et al., 2018; Sun et al., 2020). On the other hand, regularization-based methods aim to rectify the training process, compelling models to provide predictions with lower confidence. Recent studies (Zhou et al., 2024; Bai et al., 2024; Nie et al., 2024; Li et al., 2024a; Wang et al., 2023a; Miyai et al., 2024b; Zhang et al., 2024; Jiang et al., 2024; Ming et al., 2022a; Zhang and Zhang, 2024; Ming and Li, 2024) have explored the capability of zero-shot or few-shot OOD detection based on VLMs. More details about the CLIP-based OOD detection methods can be seen in these surveys (Miyai et al., 2024a; Li et al., 2025). However, while these studies primarily focus on handling semantic-shifted datasets, our research aims to simultaneously improve both OOD detection for semantic shifts and OOD generalization for covariate shifts, enabling more effective handling of diverse OOD datasets in real-world scenarios.
+
+Full-spectrum OOD detection. Recent works, such as OpenOOD v1.5 (Yang et al., 2022; Zhang et al., 2023a) and SEM (Yang et al., 2023), have taken into account both covariate shifts and semantic shifts and introduced full-spectrum OOD (FS-OOD) detection, which considers both detecting semantic shifts and being tolerant to covariate shifts. OpenOOD v1.5 extends benchmark evaluations to large-scale datasets (e.g., ImageNet) and foundation models (e.g., CLIP and DINOv2), broadening its scope to study FS-OOD detection under both types of distribution shifts. While the FS-OOD benchmark assesses OOD detection performance across diverse distribution types, it does not primarily focus on improving VLMs' classification accuracy (i.e., OOD generalization) under covariate shifts. Moreover, in practical applications, there is a strong motivation to create models that can not only detect semantically shifted OOD inputs but also generalize to covariate-shifted data. Within CLIP-based methods, OOD detection and generalization are often discussed in separate contexts, resulting in a trade-off between detection and generalization performance (Miyai et al., 2024a). In this paper, we focus on improving VLMs' OOD generalization ability on covariate-shifted OOD data while simultaneously detecting semantic-shifted samples. To substantiate the effectiveness of the proposed method for both tasks, we conduct an empirical analysis of distinct categories of competitive methods, including the CLIP-based zero-shot OOD detection method, lightweight tuning-based OOD detection methods, and lightweight tuning-based OOD generalization methods.
+
+# B Proof of Theorem 3.2
+
+We first provide the proof for property (1) in Theorem 3.2. In this paper, we re-align the vision-language modalities by disrupting the top-$c$ maximum cosine similarities to a low value. Let $\hat{y}_1\coloneqq \operatorname{argmax}_{i\in [K]}s_i(\mathbf{x}^{\prime})$ denote the index of the maximum cosine similarity for an OOD input $\mathbf{x}^{\prime}$. We then reduce the maximum cosine similarity to zero according to Equation 1 and denote the difference between the original and re-aligned cosine similarity as $s_{\hat{y}_1}(\mathbf{x}^{\prime}) - \tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) = \mu(\mathbf{x}^{\prime})$.
+
+Without loss of generality, we set $c = 1$. The energy score before re-alignment is denoted as $E_0(\mathbf{x}^{\prime})$, which is calculated as:
+
+$$
+E _ {0} \left(\mathbf {x} ^ {\prime}\right) = - \log \sum_ {i = 1} ^ {K} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau} \tag {9}
+$$
+
+The energy score after re-alignment is denoted as $E_{1}(\mathbf{x}^{\prime})$ , which is calculated as:
+
+$$
+E _ {1} \left(\mathbf {x} ^ {\prime}\right) = - \log \left[ \sum_ {i \neq \hat {y} _ {1}} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau} + e ^ {\tilde {s} _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau} \right] \tag {10}
+$$
+
+Given the ID class labels $\mathcal{Y}_{\mathrm{in}}$, the newly proposed OOD score ($\Delta$Energy) is defined as:
+
+$$
+\begin{array}{l}
+S_{\Delta \mathrm{Energy}}\left(\mathbf{x}^{\prime}; \mathcal{Y}_{\mathrm{in}}\right) = E_1\left(\mathbf{x}^{\prime}\right) - E_0\left(\mathbf{x}^{\prime}\right) \\
+= -\log \left[\sum_{i \neq \hat{y}_1} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}\right] + \log \sum_{i = 1}^{K} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} \\
+= \log \frac{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} + e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}} \\
+= \log \left[1 + \frac{e^{s_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau} - e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}}\right] \tag{11} \\
+= \log \left[1 + \frac{e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{(s_{\hat{y}_1}(\mathbf{x}^{\prime}) - \mu(\mathbf{x}^{\prime})) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{(s_{\hat{y}_1}(\mathbf{x}^{\prime}) - \mu(\mathbf{x}^{\prime})) / \tau}}\right] \\
+= \log \left[1 + \frac{e^{\mu(\mathbf{x}^{\prime}) / \tau} - 1}{\frac{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau}}{e^{(s_{\hat{y}_1}(\mathbf{x}^{\prime}) - \mu(\mathbf{x}^{\prime})) / \tau}} + 1}\right]
+\end{array}
+$$
+
+If we set $\tilde{s}_{\hat{y}_1}(\mathbf{x}') = 0$, i.e., $\mu(\mathbf{x}') = s_{\hat{y}_1}(\mathbf{x}')$, we have:
+
+$$
+S_{\Delta \mathrm{Energy}}\left(\mathbf{x}^{\prime}; \mathcal{Y}_{\mathrm{in}}\right) = E_1\left(\mathbf{x}^{\prime}\right) - E_0\left(\mathbf{x}^{\prime}\right) = \log \left[1 + \frac{e^{s_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau} - 1}{\sum_{i \neq \hat{y}_1} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + 1}\right] \tag{12}
+$$
+
+Given the ID sample $\mathbf{x}_{\mathrm{ID}}$ and OOD sample $\mathbf{x}_{\mathrm{OOD}}$, we assume that the sum of non-maximal cosine similarities is similar for ID and OOD samples, i.e., $\sum_{i\neq \hat{y}_1}e^{s_i(\mathbf{x}_{\mathrm{ID}}) / \tau}\approx \sum_{i\neq \hat{y}_1}e^{s_i(\mathbf{x}_{\mathrm{OOD}}) / \tau}$, which is reasonable under uniform similarity distributions as discussed in prior work (Ming et al., 2022a). Since $S_{\Delta \mathrm{Energy}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}})$ is a monotonically increasing function of the maximum similarity $s_{\hat{y}_1}(\mathbf{x})$, the energy change for ID data is greater than that for OOD data.
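+As a numerical sanity check of property (1), the sketch below computes $E_0$, $E_1$, and the resulting $\Delta$Energy score for the $c = 1$ case; the similarity vectors and temperature are illustrative values, not numbers from the paper.

```python
import numpy as np

def delta_energy(sims, tau=0.01):
    """Delta-Energy score (Eqs. 9-12) with c = 1: the energy after
    re-aligning the maximum cosine similarity to zero, minus the
    original energy."""
    sims = np.asarray(sims, dtype=float)
    e0 = -np.log(np.sum(np.exp(sims / tau)))       # Eq. 9: energy before re-alignment
    realigned = sims.copy()
    realigned[np.argmax(sims)] = 0.0               # disrupt the top similarity to zero
    e1 = -np.log(np.sum(np.exp(realigned / tau)))  # Eq. 10: energy after re-alignment
    return e1 - e0                                 # Eq. 12

# ID inputs carry a larger maximum similarity, so their energy change is larger.
sims_id  = [0.30, 0.05, 0.04, 0.03]   # hypothetical ID similarity profile
sims_ood = [0.12, 0.05, 0.04, 0.03]   # hypothetical OOD profile with the same tail
print(delta_energy(sims_id) > delta_energy(sims_ood))  # True
```

+Because `delta_energy` equals $\log\big(\sum_{i} e^{s_i/\tau} / (\sum_{i\neq \hat{y}_1} e^{s_i/\tau} + 1)\big)$, it agrees exactly with the closed form in Equation 12.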
+
+We then provide the more detailed results and proof for property (2) in Theorem 3.2 as follows:
+
+Theorem B.1. [Difference Amplification between ID and OOD by $\Delta$ Energy] Let $S_{\Delta \mathrm{Energy}}(\mathbf{x},\mathcal{Y}_{in})$ and $S_{\Delta \mathrm{MCM}}(\mathbf{x},\mathcal{Y}_{in})$ be the OOD score for any sample $\mathbf{x}$ by the $\Delta$ Energy and MCM method, respectively, where:
+
+$$
+S_{\Delta \mathrm{Energy}}(\mathbf{x}; \mathcal{Y}_{\mathrm{in}}) = E_1(\mathbf{x}) - E_0(\mathbf{x}) = \log \left[1 + \frac{e^{s_{\hat{y}_1}(\mathbf{x}) / \tau} - 1}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}) / \tau} + 1}\right] \tag{13}
+$$
+
+$$
+S _ {\Delta \mathrm {M C M}} (\mathbf {x}; \mathcal {Y} _ {\mathrm {i n}}) = \frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x}) / \tau}}{e ^ {s _ {\hat {y} _ {1}} (\mathbf {x}) / \tau} + \sum_ {i \neq \hat {y} _ {1}} e ^ {s _ {i} (\mathbf {x}) / \tau}} \tag {14}
+$$
+
+Suppose that the maximum cosine similarity for an ID sample $\mathbf{x}_{ID}$ is greater than that of an OOD sample $\mathbf{x}_{OOD}$ , i.e., $s_{\hat{y}_1}(\mathbf{x}_{ID}) > s_{\hat{y}_1}(\mathbf{x}_{OOD})$ . We also assume that the sum of non-maximal cosine similarities is similar for ID and OOD samples, i.e., $\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}_{ID}) / \tau} \approx \sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}_{OOD}) / \tau}$ , which is reasonable under uniform similarity distributions as discussed in prior work (Ming et al., 2022a). Then the difference between ID and OOD under $\Delta$ Energy exceeds that of the MCM method:
+
+$$
+d_{\Delta \mathrm{Energy}} > d_{\mathrm{MCM}}
+$$
+
+where $d_{\Delta \mathrm{Energy}} = S_{\Delta \mathrm{Energy}}(\mathbf{x}_{ID};\mathcal{Y}_{\mathrm{in}}) - S_{\Delta \mathrm{Energy}}(\mathbf{x}_{OOD};\mathcal{Y}_{\mathrm{in}})$ , and $d_{\mathrm{MCM}} = S_{\Delta \mathrm{MCM}}(\mathbf{x}_{ID};\mathcal{Y}_{\mathrm{in}}) - S_{\Delta \mathrm{MCM}}(\mathbf{x}_{OOD};\mathcal{Y}_{\mathrm{in}})$ . Thus, our $\Delta \mathrm{Energy}$ exhibits strictly stronger separability between ID and OOD distributions.
+
+Proof: Let $s = s_{\hat{y}_1}(\mathbf{x}) / \tau$. We simplify the $\Delta$Energy score as $S_{\Delta \mathrm{Energy}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}}) = \log (1 + \frac{e^s - 1}{b + 1})$ and the MCM score as $S_{\Delta \mathrm{MCM}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}}) = \frac{e^s}{e^s + b}$, where $b = \sum_{i\neq \hat{y}_1}e^{s_i(\mathbf{x}) / \tau}$.
+
+First, we analyze the gradient with respect to $s_{\hat{y}_1}$ :
+
+$$
+\frac{\partial S_{\Delta \mathrm{Energy}}}{\partial s_{\hat{y}_1}} = \frac{1}{\tau} \cdot \frac{e^{s_{\hat{y}_1}(\mathbf{x}) / \tau}}{e^{s_{\hat{y}_1}(\mathbf{x}) / \tau} + b} = \frac{1}{\tau} S_{\mathrm{MCM}} \tag{15}
+$$
+
+$$
+\frac{\partial S_{\mathrm{MCM}}}{\partial s_{\hat{y}_1}} = \frac{1}{\tau} \cdot \frac{e^{s_{\hat{y}_1}(\mathbf{x}) / \tau}\, b}{\left(e^{s_{\hat{y}_1}(\mathbf{x}) / \tau} + b\right)^2} = \frac{1}{\tau} S_{\mathrm{MCM}} \left(1 - S_{\mathrm{MCM}}\right) \tag{16}
+$$
+
+Since $0 < S_{\mathrm{MCM}} < 1$, we have $\frac{\partial S_{\Delta \mathrm{Energy}}}{\partial s_{\hat{y}_1}} > \frac{\partial S_{\mathrm{MCM}}}{\partial s_{\hat{y}_1}}$ at every value of $s_{\hat{y}_1}$; integrating this gradient gap over $[s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}}), s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}})]$ then yields $d_{\Delta \mathrm{Energy}} > d_{\mathrm{MCM}}$.
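+This gap can also be checked numerically. The sketch below evaluates Equations 13 and 14 directly; the shared non-maximal mass $b$ and the maximum similarities are illustrative values, not numbers from the paper's experiments.

```python
import math

def score_delta_energy(s_max, b, tau=1.0):
    # Eq. 13, with b = sum of non-maximal exponentiated similarities
    return math.log(1 + (math.exp(s_max / tau) - 1) / (b + 1))

def score_mcm(s_max, b, tau=1.0):
    # Eq. 14: softmax probability assigned to the top class
    e = math.exp(s_max / tau)
    return e / (e + b)

b = 5.0                    # assumed equal for ID and OOD (uniform-similarity assumption)
s_id, s_ood = 2.0, 1.5     # hypothetical maximum similarities with s_id > s_ood

d_energy = score_delta_energy(s_id, b) - score_delta_energy(s_ood, b)
d_mcm = score_mcm(s_id, b) - score_mcm(s_ood, b)
print(d_energy > d_mcm)    # True: Delta-Energy widens the ID/OOD gap
```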
+
+Next, we consider the large-scale setting where the number of ID classes $K$ is large. Recall that $b = \sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}) / \tau}$; since $b$ grows with $K$, it becomes significantly larger than the maximum similarity term, i.e., $b \gg e^s$. We then perform Taylor expansions of $S_{\Delta \mathrm{Energy}}(\mathbf{x}; \mathcal{Y}_{\mathrm{in}})$ and $S_{\Delta \mathrm{MCM}}(\mathbf{x}; \mathcal{Y}_{\mathrm{in}})$.
+
+Taylor expansion of $S_{\Delta \mathrm{Energy}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}})$: For $b\gg e^s$ (and $e^s \gg 1$), we write $\epsilon = \frac{e^s - 1}{b + 1} \approx \frac{e^s}{b}$ and apply $\log (1 + \epsilon)\approx \epsilon -\frac{\epsilon^2}{2}$, which gives $S_{\Delta \mathrm{Energy}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}})\approx \frac{e^s}{b} -\frac{e^{2s}}{2b^2}$.
+
+Taylor expansion of $S_{\Delta \mathrm{MCM}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}})$: Similarly, expanding the geometric series $\frac{1}{1 + e^s / b} \approx 1 - \frac{e^s}{b}$ yields $S_{\Delta \mathrm{MCM}}(\mathbf{x};\mathcal{Y}_{\mathrm{in}}) = \frac{e^s}{e^s + b} = \frac{e^s}{b} \cdot \frac{1}{1 + e^s / b} \approx \frac{e^s}{b} - \frac{e^{2s}}{b^2}$.
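+Both second-order expansions can be sanity-checked numerically. The sketch below assumes $e^{s} \gg 1$ and $b \gg e^{s}$, with illustrative values standing in for a large label set:

```python
import math

tau, b = 1.0, 1e5          # b grows with the number of classes K
s = 5.0                    # so that e^{s/tau} >> 1 while b >> e^{s/tau}
es = math.exp(s / tau)

exact_energy  = math.log(1 + (es - 1) / (b + 1))
approx_energy = es / b - es**2 / (2 * b**2)   # log(1 + eps) ~ eps - eps^2 / 2

exact_mcm  = es / (es + b)
approx_mcm = es / b - es**2 / b**2            # geometric-series expansion of 1/(1 + es/b)

print(abs(exact_energy - approx_energy) < 1e-4)  # True
print(abs(exact_mcm - approx_mcm) < 1e-4)        # True
```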
+
+Then we can compute the difference between ID and OOD under the $\Delta$ Energy and MCM score, respectively:
+
+$$
+d_{\Delta \mathrm{Energy}} \approx \left(\frac{e^{s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) / \tau}}{b} - \frac{e^{s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}}) / \tau}}{b}\right) - \frac{1}{2} \left(\frac{e^{2 s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) / \tau}}{b^2} - \frac{e^{2 s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}}) / \tau}}{b^2}\right) \tag{17}
+$$
+
+$$
+d _ {\mathrm {M C M}} \approx \left(\frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {I D}}) / \tau}}{b} - \frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {O O D}}) / \tau}}{b}\right) - \left(\frac {e ^ {2 s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {I D}}) / \tau}}{b ^ {2}} - \frac {e ^ {2 s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {O O D}}) / \tau}}{b ^ {2}}\right) \tag {18}
+$$
+
+which leads to the following property:
+
+$$
+d_{\Delta \mathrm{Energy}} - d_{\mathrm{MCM}} \approx \frac{e^{2 s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) / \tau} - e^{2 s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}}) / \tau}}{2 b^2} > 0 \quad \left(\text{since } s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) > s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}})\right) \tag{19}
+$$
+
+Consider large-scale hard OOD detection scenarios where $s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) - s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}}) \ll 1$ (a small maximum similarity gap) and the number of classes $K$ is large, so that $K \gg s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) > s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}})$. As demonstrated in Equation 17 and Equation 18, $\Delta$Energy's logarithmic form introduces a less aggressive decay in the higher-order term compared to MCM, preserving discriminability. For large $K$, MCM's second-order separability term diminishes quadratically $(\mathcal{O}(1 / K^2))$, while $\Delta$Energy's decays more slowly due to its halved second-order coefficient. Thus, $\Delta$Energy is better suited to large-scale OOD detection where $K$ is large and the maximum similarity gap is small.
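+The sign of the gap in Equation 19 can be verified directly for increasingly large non-maximal masses $b$ (which grow roughly linearly with $K$); the values below are illustrative:

```python
import math

def eq19_gap(s_id, s_ood, b, tau=1.0):
    # Approximate d_DeltaEnergy - d_MCM from Eq. 19
    return (math.exp(2 * s_id / tau) - math.exp(2 * s_ood / tau)) / (2 * b ** 2)

s_id, s_ood = 5.0, 4.9               # hard OOD: small maximum-similarity gap
for b in (1e3, 1e4, 1e5):            # b scales roughly linearly with K
    gap = eq19_gap(s_id, s_ood, b)
    print(gap > 0)                   # True: Delta-Energy keeps a strict edge
```

+The gap shrinks as $\mathcal{O}(1/b^2)$ but never changes sign while $s_{\hat{y}_1}(\mathbf{x}_{\mathrm{ID}}) > s_{\hat{y}_1}(\mathbf{x}_{\mathrm{OOD}})$.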
+
+# C Proof of Theorem 3.3
+
+In Theorem 3.3, we provide formal guarantees that the proposed $\Delta$ Energy can provably reduce the false positive rate (FPR) compared to the widely-used VLM-based OOD detection method MCM (Ming et al., 2022a). Before the proof of Theorem 3.3, we first introduce the MCM method as follows:
+
+For any test input $\mathbf{x}'$, we calculate the label-wise matching score based on the cosine similarity between the image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x}')$ and the concept vector (text feature) $\mathbf{z}_{\mathbf{T}}(t_i)$: $s_i(\mathbf{x}') = \mathbf{z}_{\mathbf{I}}(\mathbf{x}') \cdot \mathbf{z}_{\mathbf{T}}(t_i)$. Note that both the image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x}')$ and the concept vector $\mathbf{z}_{\mathbf{T}}(t_i)$ are normalized features in this paper. The maximum concept matching (MCM) score is computed as follows:
+
+$$
+S _ {\mathbf {M C M}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {i n}}) = \max _ {i} \frac {e ^ {s _ {i} (\mathbf {x} ^ {\prime}) / \tau}}{\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} ^ {\prime}) / \tau}},
+$$
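+As a minimal sketch of this score, with random unit-norm vectors standing in for the normalized CLIP image and text features (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, K, d = 0.01, 5, 512

z_img = rng.normal(size=d)
z_img /= np.linalg.norm(z_img)                         # normalized image feature z_I(x')
z_txt = rng.normal(size=(K, d))
z_txt /= np.linalg.norm(z_txt, axis=1, keepdims=True)  # normalized concept vectors z_T(t_i)

s = z_txt @ z_img                                      # cosine similarities s_i(x')
softmax = np.exp(s / tau) / np.sum(np.exp(s / tau))    # temperature-scaled softmax
s_mcm = np.max(softmax)                                # MCM score
print(0.0 < s_mcm <= 1.0)                              # True
```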
+
+Under Assumption C.1, Ming et al. (2022a) provided formal guarantees for MCM showing that using softmax scaling can provably reduce the false positive rate (FPR) compared to the score without softmax, as stated in Theorem C.2.
+
+Assumption C.1. [(Ming et al., 2022a)] Let $z \coloneqq 1\{y \in \mathcal{Y}_{\mathrm{in}}\}$ . $Q_{\mathbf{x}}$ denotes the out-of-distribution $\mathbb{P}_{\mathbf{x} | z = 0}$ (marginal distribution of $\mathbf{x}$ conditioned on $z = 0$ ). Assume $\exists \delta > 0$ such that
+
+$$
+Q _ {\mathbf {x}} \left(\frac {1}{K - 1} \sum_ {i \neq \hat {y} _ {1}} \left[ s _ {\hat {y} _ {2}} (\mathbf {x}) - s _ {i} (\mathbf {x}) \right] \leqslant \delta\right) = 1,
+$$
+
+where $\hat{y}_1 \coloneqq \operatorname{argmax}_{i \in [K]} s_i(\mathbf{x})$ and $\hat{y}_2 \coloneqq \operatorname{argmax}_{i \neq \hat{y}_1, i \in [K]} s_i(\mathbf{x})$ denote the indices of the largest and second-largest cosine similarities for an OOD input $\mathbf{x}$ .
+
+Theorem C.2. [(Ming et al., 2022a)] Given a task with ID label set $\mathcal{Y}_{\mathrm{in}} = \{y_1, y_2, \dots, y_K\}$ and a pre-trained VLM. If $Q_{\mathbf{x}}$ satisfies Assumption C.1, then there exists a constant $T = \frac{\lambda(K - 1)(\lambda^{\mathrm{wo}} + \delta - s_{\hat{y}_2})}{K\lambda - 1}$ such that for any temperature $\tau > T$, we have
+
+$$
+\mathrm {F P R} (\tau , \lambda) \leq \mathrm {F P R} ^ {\mathrm {w o}} (\lambda^ {\mathrm {w o}}),
+$$
+
+where $\mathrm{FPR}(\tau, \lambda)$ is the false positive rate based on softmax scaling with temperature $\tau$ and detection threshold $\lambda$ ; $\mathrm{FPR}^{\mathrm{wo}}(\lambda^{\mathrm{wo}})$ is the false positive rate without softmax scaling based on threshold $\lambda^{\mathrm{wo}}$ .
+
+Now we present the proof of Theorem 3.3 as follows:
+
+Proof: The newly proposed OOD score $(\Delta \text{Energy})$ is defined as:
+
+$$
+\begin{array}{l}
+S_{\Delta \mathrm{Energy}}(\mathbf{x}^{\prime}; \mathcal{Y}_{\mathrm{in}}) = E_1 - E_0 \\
+= -\log \left[\sum_{i \neq \hat{y}_1} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}\right] + \log \sum_{i = 1}^{K} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} \\
+= \log \frac{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} + e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}} \\
+= \log \left[1 + \frac{e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}\right] \\
+\leq \frac{e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i \neq \hat{y}_1} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}} = \frac{e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i = 1}^{K} e^{s_i(\mathbf{x}^{\prime}) / \tau} + e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}} \\
+\leq \frac{e^{s_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}}{\sum_{i = 1}^{K} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + 2 e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau} - e^{s_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}} \tag{20}
+\end{array}
+$$
+
+When $c \in \{2, \dots, K\}$ , without loss of generality, we take $c = 2$ as example and we have:
+
+$$
+\begin{array}{l}
+S_{\Delta \mathrm{Energy}}\left(\mathbf{x}^{\prime}; \mathcal{Y}_{\mathrm{in}}\right) = E_1 - E_0 \\
+= \frac{1}{2} \left[-\log \left[\sum_{i \neq \hat{y}_1} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + e^{\tilde{s}_{\hat{y}_1}\left(\mathbf{x}^{\prime}\right) / \tau}\right] + \log \sum_{i = 1}^{K} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau}\right] \\
+\quad + \frac{1}{2} \left[-\log \left[\sum_{i \neq \hat{y}_2} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau} + e^{\tilde{s}_{\hat{y}_2}\left(\mathbf{x}^{\prime}\right) / \tau}\right] + \log \sum_{i = 1}^{K} e^{s_i\left(\mathbf{x}^{\prime}\right) / \tau}\right] \tag{21} \\
+\leq \frac{1}{2} \frac{e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i = 1}^{K} e^{s_i(\mathbf{x}^{\prime}) / \tau} + 2 e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau} - e^{s_{\hat{y}_1}(\mathbf{x}^{\prime}) / \tau}} + \frac{1}{2} \frac{e^{s_{\hat{y}_2}(\mathbf{x}^{\prime}) / \tau}}{\sum_{i = 1}^{K} e^{s_i(\mathbf{x}^{\prime}) / \tau} + 2 e^{\tilde{s}_{\hat{y}_2}(\mathbf{x}^{\prime}) / \tau} - e^{s_{\hat{y}_2}(\mathbf{x}^{\prime}) / \tau}}
+\end{array}
+$$
+
+For $c = 1$, if $2e^{\tilde{s}_{\hat{y}_1}(\mathbf{x}') / \tau} - e^{s_{\hat{y}_1}(\mathbf{x}') / \tau} \geq 0$, i.e., $s_{\hat{y}_1}(\mathbf{x}') - \tilde{s}_{\hat{y}_1}(\mathbf{x}') \leq \tau \ln 2$, we have
+
+$$
+S _ {\Delta \text {E n e r g y}} \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\text {i n}}\right) \leq \frac {e ^ {s _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau}}{\sum_ {i = 1} ^ {K} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau} + 2 e ^ {\tilde {s} _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau} - e ^ {s _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau}} \leq \frac {e ^ {s _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau}}{\sum_ {i = 1} ^ {K} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau}} = S _ {\mathbf {M C M}} \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\text {i n}}\right)
+$$
+
+For $c = 2$, we have $s_{\hat{y}_2}(\mathbf{x}') \leq s_{\hat{y}_1}(\mathbf{x}')$. If $2e^{\tilde{s}_{\hat{y}_2}(\mathbf{x}') / \tau} - e^{s_{\hat{y}_2}(\mathbf{x}') / \tau} \geq 0$, i.e., $s_{\hat{y}_2}(\mathbf{x}') - \tilde{s}_{\hat{y}_2}(\mathbf{x}') \leq \tau \ln 2$, we have
+
+$$
+\begin{array}{l} \frac {e ^ {s _ {\hat {y} _ {2}} \left(\mathbf {x} ^ {\prime}\right) / \tau}}{\sum_ {i = 1} ^ {K} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau} + 2 e ^ {\tilde {s} _ {\hat {y} _ {2}} \left(\mathbf {x} ^ {\prime}\right) / \tau} - e ^ {s _ {\hat {y} _ {2}} \left(\mathbf {x} ^ {\prime}\right) / \tau}} \leq \frac {e ^ {s _ {\hat {y} _ {1}} \left(\mathbf {x} ^ {\prime}\right) / \tau}}{\sum_ {i = 1} ^ {K} e ^ {s _ {i} \left(\mathbf {x} ^ {\prime}\right) / \tau} + 2 e ^ {\tilde {s} _ {\hat {y} _ {2}} \left(\mathbf {x} ^ {\prime}\right) / \tau} - e ^ {s _ {\hat {y} _ {2}} \left(\mathbf {x} ^ {\prime}\right) / \tau}} \tag {22} \\ \leq \frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} ^ {\prime}) / \tau}}{\sum_ {i = 1} ^ {K} e ^ {s _ {i} (\mathbf {x} ^ {\prime}) / \tau}} = S _ {\mathbf {MCM}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}) \\ \end{array}
+$$
+
+which leads to
+
+$$
+S _ {\Delta \text {E n e r g y}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\text {i n}}) \leq S _ {\text {M C M}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\text {i n}})
+$$
+
+Let the OOD detection functions be represented by:
+
+$$
+G \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) = \left\{ \begin{array}{l l} 1, & S _ {\Delta \mathrm {Energy}} \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) \geq \lambda \\ 0, & S _ {\Delta \mathrm {Energy}} \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) < \lambda , \end{array} \right. \tag {23}
+$$
+
+then we have
+
+$$
+\begin{array}{l} \operatorname {FPR} ^ {\Delta \mathrm {Energy}} (\tau , \lambda) = \mathbb {P} \left(G \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) = 1 \mid z = 0\right) \\ = Q _ {\mathbf {x} ^ {\prime}} \left(G \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) = 1\right) \\ = Q _ {\mathbf {x} ^ {\prime}} \left(S _ {\Delta \mathrm {Energy}} \left(\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}\right) \geq \lambda\right) \tag {24} \\ \leq Q _ {\mathbf {x} ^ {\prime}} \left(S _ {\mathbf {MCM}} (\mathbf {x} ^ {\prime}; \mathcal {Y} _ {\mathrm {in}}) \geq \lambda\right) = \operatorname {FPR} ^ {\mathrm {MCM}} (\tau , \lambda) \\ \end{array}
+$$
+
+Thus, we complete the proof.
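
As a quick numerical sanity check of the bound above (a sketch, not part of the proof): the snippet below draws random logits satisfying the condition $s_{\hat{y}_1}(\mathbf{x}') - \tilde{s}_{\hat{y}_1}(\mathbf{x}') \leq \tau \ln 2$ and verifies that the $\Delta$Energy score (case $c=1$) never exceeds the MCM score. The helper name `scores` is illustrative.

```python
import numpy as np

def scores(s, s_tilde_top, tau=1.0):
    """Return (S_DeltaEnergy for c=1, S_MCM) for one sample's logits s."""
    y1 = int(np.argmax(s))                    # top-1 ID class
    e = np.exp(s / tau)
    E0 = -np.log(e.sum())                     # energy before masking
    e_masked = e.copy()
    e_masked[y1] = np.exp(s_tilde_top / tau)  # top-1 logit after masking
    E1 = -np.log(e_masked.sum())              # energy after masking
    return E1 - E0, e[y1] / e.sum()

rng = np.random.default_rng(0)
tau = 1.0
for _ in range(10000):
    s = rng.normal(size=10)
    y1 = int(np.argmax(s))
    # masked top-1 logit: lower than s[y1] by at most tau * ln 2 (the condition)
    s_tilde = s[y1] - rng.uniform(0.0, tau * np.log(2.0))
    d_energy, s_mcm = scores(s, s_tilde, tau)
    assert d_energy <= s_mcm + 1e-12
```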
+
+# D Proof of Theorem 3.4
+
+Proof: To further enlarge the energy change between the masked VLM and unmasked VLM for closed-set classes, we propose to minimize the following loss:
+
+$$
+\mathcal {L} _ {\Delta E} = \frac {1}{N} \sum_ {i = 1} ^ {N} E _ {2} \left(\mathbf {x} _ {\mathbf {i}}\right) - E _ {0} \left(\mathbf {x} _ {\mathbf {i}}\right) \tag {25}
+$$
+
+where $E_{2}(\mathbf{x}_{i})$ is the energy score for $\mathbf{x}_{i}$ after masking the image feature, which is formally calculated as:
+
+$$
+E _ {2} (\mathbf {x} _ {\mathbf {i}}) = - \log \sum_ {j = 1} ^ {K} e ^ {s _ {j} ^ {\prime} (\mathbf {x} _ {\mathbf {i}}) / \tau}
+$$
+
+$$
+s _ {j} ^ {\prime} \left(\mathbf {x} _ {i}\right) = \left(\mathbf {z} _ {\mathbf {I}} \left(\mathbf {x} _ {i}\right) \odot \mathbf {m} ^ {\prime} \left(\mathbf {x} _ {i}\right)\right) \cdot \mathbf {z} _ {\mathbf {T}} \left(t _ {j}\right)
+$$
+
+where $\mathbf{m}'(\mathbf{x_i})$ is the mask that retains the top $p$-proportion of elements in $\mathbf{z_I}(\mathbf{x_i}) \odot \mathbf{h_1}(\mathbf{x_i})$.
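
The masking step and the resulting masked energy $E_2$ can be sketched as follows (an illustrative numpy version with assumed helper names such as `top_p_mask`; the real implementation operates on CLIP image and text features):

```python
import numpy as np

def top_p_mask(z_img, h_top1, p):
    """Binary mask m' keeping the top p-proportion of entries of z_I ⊙ h_1."""
    z_P = z_img * h_top1                      # element-wise product
    k = max(1, int(round(p * z_P.size)))      # number of entries kept
    m = np.zeros_like(z_P)
    m[np.argsort(z_P)[-k:]] = 1.0
    return m

def masked_energy(z_img, z_texts, p=0.6, tau=1.0):
    """E_2: energy of the masked image feature against all K text features."""
    h_top1 = z_texts[np.argmax(z_texts @ z_img)]  # top-1 similar text feature
    m = top_p_mask(z_img, h_top1, p)
    s_masked = z_texts @ (z_img * m)              # s'_j for j = 1..K
    return -np.log(np.exp(s_masked / tau).sum())
```

Here the mask is recomputed per sample from the current top-1 text feature, mirroring steps 4-6 of Algorithm 1.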
+
+Now we prove that $-\mathcal{L}_{\Delta E}$ is a lower bound of $\sum_{i=1}^{N} \Delta \mathrm{Energy}(\mathbf{x}_i)$ . Here, we represent the optimization term for $\mathbf{x}_i$ as $\mathcal{L}_{\Delta E}(\mathbf{x}_i) := E_2(\mathbf{x}_i) - E_0(\mathbf{x}_i)$ . Then the relationship between $\Delta \mathrm{Energy}(\mathbf{x}_i)$ and $\mathcal{L}_{\Delta E}(\mathbf{x}_i)$ can be formulated as:
+
+$$
+\begin{array}{l} e ^ {\Delta \mathrm {Energy} (\mathbf {x} _ {\mathbf {i}})} - e ^ {- \mathcal {L} _ {\Delta E} (\mathbf {x} _ {\mathbf {i}})} = \frac {\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau}}{\sum_ {j \neq \hat {y} _ {1}} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau} + e ^ {\tilde {s} _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau}} - \frac {\sum_ {j \neq \hat {y} _ {1}} e ^ {s _ {j} ^ {\prime} (\mathbf {x} _ {\mathbf {i}}) / \tau} + e ^ {s _ {\hat {y} _ {1}} ^ {\prime} (\mathbf {x} _ {\mathbf {i}}) / \tau}}{\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau}} \\ = \frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau} - e ^ {\tilde {s} _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau}}{\sum_ {j \neq \hat {y} _ {1}} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau} + e ^ {\tilde {s} _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau}} - \frac {\sum_ {j = 1} ^ {K} \left[ e ^ {s _ {j} ^ {\prime} (\mathbf {x} _ {\mathbf {i}}) / \tau} - e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau} \right]}{\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau}} \tag {26} \\ \geq \frac {e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau} - e ^ {\tilde {s} _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau}}{\sum_ {j \neq \hat {y} _ {1}} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau} + e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathbf {i}}) / \tau}} - \frac {\sum_ {j = 1} ^ {K} \left[ e ^ {s _ {j} ^ {\prime} (\mathbf {x} _ {\mathbf {i}}) / \tau} - e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau} \right]}{\sum_ {j = 1} ^ {K} e ^ {s _ {j} (\mathbf {x} _ {\mathbf {i}}) / \tau}} \\ \end{array}
+$$
+
+where the inequality in the last line follows from $\tilde{s}_{\hat{y}_1}(\mathbf{x_i})\leq s_{\hat{y}_1}(\mathbf{x_i})$.
+
+Under the condition that $e^{s_{\hat{y}_1}(\mathbf{x_i}) / \tau} - e^{\tilde{s}_{\hat{y}_1}(\mathbf{x_i}) / \tau}\geq (e^{\varepsilon_E} - 1)\sum_{j = 1}^{K}e^{s_j'(\mathbf{x_i}) / \tau} = (e^{\varepsilon_E} - 1)e^{-E_2(\mathbf{x_i})}$ and that $\mathcal{L}_{\Delta E}\leq \varepsilon_{E}$ , it is straightforward to derive the following inequality:
+
+$$
+e ^ {s _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {i}}) / \tau} - e ^ {\tilde {s} _ {\hat {y} _ {1}} (\mathbf {x} _ {\mathrm {i}}) / \tau} \geq \sum_ {j = 1} ^ {K} [ e ^ {s _ {j} ^ {\prime} (\mathbf {x} _ {\mathrm {i}}) / \tau} - e ^ {s _ {j} (\mathbf {x} _ {\mathrm {i}}) / \tau} ]
+$$
+
+Thus we have $\Delta \mathrm{Energy}(\mathbf{x_i})\geq -\mathcal{L}_{\Delta E}(\mathbf{x_i})$, which completes the proof.
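
A numerical spot-check of this conclusion (a sketch, not part of the proof): for random logits, whenever the condition above holds (with $\varepsilon_E \to 0$ and $\tau = 1$), $\Delta\mathrm{Energy}(\mathbf{x_i}) \geq -\mathcal{L}_{\Delta E}(\mathbf{x_i})$. The helper name `energies` is illustrative.

```python
import numpy as np

def energies(s, s_tilde_top, s_masked, tau=1.0):
    """Return (E_0, E_1, E_2) from logits s, the masked top-1 logit, and the
    logits s' of the fully masked feature."""
    y1 = int(np.argmax(s))
    e = np.exp(s / tau)
    E0 = -np.log(e.sum())
    e1 = e.copy(); e1[y1] = np.exp(s_tilde_top / tau)
    E1 = -np.log(e1.sum())                          # energy with top-1 logit masked
    E2 = -np.log(np.exp(s_masked / tau).sum())      # energy of masked feature
    return E0, E1, E2

rng = np.random.default_rng(2)
checked = 0
for _ in range(20000):
    s = rng.normal(size=8)
    y1 = int(np.argmax(s))
    s_tilde = s[y1] - rng.uniform(0.0, 2.0)         # masked top-1 logit <= s[y1]
    s_masked = s + rng.normal(scale=0.5, size=8)    # logits of the masked feature
    # condition of the theorem (eps_E -> 0), with tau = 1:
    if np.exp(s[y1]) - np.exp(s_tilde) >= (np.exp(s_masked) - np.exp(s)).sum():
        E0, E1, E2 = energies(s, s_tilde, s_masked)
        assert (E1 - E0) >= (E0 - E2) - 1e-12       # DeltaEnergy >= -L_dE
        checked += 1
assert checked > 0
```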
+
+# E Proof of Theorem 3.5
+
+Proof: In the prompt-tuning framework of our proposed EBM, only $n$ context vectors are learnable, and we denote them as $\theta = [\theta_1,\dots ,\theta_n]$. Then $\mathcal{L}_{\Delta E}$ can be written as:
+
+$$
+\mathcal {L} _ {\Delta E} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\log \sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}), \mathbf {z} _ {\mathbf {T}} (t _ {j}; \theta) \rangle - \log \sum_ {j = 1} ^ {K} \exp \langle \mathbf {m} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}), \mathbf {z} _ {\mathbf {T}} (t _ {j}; \theta) \rangle\right) \tag {27}
+$$
+
+where $\mathbf{m}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) = \mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})\odot \mathbf{m}^{\prime}(\mathbf{x}_{\mathbf{i}})$ is the masked image feature. Now we expand $\nabla_{\theta}\mathcal{L}_{\Delta E}$ as follows:
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {\Delta E} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ \frac {\nabla_ {\theta} \sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle}{\sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle} - \frac {\nabla_ {\theta} \sum_ {j = 1} ^ {K} \exp \langle \mathbf {m} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle}{\sum_ {j = 1} ^ {K} \exp \langle \mathbf {m} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle} \right] \tag {28}
+$$
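
The softmax-weighted form of each bracketed term can be checked by finite differences. The sketch below stubs the text encoder with a linear map $\mathbf{z}_{\mathbf{T}}(t_j;\theta) = A_j\theta + b_j$ (an assumption for illustration only; the real encoder is a transformer) and verifies the first term of Equation 28 numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, D, d = 4, 3, 8, 5                    # samples, classes, feature dim, |theta|
Z_I = rng.normal(size=(N, D))              # stub image features
A = rng.normal(size=(K, D, d)); b = rng.normal(size=(K, D))
theta = rng.normal(size=d)

def z_T(j, th):                            # linear stand-in for the text encoder
    return A[j] @ th + b[j]

def a0(th):                                # first log-sum-exp term of L_dE
    logits = np.array([[Z_I[i] @ z_T(j, th) for j in range(K)] for i in range(N)])
    return np.mean(np.log(np.exp(logits).sum(axis=1)))

# Analytic gradient: softmax-weighted Jacobians, as in Equation 28's first term.
grad = np.zeros(d)
for i in range(N):
    logits = np.array([Z_I[i] @ z_T(j, theta) for j in range(K)])
    w = np.exp(logits) / np.exp(logits).sum()
    grad += sum(w[j] * (A[j].T @ Z_I[i]) for j in range(K)) / N

# Finite-difference check of the analytic gradient.
eps = 1e-6
num = np.array([(a0(theta + eps * e) - a0(theta - eps * e)) / (2 * eps)
                for e in np.eye(d)])
assert np.allclose(grad, num, atol=1e-5)
```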
+
+We denote
+
+$$
+\mathbf {a} _ {\mathbf {0}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\log \sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}), \mathbf {z} _ {\mathbf {T}} (t _ {j}; \theta) \rangle\right)
+$$
+
+$$
+\mathbf {a} _ {\mathbf {1}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\log \sum_ {j = 1} ^ {K} \exp \langle \mathbf {m} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}), \mathbf {z} _ {\mathbf {T}} (t _ {j}; \theta) \rangle\right)
+$$
+
+Then we have $\nabla_{\theta}\mathcal{L}_{\Delta E} = \nabla_{\theta}\mathbf{a}_0 - \nabla_{\theta}\mathbf{a}_1$ . The local optimization of $\mathcal{L}_{\Delta E}$ leads to $\nabla_{\theta}\mathbf{a}_0 = \nabla_{\theta}\mathbf{a}_1$ .
+
+Let $S^{(i)}$ represent the cosine similarity between the image feature $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and the text feature corresponding to its ground-truth label. The empirical classification loss $\widehat{\mathcal{E}}_{\mathcal{D}}(\theta)$ can be calculated as:
+
+$$
+\begin{array}{l} \widehat {\mathcal {E}} _ {\mathcal {D}} (\theta) = - \frac {1}{N} \sum_ {i = 1} ^ {N} \log \frac {\exp S ^ {(i)}}{\sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle} \tag {29} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ \log \sum_ {j = 1} ^ {K} \exp \langle \mathbf {z _ {I}} (\mathbf {x _ {i}}), \mathbf {z _ {T}} (t _ {j}; \theta) \rangle - S ^ {(i)} \right] \\ \end{array}
+$$
+
+Accordingly, the gradient vector of empirical risk $\widehat{\mathcal{E}}_{\mathcal{D}}(\theta)$ with respect to parameter $\theta$ is represented as:
+
+$$
+\widehat {\mathbf {G}} _ {\mathcal {D}} (\theta) = \nabla_ {\theta} \widehat {\mathcal {E}} _ {\mathcal {D}} (\theta) = \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ \frac {\nabla_ {\theta} \sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle}{\sum_ {j = 1} ^ {K} \exp \langle \mathbf {z} _ {\mathbf {I}} (\mathbf {x} _ {\mathbf {i}}) , \mathbf {z} _ {\mathbf {T}} (t _ {j} ; \theta) \rangle} - \nabla_ {\theta} S ^ {(i)} \right] = \nabla_ {\theta} \mathbf {a} _ {\mathbf {0}} - \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\theta} S ^ {(i)} \tag {30}
+$$
+
+And the Hessian matrix of empirical risk with respect to parameter $\theta$ is calculated as:
+
+$$
+\widehat {\mathbf {H}} _ {\mathcal {D}} (\theta) = \nabla_ {\theta} ^ {2} \widehat {\mathcal {E}} _ {\mathcal {D}} (\theta) = \nabla_ {\theta} ^ {2} \mathbf {a} _ {\mathbf {0}} - \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\theta} ^ {2} S ^ {(i)} \tag {31}
+$$
+
+The local optimum solution of Equation 4, i.e., $\nabla_{\theta}\mathcal{L}_{\Delta E} = \mathbf{0}$ , gives the following equation:
+
+$$
+\widehat {\mathbf {H}} _ {\mathcal {S}} (\theta) - \widehat {\mathbf {H}} _ {\mathcal {S} ^ {\prime}} (\theta) = - \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\nabla_ {\theta} ^ {2} S ^ {(i)} - \nabla_ {\theta} ^ {2} S _ {m} ^ {(i)}\right) = - \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\theta} ^ {2} \left[ \left(\mathbf {z} _ {\mathbf {I}} \left(\mathbf {x} _ {\mathbf {i}}\right) - \mathbf {m} _ {\mathbf {I}} \left(\mathbf {x} _ {\mathbf {i}}\right)\right) \cdot \mathbf {z} _ {\mathbf {T}} \left(\mathbf {x} _ {\mathbf {i}}\right) \right] \tag {32}
+$$
+
+Finally, we can conclude that the local optimum solution of Equation 4 leads to the following property:
+
+$$
+\left| \theta^ {\top} \left(\widehat {\mathbf {H}} _ {\mathcal {S}} (\theta) - \widehat {\mathbf {H}} _ {\mathcal {S} ^ {\prime}} (\theta)\right) \theta \right| = \left| \theta^ {\top} \left(\frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\theta} ^ {2} \left[ \left(\mathbf {z} _ {\mathbf {I}} \left(\mathbf {x} _ {\mathbf {i}}\right) - \mathbf {m} _ {\mathbf {I}} \left(\mathbf {x} _ {\mathbf {i}}\right)\right) \cdot \mathbf {z} _ {\mathbf {T}} \left(\mathbf {x} _ {\mathbf {i}}\right) \right]\right) \theta \right| \leq O (\varepsilon) \tag {33}
+$$
+
+# F Proof of Proposition 3.6
+
+Proof: Let $\theta^{*}$ be a local minimum across all domains $\mathcal{D} = \{\mathcal{S}, \mathcal{T}\}$, i.e., $\nabla_{\theta} \widehat{\mathcal{E}}_{\mathcal{D}}(\theta^{*}) = 0$. By Taylor expansion, the OOD generalization gap between the source domain $(\mathcal{S})$ and the target domain $(\mathcal{T})$ is
+
+Table 5: Conventional OOD detection results: OOD detection performance with ImageNet-1k as ID. In the table, we extend our $\Delta$ Energy method to the zero-shot method CSP, leveraging the information from extra OOD labels as detailed in Equation 36.
+
+| Method | Texture AUC↑ | Texture FPR95↓ | iNaturalist AUC↑ | iNaturalist FPR95↓ | Places AUC↑ | Places FPR95↓ | SUN AUC↑ | SUN FPR95↓ | Avg AUC↑ | Avg FPR95↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| *CLIP-based post-hoc methods* | | | | | | | | | | |
| MSP | 74.84 | 73.66 | 77.74 | 74.57 | 72.18 | 79.12 | 73.97 | 76.95 | 74.98 | 76.22 |
| MaxLogit | 88.63 | 48.72 | 88.03 | 60.88 | 87.45 | 55.54 | 91.16 | 44.83 | 88.82 | 52.49 |
| Energy | 88.22 | 50.39 | 87.18 | 64.98 | 87.33 | 57.40 | 91.17 | 46.42 | 88.48 | 54.80 |
| ReAct | 88.13 | 49.88 | 86.87 | 65.57 | 87.42 | 56.85 | 91.04 | 46.17 | 88.37 | 54.62 |
| ODIN | 87.85 | 51.67 | 94.65 | 30.22 | 85.54 | 55.06 | 87.17 | 54.04 | 88.80 | 47.75 |
| *Tuning-based methods* | | | | | | | | | | |
| NegPrompt | 91.60 | 35.21 | 98.73 | 6.32 | 93.34 | 27.60 | 95.55 | 22.89 | 94.81 | 23.01 |
| ID-Like | 94.32 | 25.27 | 98.19 | 8.98 | 91.15 | 41.74 | 91.64 | 42.03 | 93.83 | 29.51 |
| LoCoOp | 90.19 | 42.28 | 96.86 | 16.05 | 91.98 | 32.87 | 95.07 | 23.44 | 93.52 | 28.66 |
| LSN+CoOp | 89.52 | 31.57 | 95.47 | 23.48 | 90.87 | 36.43 | 93.45 | 29.84 | 92.33 | 31.97 |
| LSN+CoCoOp | 90.42 | 38.54 | 95.83 | 21.56 | 91.25 | 34.48 | 94.35 | 26.32 | 92.96 | 30.22 |
| GalLoP | 90.40 | 38.40 | 97.10 | 13.70 | 91.30 | 32.50 | 94.00 | 24.90 | 93.20 | 27.30 |
| *Zero-shot methods* | | | | | | | | | | |
| MCM | 86.11 | 57.77 | 94.61 | 30.91 | 89.77 | 44.69 | 92.57 | 34.59 | 90.76 | 42.74 |
| CLIPN | 90.93 | 40.83 | 95.27 | 23.94 | 92.28 | 33.45 | 93.92 | 26.17 | 93.10 | 31.10 |
| NegLabel | 90.22 | 43.56 | 99.49 | 1.91 | 91.64 | 35.59 | 95.49 | 20.53 | 94.21 | 25.40 |
| CSP | 93.86 | 25.52 | 99.60 | 1.54 | 92.90 | 29.32 | 96.66 | 13.66 | 95.76 | 17.51 |
| CSP+ΔEnergy (Ours) | 94.33 | 21.44 | 99.72 | 0.82 | 92.66 | 28.87 | 96.60 | 13.75 | 95.83 | 16.22 |
+
+Table 6: Hard OOD detection Results #1: OOD detection measured by AUROC and FPR95 over 4 different splits of ImageNet-1k. Details of the 4 splits are in Table 8.
+
+| Method | Split-1 AUC↑ | Split-1 FPR95↓ | Split-2 AUC↑ | Split-2 FPR95↓ | Split-3 AUC↑ | Split-3 FPR95↓ | Split-4 AUC↑ | Split-4 FPR95↓ | Avg AUC↑ | Avg FPR95↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| MCM | 97.93 | 9.17 | 88.10 | 56.40 | 90.34 | 33.05 | 98.72 | 4.73 | 93.77 | 25.83 |
| CLIPN | 99.38 | 2.07 | 97.77 | 10.55 | 90.03 | 36.85 | 98.83 | 4.68 | 96.50 | 13.53 |
| MSP | 77.85 | 63.60 | 68.73 | 83.63 | 79.10 | 70.55 | 82.40 | 65.52 | 77.02 | 70.83 |
| MaxLogit | 99.87 | 0.49 | 98.06 | 8.69 | 90.96 | 34.34 | 99.35 | 2.66 | 97.06 | 11.55 |
| Energy | 99.88 | 0.46 | 98.18 | 8.40 | 90.65 | 35.02 | 99.36 | 2.83 | 97.02 | 11.68 |
| ReAct | 99.34 | 0.72 | 97.91 | 9.33 | 90.72 | 35.65 | 99.12 | 2.94 | 96.77 | 12.16 |
| ODIN | 98.78 | 1.12 | 98.23 | 8.18 | 89.92 | 37.20 | 98.76 | 13.20 | 96.42 | 14.92 |
| ΔEnergy (Ours) | 99.93 | 0.37 | 99.00 | 5.16 | 91.14 | 30.09 | 99.40 | 2.83 | 97.37 | 9.61 |
+
+upper bounded as shown in the following equation:
+
+$$
+\begin{array}{l} \max _ {\{\theta : | \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | \leq \epsilon \}} | \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | \\ \approx \max _ {\{\theta : \frac {1}{2} | \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) \theta | \leq \epsilon \}} \left| \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta^ {*}) + \frac {1}{2} \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {T}} (\theta^ {*}) \theta - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) \right| \\ \lesssim \left| \widehat {\mathcal {E}} _ {\mathcal {T}} \left(\theta^ {*}\right) - \widehat {\mathcal {E}} _ {\mathcal {S}} \left(\theta^ {*}\right) \right| + \max _ {\{\theta : \frac {1}{2} | \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {S}} \left(\theta^ {*}\right) \theta | \leq \epsilon \}} \frac {1}{2} \left| \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {T}} \left(\theta^ {*}\right) \theta \right| \tag {34} \\ = \left| \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta^ {*}) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) \right| + \max _ {\{\theta : \frac {1}{2} | \theta^ {\top} \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) \theta | \leq \epsilon \}} \frac {1}{2} \left| \theta^ {\top} [ \widehat {\mathbf {H}} _ {\mathcal {T}} (\theta^ {*}) - \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) + \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) ] \theta \right| \\ \lesssim | \widehat {\mathcal {E}} _ {\mathcal {T}} (\theta^ {*}) - \widehat {\mathcal {E}} _ {\mathcal {S}} (\theta^ {*}) | + \max \frac {1}{2} | \theta^ {\top} [ \widehat {\mathbf {H}} _ {\mathcal {T}} (\theta^ {*}) - \widehat {\mathbf {H}} _ {\mathcal {S}} (\theta^ {*}) ] \theta | + \epsilon \\ \end{array}
+$$
+
+For each image feature $\mathbf{z}_{\mathbf{I}}$ from the source domain, the image feature $\tilde{\mathbf{z}}_{\mathbf{I}}$ from the target domain that shares the same label with $\mathbf{z}_{\mathbf{I}}$ is assumed to satisfy $||\mathbf{z}_{\mathbf{I}} - \tilde{\mathbf{z}}_{\mathbf{I}}||_2 \leq \varepsilon_1$. Since we optimize the EBM loss based on the unmasked and masked image features, we have the following approximation:
+
+$$
+\left| \theta^ {\top} \left[ \widehat {\mathbf {H}} _ {\mathcal {T}} \left(\theta^ {*}\right) - \widehat {\mathbf {H}} _ {\mathcal {S}} \left(\theta^ {*}\right) \right] \theta \right| \leq O \left(\varepsilon_ {1}\right) \tag {35}
+$$
+
+# G More Experiment Results
+
+More experiment details We present experiment details for the baseline models as follows:
+
+Table 7: Hard OOD detection Results #2. Comparison with state-of-the-art zero-shot methods on hard OOD detection datasets. In the table, OOD detection is measured by AUROC and FPR95 over 6 hard OOD detection datasets. Details of these datasets can be found in prior research (Chen et al., 2024; Ming et al., 2022a).
+
+| Method (ID vs OOD) | ImageNet-10 vs ImageNet-20 AUROC | FPR95 | ImageNet-20 vs ImageNet-10 AUROC | FPR95 | ImageNet-10 vs ImageNet-100 AUROC | FPR95 | ImageNet-100 vs ImageNet-10 AUROC | FPR95 | ImageNet-1k vs ImageNet-O AUROC | FPR95 | WaterBirds vs Placesbg AUROC | FPR95 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MCM | 98.60 | 6.00 | 98.09 | 13.04 | 99.39 | 2.50 | 87.20 | 60.00 | 78.59 | 64.27 | 87.45 | 33.62 |
| NegLabel | 98.80 | 5.00 | 98.04 | 11.60 | 99.37 | 2.50 | 87.93 | 49.40 | 85.78 | 56.65 | 87.99 | 29.16 |
| CSP | 99.02 | 3.30 | 98.79 | 3.40 | 99.33 | 2.22 | 89.59 | 42.40 | 88.08 | 51.50 | 92.88 | 12.07 |
| ΔEnergy (Ours) | 99.11 | 2.80 | 99.01 | 3.20 | 99.40 | 1.80 | 91.05 | 44.80 | 90.49 | 41.25 | 93.45 | 11.00 |
+
+Table 8: The 4 ImageNet-1k splits for hard OOD detection, following the prior work (Li et al., 2024a). Given are the numbers of classes and training / test samples.
+
+| Split | ID (classes: train / test) | OOD (classes: train / test) |
|---|---|---|
| Split-1 | All dog classes (116): 1856 / 5800 | Non-animal classes (166): — / 8300 |
| Split-2 | Half of hunting dog classes (30): 480 / 1500 | Other 4-legged animal classes (55): — / 2750 |
| Split-3 | Mix of common classes (151): 2416 / 7550 | Mix of common classes (164): — / 8200 |
| Split-4 | First 100 classes (100): 1600 / 5000 | Remaining 900 classes (900): — / 45000 |
+
+# Algorithm 1 Algorithm of the proposed EBM method
+
+1: Input: ID data $\{\mathbf{x_i},\mathbf{y_i}\}$ $(i\in 1,\dots ,N)$ , ID class names of the $K$ -way classification, masking proportion $p$ , text prompts $\{t_1,t_2,\dots ,t_K\}$ , hyperparameter $\lambda_0$ , and maximum epoch $T$ .
+2: for $t = 1$ to $T$ do
+3: Calculate the ID image features $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and fine-tuned text features $\mathbf{z}_{\mathbf{T}}(t_j;\theta)$ when prompt-tuning the VLM;
+4: Compute the cosine similarity between image features $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}})$ and text features $\mathbf{z}_{\mathbf{T}}(t_j;\theta)$ and denote the text feature with the top-1 similarity as $\mathbf{h}_{\mathbf{1}}(\mathbf{x}_{\mathbf{i}};\theta)$ ;
+5: Compute the element-wise product $\mathbf{z}_{\mathbf{P}}(\mathbf{x}_{\mathbf{i}}) \coloneqq \mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) \odot \mathbf{h}_{\mathbf{1}}(\mathbf{x}_{\mathbf{i}}; \theta)$ and generate the mask, denoted as $\mathbf{m}'(\mathbf{x}_{\mathbf{i}})$ , which retains the top $p$ -proportion elements in $\mathbf{z}_{\mathbf{P}}(\mathbf{x}_{\mathbf{i}})$ ;
+6: Perform masking on the image feature and represent the masked image feature as $\mathbf{z}_{\mathbf{I}}(\mathbf{x}_{\mathbf{i}}) \odot \mathbf{m}'(\mathbf{x}_{\mathbf{i}})$ ;
+7: Gradient update under the proposed loss as illustrated in Equation 8;
+8: end for
+9: Output: Learnable context vectors $\theta$.
+
+For zero-shot OOD detection methods such as CSP and NegLabel, all hyperparameters and OOD score calculation procedures are directly adopted from their respective papers (Chen et al., 2024; Jiang et al., 2024) without modification.
+
+For the tuning-based methods, based on the code of CoOp (Zhou et al., 2021), we train models with the SGD optimizer with a learning rate of $2\times 10^{-2}$. The batch size is set to 32 for all tuning-based experiments. For method-specific hyperparameters, we follow the settings of the original papers.
+
+For the tuning-based methods for improving performance on closed-set data, such as CoOp (Zhou et al., 2021), CoCoOp (Zhou et al., 2022), DPLCLIP (Zhang et al., 2021b), and Bayes-CAL (Zhu et al., 2023b), we use random initialization for the context vectors, set the number of context tokens to 16, set the class token position (CTP) to "end", and set the class-specific context (CSC) to "False". This configuration has shown the best average performance according to CoOp's paper. For the DPLCLIP (Zhang et al., 2021b) method, we set its additional hyperparameters as "mlp_depth=3", "mlp_width=512", and "mlp_dropout=0.1". For the CLIP-Adapter (Gao et al., 2023) method, we adopt an image adapter only, with a residual ratio of 0.5 for Setup-I and 0.2 for Setup-II, and we use the bottleneck adapter with a hidden dimension that is 1/4 of the original feature dimension. This hyperparameter configuration has been demonstrated to be the most effective for generic image datasets, such as ImageNet, in the original research (Gao et al., 2023).
+
+Figure 3: The significant prediction difference between closed-set data and open-set OOD data when vision-language re-alignment is applied to the zero-shot CLIP model (Radford et al., 2021). This difference offers a novel approach to distinguishing between closed-set and open-set classes. Based on the element-wise product between CLIP's image and text features, the masked ZS $\mathrm{CLIP}(+)$ model zeroes out the elements of the image feature where the corresponding values in the product are negative. In contrast, the opposite operation is applied in ZS $\mathrm{CLIP}(-)$. It is observed that masking the elements where $P_{j} < 0$ preserves the model's original attention, which motivates us to leverage this consistency between the original and masked domains to improve OOD generalization.
+
+For tuning-based OOD detection methods, we adopt the recommended hyperparameter settings for LoCoOp, NegPrompt, and GalLoP. For LoCoOp (Miyai et al., 2024b), following the original paper, we set $\lambda = 0.25$ . The hyperparameter $K$ is searched over the range [100, 200] for Setup-I, and [2, 3, 4, 5] for Setup-II, based on validation data. For GalLoP, we follow its publicly available source code and adopt the same hyperparameter settings as reported in Table 3 of (Lafon et al., 2024), including configurations for local prompts, global prompts, tokens per prompt, and other relevant settings. In the NegPrompt method (Li et al., 2024a), we follow its source code and train the model using all the classes from the ID dataset and train a shared positive prompt and two shared negative prompts w.r.t. each training ID class. The hyperparameters $\beta$ and $\gamma$ are set to 0.1 and 0.05, respectively. In the first stage, CoOp is trained for 100 epochs to obtain the positive prompts. In the second stage, the positive prompts are frozen, and our model is trained for 10 epochs to learn the negative prompts. During the testing phase of LoCoOp, NegPrompt, and GalLoP, we use the GL-MCM score (Miyai et al., 2023) to compute OOD detection results.
+
+For experiments on each method, we repeat 3 times with different random splits to eliminate the effects of randomness. The hyperparameters in each method are selected based on the test accuracy on validation sets.
+
+$\Delta$ Energy based on negative OOD labels The NegLabel (Jiang et al., 2024) and CSP (Chen et al., 2024) methods introduce massive negative labels to boost OOD detection. The extended label space provides a novel perspective for distinguishing OOD samples, leveraging extra clues from the similarities between images and the extended labels. The CSP method extends NegLabel by "making up" OOD label candidates, which are not standard class names but are beneficial for the process.
+
+Table 9: Ablations on the hyperparameter $c$ . $\Delta$ Energy achieves the overall best performance on both AUROC and FPR95 when $c = 2$ .
+
+| Data | c=1 FPR | c=1 AUROC | c=2 FPR | c=2 AUROC | c=3 FPR | c=3 AUROC | c=4 FPR | c=4 AUROC | c=5 FPR | c=5 AUROC | c=6 FPR | c=6 AUROC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ID vs Open-Set OOD | 49.60 | 86.46 | 46.40 | 87.10 | 47.23 | 86.85 | 47.93 | 86.44 | 48.13 | 86.06 | 49.97 | 85.69 |
| Closed-set OOD vs Open-Set OOD | 66.54 | 78.49 | 67.16 | 78.68 | 68.30 | 78.43 | 68.40 | 78.07 | 69.04 | 77.79 | 69.27 | 77.50 |
+
+Table 10: Performance of ImageNet-1K-trained model on the test sets with covariate shifts (such as ImageNet_V2, ImageNet_R, ImageNet_A, and ImageNet_S) and concept shifts (such as ImageNet-Superclass).
+
+| Method | ImageNet (ID) | ImageNet_V2 | ImageNet_R | ImageNet_A | ImageNet_S | ImageNet-Superclass | Avg OOD Acc |
|---|---|---|---|---|---|---|---|
| CLIP | 68.80 | 73.97 | 46.09 | 47.77 | 60.90 | 33.18 | 52.38 |
| CoOp | 71.86 | 76.00 | 48.34 | 50.13 | 64.23 | 36.90 | 55.12 |
| CoCoOp | 71.10 | 76.18 | 48.75 | 50.63 | 64.07 | 37.17 | 55.36 |
| CoOp+EBM (Ours) | 71.70 | 77.10 | 49.02 | 51.35 | 64.78 | 38.24 | 56.10 |
+
+Thus, we also extend our $\Delta$ Energy method to CSP, leveraging the information from extra OOD labels. Let $\mathcal{Y}_{\mathrm{in}}$ denote the ID labels and $\mathcal{Y}_{\mathrm{OOD}}$ denote the OOD labels. Then $\Delta$ Energy is calculated as follows:
+
+$$
+\Delta \operatorname {E n e r g y} (\mathbf {x} _ {\mathbf {i}}) = \Delta \operatorname {E n e r g y} (\mathbf {x} _ {\mathbf {i}}; \mathcal {Y} _ {\text {i n}}) - \Delta \operatorname {E n e r g y} (\mathbf {x} _ {\mathbf {i}}; \mathcal {Y} _ {\mathrm {O O D}}) \tag {36}
+$$
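
A minimal sketch of Equation 36 (illustrative helper names; the logits over each label set and the corresponding masked top-1 logits are assumed to be given):

```python
import numpy as np

def delta_energy(s, s_tilde_top, tau=1.0):
    """DeltaEnergy(x; Y) from logits s over a label set Y and the masked top-1 logit."""
    y1 = int(np.argmax(s))
    e = np.exp(s / tau)
    e_masked = e.copy()
    e_masked[y1] = np.exp(s_tilde_top / tau)
    return np.log(e.sum()) - np.log(e_masked.sum())   # E_1 - E_0

def delta_energy_neg(s_id, s_tilde_id, s_ood, s_tilde_ood, tau=1.0):
    """Equation 36: DeltaEnergy over ID labels minus DeltaEnergy over OOD labels."""
    return delta_energy(s_id, s_tilde_id, tau) - delta_energy(s_ood, s_tilde_ood, tau)
```

The score rises when masking suppresses the top-1 ID similarity more than the top-1 OOD similarity, which is the behavior the extended label space is meant to expose.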
+
+Ablations on the hyperparameter $c$ We conduct ablation studies on the hyperparameter $c$ , focusing on its effect on the discrimination between closed-set data and open-set OOD data using ImageNet-1k. The AUROC and FPR95 performances are reported in Table 9. The results demonstrate that: 1) There is a trade-off between the AUROC and FPR95. 2) $\Delta$ Energy achieves the overall best performance on both AUROC and FPR95 when $c = 2$ . Therefore, unless otherwise specified, we set $c = 2$ in $\Delta$ Energy for all experiments.
+
+OOD generalization performance under concept shift We conduct experiments on the concept-shifted ImageNet-Superclass dataset (Xiao et al., 2024; Santurkar et al., 2020), where each image is annotated with its corresponding superclass label. For the superclass labels, we use the open-source annotations provided in https://github.com/s-kumano/imagenet-superclass/blob/main/superclass_names.txt. In Table 10, we report the performance of our ImageNet-1K-trained model on test sets annotated with subclasses. From Table 10, our EBM-based method outperforms baseline models under both covariate and concept shifts. These results demonstrate the effectiveness of our proposed approach in learning domain-invariant features, thereby enhancing the model's ability to generalize under various distribution shifts.
+
+Fine-tuning accuracy of the proposed EBM on standard datasets used in CLIP We also evaluate the effect of the proposed EBM loss on fine-tuning accuracy across 11 standard datasets used in CLIP. We implement the EBM loss on PromptSRC and train the models under 16-shot settings with a ViT-B/16 backbone. We train for 50 epochs on ImageNet and 200 epochs on the other datasets with SGD, following the same training setup as in PromptSRC. The masking proportion was set to $p = 0.6$, and the hyperparameter $\lambda_0$ was searched over the values $\{0.1, 0.5, 1.0, 2.0\}$.
+
+The performance results are reported in Table 11, demonstrating that the proposed EBM method further improves test accuracy over PromptSRC. Since the objective of EBM is to minimize $\mathcal{L}_{\Delta E}$, it encourages the model to make equally high-confidence predictions for both the original and partially masked features, thus facilitating the learning of domain-invariant features between the original domain and the masked domain. The improvements observed in Table 11 empirically suggest that regularization on masked features effectively enhances the model's generalization performance.
+
+# H Limitations and Future Work
+
+As demonstrated in Theorems 3.2-3.3 and B.1, our method outperforms the MCM approach by enlarging the difference between the ID and open-set OOD samples. The empirical results in Tables 1-2 and Tables 6-7 further support our theoretical findings. While our method also improves performance on conventional OOD detection benchmarks, the gains are less pronounced compared to the hard OOD scenarios. This may be due to the reduced impact of amplifying the distinction between
+
+Table 11: Performances of fine-tuning accuracy on 11 standard datasets used in CLIP.
+
+| Data | ImageNet | Caltech101 | OxfordPets | Cars | Flowers102 | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | Avg |
| CLIP | 66.7 | 92.2 | 88.4 | 65.5 | 70.7 | 84.8 | 24.8 | 62.3 | 44.1 | 48.3 | 64.7 | 64.8 |
| CoOp | 71.7 | 95.6 | 91.9 | 83.1 | 97.1 | 84.2 | 43.4 | 74.7 | 69.9 | 84.9 | 82.2 | 79.9 |
| CoCoOp | 71.0 | 95.2 | 93.3 | 71.6 | 87.8 | 87.2 | 31.2 | 72.2 | 63.0 | 73.3 | 78.1 | 74.9 |
| MaPLe | 72.3 | 96.0 | 92.8 | 83.6 | 97.0 | 85.3 | 48.4 | 75.5 | 71.3 | 92.3 | 85.0 | 81.8 |
| PLOT | 72.6 | 96.0 | 93.6 | 84.6 | 97.6 | 87.1 | 46.7 | 76.0 | 71.4 | 92.0 | 85.3 | 82.1 |
| PromptSRC | 73.2 | 96.1 | 93.7 | 83.8 | 97.6 | 86.5 | 50.8 | 77.2 | 72.7 | 92.4 | 86.5 | 82.8 |
| Ours | 73.6 | 96.5 | 94.4 | 85.3 | 98.2 | 87.6 | 51.3 | 77.3 | 73.3 | 93.5 | 86.7 | 83.4 |
+
+Table 12: Comparison of computational efficiency between our method and prior approaches.
+
+| Category | Method | Time | GPU (MB) | Batch Size |
| Zero-shot | MCM | 17s | 1706 | 100 |
| Zero-shot | ΔEnergy (Ours) | 18s | 1706 | 100 |
| Fine-tuning | CoOp | 18min | 7658 | 32 |
| Fine-tuning | LoCoOp | 25min | 8320 | 32 |
| Fine-tuning | GalLoP | 140min | 47492 | 32 |
| Fine-tuning | EBM (Ours) | 18min | 7756 | 32 |
+
+ID and OOD samples when the inherent difference between ID and open-set OOD samples is already substantial. Future work may explore strategies to further enhance $\Delta$ Energy by incorporating CSP's informative negative labels.
+
+# I Computation Efficiency
+
+The proposed zero-shot OOD detection method does not require fine-tuning of VLM parameters. Instead, it introduces a novel post-hoc scoring function to identify open-set OOD samples. All computations are performed in the latent space during the alignment between vision and language representations, enabling improved OOD detection performance with inference time and computational cost comparable to existing methods such as MCM and raw energy scores. Compared to methods like NegLabel and CSP, $\Delta$ Energy does not rely on additional negative labels, making it more computationally efficient. For tuning-based approaches, our proposed EBM method enables joint optimization for both OOD generalization and OOD detection via the novel optimization objective defined in Equation 8. It re-aligns vision-language representations in the latent space without introducing extra prompts, as required by vanilla CoOp (Zhou et al., 2021), making it more efficient than methods such as LoCoOp (Miyai et al., 2024b) and GalLoP (Lafon et al., 2024). A detailed comparison of computation cost is provided in Table 12. Here, GalLoP is trained using four NVIDIA RTX 4090 GPUs, while all other experiments are conducted on a single NVIDIA RTX 4090 GPU.
\ No newline at end of file
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/images.zip b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fe8e79746ab348971890466bab7de269f64540a8
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f2e89a066fa2cc41f1379512533d672d39b7693cf6d0955b6db25f0fa6e4f47
+size 1582228
diff --git a/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/layout.json b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..11709466e8cad556a8e089f15191e9c5e0fcd7d6
--- /dev/null
+++ b/NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38c8966e1e6f1a2e1a793efb0d75f3a967562002aef831bd9e6423729f7c552c
+size 1213575
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_content_list.json b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ac777e83a9406458678b8af6a0533b2553d50199
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddca4faf9a894c910908d1262293af88907ff1b45a8db71576c39fb843fe8a4c
+size 233520
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_model.json b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..97654323145d13164280fc89d8abd39095e3a90b
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7117f856c1d42855c96974493d8e8efa44f7d882736d2e12e1f1343e3fb510ad
+size 286578
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_origin.pdf b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97b60a781cbbd607fea6bd7f4e0746ea8d061a4e
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3249a812495cc1faf6697a31194a1f3d44907d972d7dd72b7b4afc114c2f9726
+size 41109155
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/full.md b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..22652a061c2c61e13e526ea6b739cede910f0f23
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/full.md
@@ -0,0 +1,1133 @@
+# $\Psi$ -Sampler: Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models
+
+Taehoon Yoon* Yunhong Min* Kyeongmin Yeo* Minhyuk Sung
+
+KAIST
+
+{taehoon, dbsghd363,aaaa,mhsung}@kaist.ac.kr
+
+# Abstract
+
+We introduce $\Psi$ -SAMPLER, an SMC-based framework incorporating pCNL-based initial particle sampling for effective inference-time reward alignment with a score-based generative model. Inference-time reward alignment with score-based generative models has recently gained significant traction, following a broader paradigm shift from pre-training to post-training optimization. At the core of this trend is the application of Sequential Monte Carlo (SMC) to the denoising process. However, existing methods typically initialize particles from the Gaussian prior, which inadequately captures reward-relevant regions and results in reduced sampling efficiency. We demonstrate that initializing from the reward-aware posterior significantly improves alignment performance. To enable posterior sampling in high-dimensional latent spaces, we introduce the preconditioned Crank-Nicolson Langevin (pCNL) algorithm, which combines dimension-robust proposals with gradient-informed dynamics. This approach enables efficient and scalable posterior sampling and consistently improves performance across various reward alignment tasks, including layout-to-image generation, quantity-aware generation, and aesthetic-preference generation, as demonstrated in our experiments.
+
+Project Webpage: https://psi-sampler.github.io/
+
+# 1 Introduction
+
+Recently, a shift in the scaling law paradigm from pre-training to post-training has opened new possibilities for achieving another leap in AI model performance, as exemplified by the unprecedented AGI score of GPT-o3 [1] and DeepSeek's "Aha moment" [2]. Breakthroughs in LLMs have also extended to score-based generative models [3-7], resulting in significant improvements in user preference alignment [8]. Similar to the autoregressive generation process in LLMs, the denoising process in score-based generative models can be interpreted as a Sequential Monte Carlo (SMC) [9-11] process with a single particle at each step. This perspective allows inference-time alignment to be applied analogously to LLMs by populating multiple particles at each step and selecting those that score highly under a given reward function [8, 12-17]. A key distinction is that score-based generative models enable direct estimation of the final output from any noisy intermediate point via Tweedie's formula [18], facilitating accurate approximation of the optimal value function [19, 8, 20] through expected reward estimation.
+
+However, previous SMC-based approaches [12, 15, 14, 21, 22], where each SMC step is coupled with the denoising process of score-based generative models, are limited in their ability to effectively explore high-reward regions, as the influence of the reward signal diminishes over time due to the vanishing diffusion coefficient. Thus, rather than relying on particle exploration during later stages, it is more critical to identify effective initial latents that are well-aligned with the reward model
+
+from the outset. In this work, we address this problem and propose an MCMC-based initial particle population method that generates strong starting points for the subsequent SMC process. This direction is particularly timely given recent advances in distillation techniques for score-based generative models [23-27], now widely adopted in state-of-the-art models [28, 29]. These methods yield straighter generative trajectories and clearer Tweedie estimates [18] from early steps, enabling more effective exploration from the reward-informed initial distribution.
+
+A straightforward baseline for generating initial particles is the Top- $K$ -of- $N$ strategy: drawing multiple samples from the standard Gaussian prior and selecting those with the highest reward scores. Though effective, this naive approach offers limited improvement in subsequent SMC due to its reliance on brute-force sampling. Motivated by these limitations, we explore Markov Chain Monte Carlo (MCMC) [30-35] methods based on Langevin dynamics, which are particularly well-suited to our setting since we sample from the initial posterior distribution, whose form is known. Nevertheless, applying MCMC in our problem presents unique challenges: the exploration space is extremely high-dimensional (e.g., 65,536 for FLUX [28]), posing significant challenges for conventional MCMC methods. In particular, the Metropolis-Hastings (MH) accept-reject mechanism, when used with standard Langevin-based samplers, becomes ineffective in such high-dimensional regimes, as the acceptance probability rapidly diminishes and most proposals are rejected.
+
+Our key idea for enabling effective particle population from the initial reward-informed distribution is to leverage the Preconditioned Crank-Nicolson (pCN) algorithm [36-38], which is designed for function spaces or infinite-dimensional Hilbert spaces. When combined with the Langevin algorithm (yielding pCNL), its semi-implicit Euler formulation allows for efficient exploration in a high-dimensional space. Furthermore, when augmented with the MH correction, the acceptance rate is significantly improved compared to vanilla MALA. We therefore propose performing pCNL over the initial posterior distribution and selecting samples at uniform intervals along the resulting Markov chain. These samples are then used as initial particles for the subsequent SMC process across the denoising steps. We refer to the entire pipeline—PCNL-based initial particle sampling followed by SMC-based Inference-time reward alignment—as PSI $(\Psi)$ -Sampler. To the best of our knowledge, this is the first work to apply the pCN algorithm in the context of generative modeling.
+
+In our experiments, we evaluate three reward alignment tasks: layout-to-image generation (placing objects in designated bounding boxes within the image), quantity-aware generation (aligning the number of objects in the image with the specified count), and aesthetic-preference generation (enhancing visual appeal). We compare our $\Psi$ -SAMPLER against the base SMC method [14] with random initial particle sampling, SMC combined with initial particle sampling via Top-K-of-N, ULA, and MALA, as well as single-particle methods [39, 40]. Across all tasks, $\Psi$ -SAMPLER consistently achieves the best performance in terms of the given reward and generalizes well to the held-out reward, matching or surpassing existing baselines. Its improvement over the base SMC method highlights the importance of posterior-based initialization, while its outperformance over ULA and MALA further confirms the limitations of these methods in extremely high-dimensional spaces.
+
+# 2 Related Work
+
+# 2.1 Inference-Time Reward Alignment
+
+Sequential Monte Carlo (SMC) [9-11] has proven effective in guiding the generation process of score-based generative models for inference-time reward alignment [22, 21, 15, 14, 12]. Prior SMC-based methods differ in their assumptions and applicability. For instance, FPS [15] and MCGdiff [22] are specifically designed for linear inverse problems and thus cannot generalize to arbitrary reward functions. SMC-Diff [21] depends on the idealized assumption that the learned reverse process exactly matches the forward noising process—an assumption that rarely holds in practice. TDS [14] and DAS [12] employ twisting and tempering strategies, respectively, to improve approximation accuracy while reducing the number of required particles. Despite these variations, all aforementioned SMC-based approaches share a common limitation that they initialize particles from the standard Gaussian prior, which is agnostic to the reward function. This mismatch can result in poor coverage of high-reward regions and reduced sampling efficiency.
+
+In addition to multi-particle systems like SMC, single-particle approaches have also been explored for inference-time reward alignment [39-42]. These methods guide generation by applying reward gradients along a single sampling trajectory. However, they are inherently limited in inference-time
+
+reward alignment, as simply increasing the number of denoising steps does not consistently lead to better sample quality. In contrast, SMC-based methods allow users to trade computational cost for improved reward alignment, making them more flexible and scalable in practice.
+
+# 2.2 Fine-Tuning-Based Reward Alignment
+
+Beyond inference-time methods, another line of work focuses on fine-tuning score-based generative models for reward alignment. Some approaches perform supervised fine-tuning by weighting generated samples according to their reward scores and updating the model to favor high-reward outputs [43, 44], while others frame the denoising process as a Markov Decision Process (MDP) and apply reinforcement learning techniques such as policy gradients [45] or entropy-regularized objectives to mitigate overoptimization [46-48]. These RL-based methods are especially useful when the reward model is non-differentiable, but they may fail to exploit gradient signals when they are available. More recent methods enable direct backpropagation of reward gradients through the generative process [49, 50]. Alternatively, several works [20, 51, 52] adopt a stochastic optimal control (SOC) perspective, deriving closed-form optimal drift and initial distributions using pathwise KL objectives.
+
+While fine-tuning-based methods are an appealing approach, they have practical limitations in that they necessitate costly retraining whenever changes are made to the reward function or the pretrained model. Further, it has been shown that fine-tuning-based methods exhibit mode-seeking behavior [12], which leads to low diversity in the generated samples.
+
+# 3 Problem Definition & Background
+
+# 3.1 Background: Score-Based Generative Models
+
+Given a standard Gaussian distribution $p_1 = \mathcal{N}(\mathbf{0},\mathbf{I})$ and data distribution $p_0$ , score-based generative models are trained to estimate the score function, which is the gradient of log-density, at intermediate distributions $p_t$ along a probability path connecting $p_1$ to $p_0$ .
+
+In score-based generative models [3-7], the data generation process is typically described by a reverse-time stochastic differential equation (SDE) [4]:
+
+$$
+\mathrm {d} \mathbf {x} _ {t} = \mathbf {f} \left(\mathbf {x} _ {t}, t\right) \mathrm {d} t + g (t) \mathrm {d} \mathbf {W}, \quad \mathbf {f} \left(\mathbf {x} _ {t}, t\right) = \mathbf {u} \left(\mathbf {x} _ {t}, t\right) - \frac {g (t) ^ {2}}{2} \nabla \log p _ {t} \left(\mathbf {x} _ {t}\right), \quad \mathbf {x} _ {1} \sim p _ {1} \tag {1}
+$$
+
+where $\mathbf{f}(\mathbf{x}_t,t)$ and $g(t)$ denote the drift and diffusion coefficients, respectively, and $\mathbf{W}$ is a $d$ -dimensional standard Brownian motion. The term $\mathbf{u}(\mathbf{x}_t, t)$ corresponds to the velocity field in flow-based models [53, 27, 54] and also to the drift term of the probability flow ODE (PF-ODE) in diffusion models [4]. We assume that the generation process proceeds in decreasing time, i.e., from $t = 1$ to $t = 0$ , following the convention commonly adopted in the score-based generative modeling literature [4, 5].
+
+The deterministic flow-based generative model can be recovered by setting the diffusion coefficient $g(t) = 0$ , thereby reducing the SDE to an ODE. Note that flow-based models [53, 27], originally formulated as an ODE, can be extended to an SDE formulation that shares the same intermediate distributions $p_t$ , thereby allowing stochasticity to be introduced during generation [7, 6, 13]. Moreover, the velocity field $\mathbf{u}(\mathbf{x}_t, t)$ can be readily transformed into a score function [7, 54]. For these reasons, we categorize both diffusion and flow-based models as the score-based generative models.
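
As a concrete illustration, the reverse-time SDE in Eq. 1 can be simulated with one Euler-Maruyama step per timestep. The sketch below uses toy stand-ins (a zero velocity field, the score of a standard Gaussian, and a $g(t) = \sqrt{t}$ schedule are assumptions for illustration), not the pretrained models discussed here.

```python
import numpy as np

def reverse_sde_step(x, t, dt, u, score, g, rng):
    """One Euler-Maruyama step of the reverse-time SDE in Eq. 1,
    integrating in decreasing time from t to t - dt."""
    drift = u(x, t) - 0.5 * g(t) ** 2 * score(x, t)   # f(x_t, t)
    noise = rng.standard_normal(x.shape)
    return x - drift * dt + g(t) * np.sqrt(dt) * noise

# Toy stand-ins (hypothetical), not a trained model:
u = lambda x, t: np.zeros_like(x)   # velocity field
score = lambda x, t: -x             # score of N(0, I)
g = lambda t: np.sqrt(t)            # diffusion schedule

rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # x_1 ~ p_1 = N(0, I)
for t in np.linspace(1.0, 0.01, 100):
    x = reverse_sde_step(x, t, dt=0.01, u=u, score=score, g=g, rng=rng)
print(x.shape)  # (4,)
```

Setting `g = lambda t: 0.0` recovers the deterministic ODE sampler described next.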
+
+# 3.2 Inference-Time Reward Alignment Using Score-Based Generative Models
+
+Inference-time reward alignment [8, 12-17] aims to generate high-reward samples $\mathbf{x}_0\in \mathbb{R}^d$ without fine-tuning the pretrained score-based generative model. The reward associated with each sample is evaluated using a task-specific reward function $r:\mathbb{R}^d\to \mathbb{R}$ , which may quantify aspects such as aesthetic quality or the degree to which a generated image satisfies user-specified conditions.
+
+However, to avoid over-optimization [46, 20, 12] with respect to the reward function, which may lead to severe distributional drift or adversarial artifacts, a regularization term is introduced to encourage the generated samples to remain close to the prior of the pre-trained generative model. This trade-off is captured by defining a target distribution $p_0^*$ that balances reward maximization with prior adherence,
+
+formally expressed as:
+
+$$
+p _ {0} ^ {*} = \arg \max _ {q} \underbrace {\mathbb {E} _ {\mathbf {x} _ {0} \sim q} [ r (\mathbf {x} _ {0}) ]} _ {(a)} - \alpha \underbrace {\mathcal {D} _ {\mathrm {K L}} [ q \| p _ {0} ]} _ {(b)}. \tag {2}
+$$
+
+Here, term (a) in Eq. 2 encourages the generation of high-reward samples, while term (b), the KL-divergence, enforces proximity to the pre-trained model's prior distribution $p_0$ . The parameter $\alpha \in \mathbb{R}_+$ controls the strength of this regularization: larger values of $\alpha$ lead to stronger adherence to the prior, typically resulting in lower reward but higher proximity to the support of the generative model.
+
+The target distribution $p_0^*$ has a closed-form expression, given by:
+
+$$
+p _ {0} ^ {*} (\mathbf {x} _ {0}) = \frac {1}{Z _ {0}} p _ {0} (\mathbf {x} _ {0}) \exp \left(\frac {r (\mathbf {x} _ {0})}{\alpha}\right) \tag {3}
+$$
+
+where $Z_{0}$ is the normalizing constant. A detailed derivation using the calculus of variations can be found in Kim et al. [13]. This reward-aware target distribution has been widely studied in the reinforcement learning literature [55-59]. Analogous ideas have also been adopted for fine-tuning score-based generative models [60, 51, 52, 45, 48-50, 46, 43, 20]. As in our case, this target distribution also serves as the objective from which one aims to sample in the inference-time reward alignment task [8, 12, 13].
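
A minimal sketch of how the unnormalized log-density of Eq. 3 trades reward against the prior; the standard-Gaussian prior and quadratic reward below are hypothetical stand-ins. As $\alpha$ grows, the reward tilt shrinks and the log-density approaches the prior's.

```python
import numpy as np

def log_target_unnorm(x0, log_p0, r, alpha):
    """Unnormalized log-density of Eq. 3: log p_0(x_0) + r(x_0) / alpha."""
    return log_p0(x0) + r(x0) / alpha

# Toy stand-ins (hypothetical): standard-Gaussian prior, quadratic reward.
log_p0 = lambda x: -0.5 * np.sum(x**2)   # log N(0, I) up to a constant
r = lambda x: -np.sum((x - 2.0) ** 2)    # reward peaked at x = 2

x = np.zeros(2)  # here log_p0(x) = 0 and r(x) = -8
vals = [log_target_unnorm(x, log_p0, r, a) for a in (0.1, 1.0, 10.0)]
print(vals)  # larger alpha -> value closer to the prior's log-density (0 here)
```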
+
+Since sample generation in score-based models proceeds progressively through a sequence of timesteps, it becomes important to maintain proximity with the pretrained model not just at the endpoint, but throughout the entire generative trajectory. To account for this, the original objective in Eq. 2 is extended to a trajectory-level formulation. Although there are some works [50, 46, 45] that frame this problem as an entropy-regularized Markov Decision Process (MDP), where each denoising step of the score-based generative model corresponds to a policy in RL, we adopt a stochastic optimal control (SOC) perspective [20, 51, 52], which naturally aligns with the continuous-time structure of score-based generative models and yields principled expressions for both the optimal drift and the optimal initial distribution.
+
+Building on this, the entropy-regularized SOC framework proposed by Uehara et al. [20] provides closed-form approximations for the optimal initial distribution, optimal control function, and optimal transition kernel that together enable sampling from the reward-aligned target distribution defined in Eq. 3 using score-based generative models.
+
+The optimal initial distribution can be derived using the Feynman-Kac formula and approximated via Tweedie's formula [18] as:
+
+$$
+\tilde {p} _ {1} ^ {*} (\mathbf {x} _ {1}) := \frac {1}{Z _ {1}} p _ {1} (\mathbf {x} _ {1}) \exp \left(\frac {r \left(\mathbf {x} _ {0 \mid 1}\right)}{\alpha}\right) \tag {4}
+$$
+
+where $\mathbf{x}_{0|t} \coloneqq \mathbb{E}_{\mathbf{x}_0 \sim p_{0|t}}[\mathbf{x}_0]$ denotes Tweedie's formula [18], representing the conditional expectation under $p_{0|t} \coloneqq p(\mathbf{x}_0|\mathbf{x}_t)$ . Under the same approximation, the transition kernel satisfying the optimality condition is approximated by:
+
+$$
+\tilde {p} _ {\theta} ^ {*} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right) = \frac {\exp \left(r \left(\mathbf {x} _ {0 \mid t - \Delta t}\right) / \alpha\right)}{\exp \left(r \left(\mathbf {x} _ {0 \mid t}\right) / \alpha\right)} p _ {\theta} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right). \tag {5}
+$$
+
+where $p_{\theta}(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ is a transition kernel of the pretrained score-based generative model.
+
+Further details on the SOC framework and its theoretical foundations in the context of reward alignment are provided in Appendix A.
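
To make Eq. 4 concrete, the sketch below evaluates the unnormalized log-posterior for a toy linear-Gaussian model in which Tweedie's formula is exact ($x_0 \sim \mathcal{N}(0, I)$, $x_1 = x_0 + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$, so $\mathbf{x}_{0|1} = \mathbf{x}_1 / 2$); the reward is a hypothetical stand-in, not an actual reward model.

```python
import numpy as np

# Toy linear-Gaussian model (hypothetical): x_0 ~ N(0, I), x_1 = x_0 + eps,
# eps ~ N(0, I), so p_1 = N(0, 2I) and Tweedie's formula is exact:
#   x_{0|1} = x_1 + sigma^2 * grad log p_1(x_1) = x_1 - x_1 / 2 = x_1 / 2.
def tweedie_x0(x1):
    return x1 / 2.0

def log_posterior_unnorm(x1, r, alpha):
    """Unnormalized log-density of Eq. 4: log p_1(x_1) + r(x_{0|1}) / alpha."""
    log_p1 = -0.25 * np.sum(x1**2)   # log N(0, 2I) up to a constant
    return log_p1 + r(tweedie_x0(x1)) / alpha

r = lambda x0: -np.sum((x0 - 1.0) ** 2)   # hypothetical reward peaked at x_0 = 1
x1 = np.full(3, 2.0)                      # Tweedie estimate is exactly 1 here
print(log_posterior_unnorm(x1, r, alpha=1.0))  # -3.0: only the prior term remains
```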
+
+# 3.3 Sequential Monte Carlo (SMC) with Denoising Process
+
+For reward-alignment tasks, recent works [12, 15, 14, 21, 22] have demonstrated that Sequential Monte Carlo (SMC) can efficiently generate samples from the target distribution in Eq. 3. When applied to score-based generative models, the denoising process is coupled with the sequential structure of SMC. Specifically, several prior works [8, 12, 14] adopt Eq. 5 as the intermediate target transition kernel for sampling from Eq. 3.
+
+In general, SMC methods [9-11] are a class of algorithms for sampling from sequences of probability distributions. Starting from $K$ particles sampled i.i.d. from the initial distribution, SMC approximates a target distribution by maintaining a population of $K$ weighted particles, which are repeatedly updated through a sequence of propagation, reweighting, and resampling steps. The weights are
+
+updated over time according to the following rule:
+
+$$
+w _ {t - \Delta t} ^ {(i)} = \frac {p _ {\operatorname {t a r}} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right)}{q \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right)} w _ {t} ^ {(i)} \tag {6}
+$$
+
+where $p_{\mathrm{tar}}$ is an intermediate target kernel we want to sample from, and $q(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ is a proposal kernel used during propagation. As the number of particles $K$ increases, the approximation improves due to the asymptotic consistency of the SMC framework [61, 62].
+
+Following [8, 12, 14], which derives both the intermediate target transition kernel and the associated proposal for reward-guided SMC, we compute the weight at each time step as:
+
+$$
+w _ {t - \Delta t} ^ {(i)} = \frac {\exp (r (\mathbf {x} _ {0 | t - \Delta t}) / \alpha) p _ {\theta} (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})}{\exp (r (\mathbf {x} _ {0 | t}) / \alpha) q (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})} w _ {t} ^ {(i)}, \tag {7}
+$$
+
+where $p_{\mathrm{tar}}$ is set as in Eq. 5. The proposal distribution $q(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ is obtained by discretizing the reverse-time SDE with the optimal control. This yields the following proposal using Tweedie's formula [18]:
+
+$$
+q \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right) = \mathcal {N} \left(\mathbf {x} _ {t} - \mathbf {f} \left(\mathbf {x} _ {t}, t\right) \Delta t + g ^ {2} (t) \nabla \frac {r \left(\mathbf {x} _ {0 \mid t}\right)}{\alpha} \Delta t, g (t) ^ {2} \Delta t \mathbf {I}\right). \tag {8}
+$$
+
+Details on SMC and its connection to reward-guided sampling are provided in Appendix B.
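
A minimal sketch of the reweight-then-resample bookkeeping used by SMC: log-weights (below, a hypothetical stand-in for the log of Eq. 7's weight) are normalized and the particles are resampled multinomially, after which the surviving population carries uniform weights again.

```python
import numpy as np

def reweight_resample(particles, log_w, rng):
    """Normalize unnormalized log-weights and resample multinomially;
    after resampling the population carries uniform (zero log) weights."""
    w = np.exp(log_w - np.max(log_w))   # stabilized exponentiation
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.zeros(len(particles))

rng = np.random.default_rng(0)
particles = rng.standard_normal((8, 2))
# Hypothetical stand-in for the log of Eq. 7's weight: favor particles near 0.
log_w = -np.sum(particles**2, axis=1)
particles, log_w = reweight_resample(particles, log_w, rng)
print(particles.shape)  # (8, 2): the population size is preserved
```

In practice, resampling is usually triggered only when the effective sample size drops below a threshold, rather than at every step.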
+
+# 3.4 Limitations of Previous SMC-Based Reward Alignment Methods
+
+While prior work [12, 15, 14, 21, 22] has demonstrated the effectiveness of SMC in inference-time reward alignment, these approaches typically rely on sampling initial particles from the standard Gaussian prior. We argue that sampling particles directly from the posterior in Eq. 4, rather than the prior, is essential for better high-reward region coverage and efficiency in SMC. First, the effectiveness of the SMC proposal distribution in Eq. 8 diminishes over time, making it increasingly difficult to guide particles toward high-reward regions in later steps: since the reward signal $\nabla r(\mathbf{x}_{0|t})$ is scaled by $g^2(t)$ in the proposal, its influence weakens as the diffusion coefficient $g(t)^2 \to 0$ when $t \to 0$. Second, the initial position of particles becomes particularly critical when the reward function is highly non-convex and multi-modal. While the denoising process may, in principle, help particles escape local modes and explore better regions, this becomes increasingly difficult over time, not only due to the vanishing diffusion coefficient, but also because the intermediate distribution becomes less perturbed and more sharply concentrated, reducing connectivity between modes [63]. In contrast, at early time steps (e.g., $t = 1$ ), the posterior distribution is more diffuse and better connected across modes, enabling more effective exploration. Furthermore, recent score-based generative models distilled for trajectory straightening have made the approximation of the optimal initial distribution in Eq. 4 sufficiently precise. These observations jointly motivate allocating computational effort to obtaining high-quality initial particles that are better aligned with the reward signal.
+
+# 4 $\Psi$ -Sampler: pCNL-Based Initial Particle Sampling
+
+In this work, we propose $\Psi$ -Sampler, a framework that combines efficient initial particle sampling with SMC-based inference-time reward alignment for score-based generative models. The initial particles are sampled using the Preconditioned Crank-Nicolson Langevin (pCNL) algorithm, hence the name PCNL-based initial particle sampling followed by SMC-based Inference-time reward alignment. The key idea is to allocate computational effort to the initial particle selection by sampling directly from the posterior distribution defined in Eq. 4. This reward-informed initialization ensures that the particle set is better aligned with the target distribution from the outset, resulting in improved sampling efficiency and estimation accuracy in the subsequent SMC process.
+
+While the unnormalized density of the posterior distribution in Eq. 4 has an analytical form, drawing exact samples from it remains challenging. A practical workaround is to approximate posterior sampling via a Top-K-of-N strategy: generate $N$ samples from the prior, and retain the top $K$ highest-scoring samples as initial particles. This variant of Best-of- $N$ [64-66] resembles rejection sampling and serves as a crude approximation to posterior sampling [67, 68]. We find that even this simple selection-based approximation leads to meaningful improvements. However, given that the sampling space is high-dimensional, one can instead adopt Markov Chain Monte Carlo (MCMC) [30-35] methods, which are known to be effective for sampling in high-dimensional spaces. In what follows, we briefly introduce the Langevin-based MCMC algorithms that we adopt for posterior sampling.
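
The Top-K-of-N baseline described above amounts to a few lines in practice; the Gaussian prior matches the setting here, while the reward function below is a hypothetical stand-in for a learned reward model.

```python
import numpy as np

def top_k_of_n(reward, n, k, dim, rng):
    """Draw N prior samples from N(0, I) and keep the K highest-reward ones."""
    xs = rng.standard_normal((n, dim))
    scores = np.array([reward(x) for x in xs])
    return xs[np.argsort(scores)[-k:]]   # top-K by reward score

rng = np.random.default_rng(0)
reward = lambda x: -np.sum(x**2)   # hypothetical reward for illustration
init_particles = top_k_of_n(reward, n=64, k=8, dim=4, rng=rng)
print(init_particles.shape)  # (8, 4): K initial particles for the SMC stage
```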
+
+# 4.1 Background: Langevin-Based Markov Chain Monte Carlo Methods
+
+Langevin-based MCMC refers to a class of samplers that generate proposals by discretizing the Langevin dynamics, represented as stochastic differential equation (SDE),
+
+$$
+\mathrm {d} \mathbf {x} = \frac {1}{2} \nabla \log p _ {\operatorname {t a r}} (\mathbf {x}) \mathrm {d} t + \mathrm {d} \mathbf {W}, \tag {9}
+$$
+
+whose stationary distribution is the target density $p_{\mathrm{tar}}$ . A single Euler-Maruyama discretization of the Langevin dynamics with step size $\epsilon > 0$ produces the proposal
+
+$$
+\mathbf {x} ^ {\prime} = \mathbf {x} + \frac {\epsilon}{2} \nabla \log p _ {\operatorname {t a r}} (\mathbf {x}) + \sqrt {\epsilon} \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}). \tag {10}
+$$
+
+Accepting every proposal yields the Unadjusted Langevin Algorithm (ULA) [31]. Because ULA includes no correction mechanism, the induced Markov chain converges to a biased distribution, with the discrepancy arising from the discretization error; in particular, convergence to the target distribution $p_{\mathrm{tar}}$ is not guaranteed. The Metropolis-Adjusted Langevin Algorithm (MALA) [31, 32] combines the Langevin proposal in Eq. 10 with the Metropolis-Hastings (MH) [69, 30] correction, a general accept-reject mechanism that eliminates discretization bias. Given the current state $\mathbf{x}$ and a proposal $\mathbf{x}' \sim q(\mathbf{x}'|\mathbf{x})$ , the move is accepted with probability:
+
+$$
+a _ {\mathbf {M}} \left(\mathbf {x}, \mathbf {x} ^ {\prime}\right) = \min \left(1, \frac {p _ {\operatorname {t a r}} \left(\mathbf {x} ^ {\prime}\right) q \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)}{p _ {\operatorname {t a r}} (\mathbf {x}) q \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)}\right). \tag {11}
+$$
+
+This rule enforces detailed balance, so $p_{\mathrm{tar}}$ is an invariant distribution of the resulting Markov chain. While MALA is commonly used in practice due to its simplicity and gradient-based efficiency, it becomes increasingly inefficient in extremely high-dimensional settings, as is typical in image generative models (e.g., $d = 65{,}536$ for FLUX [28]). With a fixed step size, its acceptance probability degenerates as $d \to \infty$. To maintain a reasonable acceptance rate, the step size must therefore shrink with dimension, typically at the optimal rate of $O(d^{-1/3})$ [70], which leads to very slow mixing and inefficient exploration in such high-dimensional spaces.
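As a concrete illustration of Eqs. 10 and 11, a minimal MALA chain on a toy standard-Gaussian target (where $\nabla \log p_{\mathrm{tar}}(\mathbf{x}) = -\mathbf{x}$) might look as follows. This is our own toy sketch, not the paper's sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard Gaussian, so grad log p_tar(x) = -x.
def log_p(x):
    return -0.5 * float(np.dot(x, x))

def grad_log_p(x):
    return -x

def log_q(x_to, x_from, eps):
    # Log density of the Langevin proposal (Eq. 10), up to an additive
    # constant that cancels in the MH ratio (Eq. 11): a Gaussian with mean
    # x_from + (eps / 2) * grad log p(x_from) and covariance eps * I.
    mean = x_from + 0.5 * eps * grad_log_p(x_from)
    return -float(np.dot(x_to - mean, x_to - mean)) / (2.0 * eps)

def mala_step(x, eps):
    # Langevin proposal (Eq. 10) followed by the MH correction (Eq. 11).
    x_prop = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
    log_a = (log_p(x_prop) + log_q(x, x_prop, eps)
             - log_p(x) - log_q(x_prop, x, eps))
    accept = np.log(rng.uniform()) < log_a
    return (x_prop, True) if accept else (x, False)

# Run a short chain in d = 1000 dimensions with a fixed step size.
x = np.zeros(1000)
accepted = 0
for _ in range(200):
    x, ok = mala_step(x, eps=0.1)
    accepted += ok

rate = accepted / 200  # empirical acceptance rate of the chain
```

Increasing `d` while holding `eps` fixed drives the empirical acceptance rate down, which is the dimension dependence discussed above.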
+
+# 4.2 Preconditioned Crank-Nicolson Langevin (pCNL) Algorithm
+
+To address high-dimensional sampling challenges (Sec. 4.1), infinite-dimensional MCMC methods [36-38] were developed, particularly for PDE-constrained Bayesian inverse problems. These methods remain well-posed even when dimensionality increases. Among them, the preconditioned Crank-Nicolson (pCN) algorithm offers a simple, dimension-robust alternative to Random Walk Metropolis (RWM), though it fails to leverage the potential function, limiting its efficiency.
+
+To overcome this limitation, the preconditioned Crank-Nicolson Langevin (pCNL) algorithm has been proposed [36, 37], which augments the dimension-robustness of pCN with the gradient-informed dynamics of Langevin methods (Eq. 9), thereby improving sampling efficiency in high-dimensional settings. The pCNL algorithm employs a semi-implicit Euler (Crank-Nicolson-type) discretization of Langevin dynamics as follows:
+
+$$
+\mathbf{x}' = \mathbf{x} + \frac{\epsilon}{2}\left(-\frac{\mathbf{x} + \mathbf{x}'}{2} + \nabla \frac{r(\mathbf{x}_{0|1})}{\alpha}\right) + \sqrt{\epsilon}\,\mathbf{z}, \quad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \tag{12}
+$$
+
+Here the prior is assumed to be $\mathcal{N}(\mathbf{0},\mathbf{I})$, as in our case. This Crank-Nicolson update admits an explicit closed-form solution, and hence retains the dimension-robustness of pCN, only when the drift induced by the prior is linear, as it is for a standard Gaussian prior. In our setting it is therefore applicable only at $t = 1$, which aligns exactly with our proposal to sample initial particles from the posterior distribution in Eq. 4, where the prior is the standard Gaussian. With $\rho = (1 - \epsilon/4)/(1 + \epsilon/4)$, the update above can be rewritten as:
+
+$$
+\mathbf{x}' = \rho\mathbf{x} + \sqrt{1 - \rho^{2}}\left(\mathbf{z} + \frac{\sqrt{\epsilon}}{2} \nabla \frac{r(\mathbf{x}_{0|1})}{\alpha}\right), \quad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \tag{13}
+$$
+
+Note that pCNL also adopts the MH correction in Eq. 11 to guarantee convergence to the correct target distribution. The pCN algorithm maintains a well-defined, non-zero acceptance probability even in the infinite-dimensional limit, allowing the use of fixed step sizes regardless of the dimension $d$ [36, 37]. This property stems from its prior-preserving proposal, which leaves the Gaussian reference measure invariant under the proposal mechanism. This robustness carries over to pCNL, whose proposal inherits pCN's ability to handle Gaussian priors in a dimension-independent manner. We include the detailed acceptance probability formulas for MALA and pCNL in Appendix C.
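For concreteness, the pCNL proposal in Eq. 13 can be written as a short update rule. The sketch below is our own illustration under a $\mathcal{N}(\mathbf{0}, \mathbf{I})$ prior, with the MH correction omitted for brevity; `grad_reward_over_alpha` is a hypothetical stand-in for $\nabla \frac{r(\mathbf{x}_{0|1})}{\alpha}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def pcnl_proposal(x, grad_reward_over_alpha, eps):
    """pCNL proposal of Eq. 13 under a N(0, I) prior:
    x' = rho * x + sqrt(1 - rho^2) * (z + (sqrt(eps) / 2) * grad),
    with rho = (1 - eps/4) / (1 + eps/4)."""
    rho = (1.0 - eps / 4.0) / (1.0 + eps / 4.0)
    z = rng.standard_normal(x.shape)
    drift = 0.5 * np.sqrt(eps) * grad_reward_over_alpha(x)
    return rho * x + np.sqrt(1.0 - rho ** 2) * (z + drift)

# With a zero reward gradient the proposal leaves the N(0, I) prior
# invariant: if x ~ N(0, I), then rho * x + sqrt(1 - rho^2) * z ~ N(0, I).
x = rng.standard_normal(10_000)
x_new = pcnl_proposal(x, lambda v: np.zeros_like(v), eps=0.5)
```

The zero-drift case makes the prior-preserving property visible empirically: the sample variance of `x_new` stays near 1 regardless of dimension, which is the mechanism behind pCN's dimension-robust acceptance.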
+
+
+Figure 1: Toy sampling-method comparison. Each panel visualizes both the initial samples (blue) and their corresponding clean data samples (red). From left to right: (A) samples from the original score-based generative model; (B) the target distribution defined by Eq. 3; (C) results from SMC; (D) results from MALA+SMC; and (E) results from our proposed $\Psi$-Sampler.
+
+# 4.3 Initial Particle Sampling
+
+To sample initial particles using MCMC for the subsequent SMC process, we follow standard practices to ensure effective mixing and reduce sample autocorrelation. Specifically, we discard the initial portion of each chain as burn-in [71] and apply thinning by subsampling at fixed intervals to mitigate high correlation between successive samples. A constant step size is used across iterations. Although adaptive step size schemes may improve convergence, we opt for a fixed-step approach for simplicity. Once the initial particles are sampled, we apply the existing SMC-based method [14, 8].
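The burn-in and thinning step described above amounts to simple index slicing over the stored chain. The helper and the numbers below are illustrative, not the paper's settings:

```python
import numpy as np

def select_particles(chain, burn_in, thin):
    """Standard MCMC post-processing: drop the first `burn_in` states,
    then keep every `thin`-th remaining state to reduce autocorrelation
    between successive samples."""
    return chain[burn_in::thin]

# Stand-in chain of 300 one-dimensional MCMC states.
chain = np.arange(300).reshape(300, 1)
particles = select_particles(chain, burn_in=100, thin=10)
```

Here 300 stored states yield 20 initial particles, which are then handed to the SMC stage.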
+
+Comparison of SMC Initialization in a Toy Experiment. In Fig. 1, we present a 2D toy experiment comparing SMC performance when particles are initialized from the prior versus the posterior. We train a simple few-step score-based generative model on a synthetic dataset whose clean data distribution $p_0$ is a 6-mode Gaussian Mixture Model (GMM), shown as red dots in Fig. 1 (A). The prior distribution is shown in blue, and the gray lines depict sampling trajectories during generation. We define a reward function that assigns high scores to samples from only a subset of the GMM modes, yielding a target distribution at $t = 0$ (Eq. 3), illustrated as red dots in Fig. 1 (B). The corresponding optimal initial distribution, the posterior at $t = 1$ (Eq. 4), is shown as blue dots in Fig. 1 (B). We compare (C) standard SMC with prior-sampled particles, (D) SMC with posterior samples from MALA, and (E) our $\Psi$-Sampler. All settings use the same total number of function evaluations (NFE): prior-based SMC (C) uses 100 NFE, while MALA+SMC and $\Psi$-Sampler allocate 50 NFE to MCMC and use fewer particles for SMC. While MALA-based initialization (D) aligns significantly better with the target distribution (red dots in (B)) than the prior-based method (C), some modes remain underrepresented. In contrast, $\Psi$-Sampler (E) aligns tightly with both the target distribution and the posterior, illustrating its effectiveness at drawing high-quality samples. Full experimental details are provided in Appendix E.
+
+# 5 Experiments
+
+# 5.1 Experiment Setup
+
+We validate our approach across three applications: layout-to-image generation, quantity-aware generation, and aesthetic-preference image generation. In our experiments, the held-out reward refers to an evaluation metric that is not accessible during generation and is used solely to assess the generalization of the method. Full details for each application are provided in Appendix D.
+
+For the layout-to-image generation task, where the goal is to place user-specified objects within designated bounding boxes, we use predicted bounding box information from a detection model [72] and define the reward as the mean Intersection-over-Union (mIoU) between the predicted and target bounding boxes. For the quantity-aware image generation task, which involves generating a user-specified object in a specified quantity, we use the predicted count from a counting model [73] and define the reward as the negative smooth L1 loss between the predicted and target counts. In both tasks, we include evaluations using held-out reward models to assess generalization. Specifically, for layout-to-image generation, we report mIoU evaluated with a different detection model [74] (held-out reward model); for quantity-aware image generation, we report mean absolute error (MAE) and counting accuracy using an alternative counting model [75] (held-out reward model). For aesthetic-preference image generation task, which aims to produce visually appealing images, we use an aesthetic score prediction model [76] as the reward model and use its predicted score as the reward. Across all applications, we further evaluate the generated images using ImageReward [77]
+
+| Tasks | Metrics | DPS [39] | FreeDoM [40] | TDS [14] | DAS [12] | Top-$K$-of-$N$ | ULA | MALA | $\Psi$-SAMPLER |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Layout-to-Image | GroundingDINO† [72] ↑ | 0.166 | 0.177 | 0.417 | 0.363 | 0.425 | 0.370 | 0.401 | 0.467 |
+| | mIoU [74] ↑ | 0.215 | 0.229 | 0.402 | 0.342 | 0.427 | 0.374 | 0.401 | 0.471 |
+| | ImageReward [77] ↑ | 0.705 | 0.713 | 0.962 | 0.938 | 0.957 | 0.838 | 0.965 | 1.035 |
+| | VQA [78] ↑ | 0.684 | 0.650 | 0.794 | 0.784 | 0.855 | 0.783 | 0.789 | 0.810 |
+| Quantity-Aware | T2I-Count† [73] ↓ | 14.187 | 15.214 | 1.804 | 1.151 | 1.077 | 3.035 | 1.601 | 0.850 |
+| | MAE [75] ↓ | 15.7 | 15.675 | 5.3 | 4.175 | 3.675 | 4.825 | 3.575 | 2.925 |
+| | Acc (%) [75] ↑ | 0.0 | 0.0 | 27.5 | 15.0 | 12.5 | 22.5 | 25.0 | 32.5 |
+| | ImageReward [77] ↑ | 0.746 | 0.665 | 0.656 | 0.507 | 0.752 | 0.743 | 0.742 | 0.796 |
+| | VQA [78] ↑ | 0.957 | 0.953 | 0.943 | 0.907 | 0.960 | 0.943 | 0.941 | 0.951 |
+| Aesthetic Preference | Aesthetic† [76] ↑ | 6.139 | 6.310 | 6.853 | 6.935 | 6.879 | 6.869 | 6.909 | 7.012 |
+| | ImageReward [77] ↑ | 1.116 | 1.132 | 1.135 | 1.166 | 1.133 | 1.100 | 1.155 | 1.171 |
+| | VQA [78] ↑ | 0.968 | 0.959 | 0.970 | 0.970 | 0.961 | 0.961 | 0.952 | 0.963 |
+
+DPS and FreeDoM are single-particle methods; TDS and DAS are SMC-based methods with particles sampled from the prior; Top-$K$-of-$N$, ULA, MALA, and $\Psi$-SAMPLER are SMC-based methods with particles sampled from the posterior.
+
+Table 1: Quantitative comparison of $\Psi$-SAMPLER and baselines across three task domains. Bold indicates the best performance, while underline denotes the second-best result for each metric. Metrics marked with $\dagger$ are used as the seen reward during reward-guided sampling, while the others are held-out rewards. Higher values indicate better performance ( $\uparrow$ ), unless otherwise noted ( $\downarrow$ ).
+
+and VQAScore [78], which assess overall image quality and text-image alignment. The baselines and our methods are categorized into three groups:
+
+- Single-Particle: DPS [39] and FreeDoM [40] are methods not based on SMC but instead use a single particle trajectory and perform gradient ascent. They are limited in scaling up the search space due to the use of a single particle.
+- SMC & Initial Particles from Prior: TDS [14] is the SMC-based method that we take as the base for our methods. DAS [12] is a variant that introduces a tempering strategy.
+- SMC & Initial Particles from Posterior: We evaluate four posterior-based initialization strategies: Top-$K$-of-$N$, ULA, MALA, and $\Psi$-SAMPLER. ULA and MALA use a small step size (0.05) to ensure non-zero acceptance, while $\Psi$-SAMPLER employs a larger step size (0.5) for improved performance (see Sec. 5.4).
+
+We use 25 denoising steps for SMC-based methods and 50 for single-particle methods to compensate for their limited exploration. For SMC-based methods, we match the total number of function evaluations (NFE) across all methods, allocating half of the budget to initial particle sampling for posterior-based methods. We use FLUX [28] as the pretrained score-based generative model. Full experimental details are provided in Appendix D.
+
+# 5.2 Quantitative Results
+
+We present quantitative results in Tab. 1. Across all tasks, $\Psi$-SAMPLER consistently achieves the best performance on the given reward and strong generalization to held-out rewards. Among SMC-based methods, sampling particles from the posterior distribution yields significant improvements over sampling directly from the prior, highlighting the importance of posterior-informed initialization. This improvement is particularly notable in complex tasks where high-reward outputs are rare, such as layout-to-image generation and quantity-aware generation. For example, in quantity-aware generation, the smooth L1 counting error decreases from 1.804 with TDS (the base SMC method) to 1.077 with Top-$K$-of-$N$, and further to 0.850 with our $\Psi$-SAMPLER. Similarly, for layout-to-image generation, mIoU increases from 0.417 (TDS) to 0.425 with Top-$K$-of-$N$ and 0.467 with $\Psi$-SAMPLER. In contrast, initializing with ULA or MALA yields only marginal gains or even degraded performance, due to the lack of the Metropolis-Hastings correction in ULA and the limited exploration capacity of MALA in high-dimensional spaces. Single-particle methods consistently underperform SMC-based methods.
+
+Ablation Study. We conduct an ablation study that examines how performance varies under different allocations of a fixed total NFE between the initial particle sampling stage (via Top-$K$-of-$N$ or MCMC) and the subsequent SMC stage; full results and analysis are provided in Appendix G.
+
+Additional Results Conducted with Other Score-Based Generative Models. We additionally provide quantitative and qualitative results on all three applications using another score-based generative model, SANA-Sprint [29], in Appendix H.
+
+
+Figure 2: Qualitative results for each application demonstrate that $\Psi$-SAMPLER consistently generates images aligned with the given conditions. Detailed analysis of each case is provided in Sec. 5.3.
+
+
+Figure 3: Performance comparison of MALA and pCNL across different evaluation metrics with varying step sizes. Conducted on layout-to-image generation application.
+
+
+# 5.3 Qualitative Results
+
+We additionally present qualitative results for each application in Fig. 2. For the layout-to-image generation task, we display the input bounding box locations alongside the corresponding phrases from the text prompt, using matching colors for each phrase and its associated bounding box. In the quantity-aware image generation task, we overlay the predicted object centroids, obtained from a held-out counting model [75], to facilitate visual comparison. Below each image, we display the predicted count along with the absolute difference from the target quantity, formatted as $(\Delta \cdot)$. The best-performing case is highlighted in blue. For the aesthetic preference task, we display the generated images alongside their predicted aesthetic scores. The first row corresponds to the prompt "Tiger", and the second to "Rabbit". As shown, $\Psi$-SAMPLER produces high-quality results across all applications, matching the trends observed in the quantitative evaluations.
+
+From the first and second rows of Fig. 2, we observe that baseline methods often fail to place objects correctly within the specified bounding boxes or generate them in entirely wrong locations. For instance, in the first row, most baselines fail to position the bird accurately, and in the second row, none correctly place the car. For quantity-aware generation, the fourth row shows the counted results corresponding to the third row. While $\Psi$-SAMPLER successfully generates the target number of blueberries, the baselines exhibit large errors; Top-$K$-of-$N$ comes closest but still misses some. In rows 5 and 6, only $\Psi$-SAMPLER correctly generates the target number of coins. In the aesthetic preference task, although all methods produce realistic images, $\Psi$-SAMPLER generates the most visually appealing image with the highest aesthetic score. Additional qualitative examples are provided in Appendix I.
+
+# 5.4 Evaluation of Initial Particles
+
+In Fig. 3, we compare MALA and pCNL on the layout-to-image generation task across varying step sizes using four metrics: acceptance probability, reward (mIoU via GroundingDINO [72]), held-out reward (Salience DETR [74]), and sample diversity (LPIPS MPD [79]). All metrics are computed directly from the Tweedie estimates [18] of the MCMC samples, before the SMC stage. As the step size increases, MALA's acceptance probability rapidly drops to near zero, while pCNL maintains a stable acceptance probability. Larger step sizes generally improve reward scores, with performance tapering off at excessively large steps. Held-out reward trends mirror this pattern, suggesting that the improvements stem from genuinely higher-quality samples rather than reward overfitting [60, 20]. Although LPIPS MPD slightly declines with increasing step size due to reduced acceptance, pCNL at step size 2.0 maintains diversity on par with MALA at 0.05. Additional results for other tasks are included in Appendix F.
+
+# 6 Conclusion and Limitation
+
+We present a novel approach for inference-time reward alignment in score-based generative models by initializing SMC particles from the reward-aware posterior distribution. To address the challenge of high-dimensional sampling, we leverage the preconditioned Crank-Nicolson Langevin (pCNL) algorithm. Our method consistently outperforms existing baselines across tasks and reward models, demonstrating the effectiveness of posterior-guided initialization in enhancing sample quality under fixed compute budgets.
+
+Limitations and Societal Impact. A limitation of our approach is that it assumes access to differentiable reward models and depends on accurate Tweedie approximations at early denoising steps. Also, while our method improves fine-grained control in generative modeling, it may also be misused to produce misleading or harmful content, such as hyper-realistic fake imagery. These risks highlight the importance of responsible development and deployment practices, including transparency, content verification, and appropriate use guidelines.
+
+# 7 Acknowledgements
+
+This work was supported by the NRF of Korea (RS-2023-00209723); IITP grants (RS-2019-II190075, RS-2022-II220594, RS-2023-00227592, RS-2024-00399817, RS-2025-25441313, RS-2025-25443318, RS-2025-02653113); and the Technology Innovation Program (RS-2025-02317326), all funded by the Korean government (MSIT and MOTIE), as well as by the DRB-KAIST SketchThe-Future Research Center.
+
+# References
+
+[1] OpenAI. Introducing OpenAI o3 and o4-mini. https://openai.com/index/introducing-o3-and-o4-mini/, 2025.
+[2] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
+[3] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
+[4] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2020.
+[5] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020.
+[6] Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023.
+[7] Nanye Ma, Mark Goldstein, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, and Saining Xie. SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv preprint arXiv:2401.08740, 2024.
+[8] Masatoshi Uehara, Yulai Zhao, Chenyu Wang, Xiner Li, Aviv Regev, Sergey Levine, and Tommaso Biancalani. Inference-time alignment in diffusion models with reward-guided generation: Tutorial and review. arXiv preprint arXiv:2501.09685, 2025.
+[9] Arnaud Doucet, Nando De Freitas, Neil James Gordon, et al. Sequential Monte Carlo methods in practice, volume 1. Springer, 2001.
+[10] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential monte carlo samplers. Journal of the Royal Statistical Society Series B: Statistical Methodology, 68(3):411-436, 2006.
+[11] Nicolas Chopin, Omiros Papaspiliopoulos, et al. An introduction to sequential Monte Carlo, volume 4. Springer, 2020.
+[12] Sunwoo Kim, Minkyu Kim, and Dongmin Park. Test-time alignment of diffusion models without reward over-optimization. In ICLR, 2025.
+[13] Jaihoon Kim, Taehoon Yoon, Jisung Hwang, and Minhyuk Sung. Inference-time scaling for flow models via stochastic generation and rollover budget forcing. arXiv preprint arXiv:2503.19385, 2025.
+[14] Luhuan Wu, Brian L. Trippe, Christian A Naesseth, John Patrick Cunningham, and David Blei. Practical and asymptotically exact conditional sampling in diffusion models. In NeurIPS, 2023.
+[15] Zehao Dou and Yang Song. Diffusion posterior sampling for linear inverse problem solving: A filtering perspective. In ICLR, 2024.
+[16] Anuj Singh, Sayak Mukherjee, Ahmad Beirami, and Hadi J. Rad. CoDe: Blockwise control for denoising diffusion models. Transactions on Machine Learning Research, 2025.
+[17] Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, and Masatoshi Uehara. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint arXiv:2408.08252, 2024.
+[18] Bradley Efron. Tweedie's formula and selection bias. Journal of the American Statistical Association, 106(496):1602-1614, 2011.
+
+[19] Masatoshi Uehara, Yulai Zhao, Tommaso Biancalani, and Sergey Levine. Understanding reinforcement learning-based fine-tuning of diffusion models: A tutorial and review. arXiv preprint arXiv:2407.13734, 2024.
+[20] Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, and Sergey Levine. Fine-tuning of continuous-time diffusion models as entropy-regularized control. arXiv preprint arXiv:2402.15194, 2024.
+[21] Brian L. Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, and Tommi S. Jaakkola. Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem. In ICLR, 2023.
+[22] Gabriel Cardoso, Yazid Janati el idrissi, Sylvain Le Corff, and Eric Moulines. Monte Carlo guided denoising diffusion models for Bayesian linear inverse problems. In ICLR, 2024.
+[23] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency Models. In ICML, 2023.
+[24] Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency Trajectory Models: Learning probability flow ODE trajectory of diffusion. In ICLR, 2024.
+[25] Cheng Lu and Yang Song. Simplifying, stabilizing and scaling continuous-time consistency models. In ICLR, 2025.
+[26] Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rombach. Fast high-resolution image synthesis with latent adversarial diffusion distillation. arXiv preprint arXiv:2403.12015, 2024.
+[27] Xingchao Liu, Chengyue Gong, and qiang liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In ICLR, 2023.
+[28] Black Forest Labs. FLUX. https://github.com/black-forest-labs/flux, 2024.
+[29] Junsong Chen, Shuchen Xue, Yuyang Zhao, Jincheng Yu, Sayak Paul, Junyu Chen, Han Cai, Enze Xie, and Song Han. SANA-Sprint: One-step diffusion with continuous-time consistency distillation. arXiv preprint arXiv:2503.09641, 2025.
+[30] W. K. Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97-109, 1970.
+[31] Gareth O. Roberts and Richard L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341 - 363, 1996.
+[32] Gareth O. Roberts and Osnat Stramer. Langevin diffusions and metropolis-hastings algorithms. Methodology and computing in applied probability, 4:337-357, 2002.
+[33] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid monte carlo. Physics letters B, 195(2):216-222, 1987.
+[34] Mark Girolami and Ben Calderhead. Riemann manifold langevin and hamiltonian monte carlo methods. Journal of the Royal Statistical Society Series B: Statistical Methodology, 73(2):123-214, 2011.
+[35] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, May 2011.
+[36] S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC methods for functions: Modifying old algorithms to make them faster. Statistical Science, 28(3), August 2013.
+[37] Alexandros Beskos, Mark Girolami, Shiwei Lan, Patrick E. Farrell, and Andrew M. Stuart. Geometric MCMC for infinite-dimensional inverse problems. Journal of Computational Physics, 335:327-351, April 2017.
+[38] Alexandros Beskos, Gareth Roberts, Andrew Stuart, and Jochen Voss. MCMC methods for diffusion bridges. Stochastics and Dynamics, 08(03):319-350, 2008.
+[39] Hyungjin Chung, Jeongsol Kim, Michael Thompson McCann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In ICLR, 2023.
+[40] Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, and Jian Zhang. FreeDoM: Training-free energy-guided conditional diffusion model. In ICCV, 2023.
+
+[41] Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Roni Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In ICLR, 2024.
+[42] Haotian Ye, Haowei Lin, Jiaqi Han, Minkai Xu, Sheng Liu, Yitao Liang, Jianzhu Ma, James Zou, and Stefano Ermon. TFG: Unified training-free guidance for diffusion models. In NeurIPS, 2024.
+[43] Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.
+[44] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023.
+[45] Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. In ICLR, 2024.
+[46] Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Reinforcement learning for fine-tuning text-to-image diffusion models. In NeurIPS, 2023.
+[47] Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In CVPR, 2024.
+[48] Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In CVPR, 2024.
+[49] Kevin Clark, Paul Vicol, Kevin Swersky, and David J. Fleet. Directly fine-tuning diffusion models on differentiable rewards. In ICLR, 2024.
+[50] Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation. arXiv preprint arXiv:2310.03739, 2024.
+[51] Wenpin Tang. Fine-tuning of diffusion models via stochastic control: entropy regularization and beyond. arXiv preprint arXiv:2403.06279, 2024.
+[52] Carles Domingo-Enrich, Michal Drozdal, Brian Karrer, and Ricky T. Q. Chen. Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. In ICLR, 2025.
+[53] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In ICLR, 2023.
+[54] Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky T. Q. Chen, David Lopez-Paz, Heli Ben-Hamu, and Itai Gat. Flow matching guide and code. arXiv preprint arXiv:2412.06264, 2024.
+[55] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In ICML, 2017.
+[56] Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized markov decision processes. In ICML, 2019.
+[57] John Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440, 2017.
+[58] Gergely Neu, Anders Jonsson, and Vicenc Gomez. A unified view of entropy-regularized markov decision processes. arXiv preprint arXiv:1705.07798, 2017.
+[59] Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
+[60] Masatoshi Uehara, Yulai Zhao, Ehsan Hajiramezanali, Gabriele Scalia, Gokcen Eraslan, Avantika Lal, Sergey Levine, and Tommaso Biancalani. Bridging model-based optimization and generative modeling via conservative fine-tuning of diffusion models. In NeurIPS, 2024.
+[61] Nicolas Chopin. Central limit theorem for sequential monte carlo methods and its application to bayesian inference. The Annals of Statistics, 32(6), December 2004.
+
+[62] Pierre Moral. Feynman-Kac formulae: genealogical and interacting particle systems with applications. Springer, 2004.
+[63] Beomsu Kim and Jong Chul Ye. Denoising MCMC for accelerating diffusion-based generative models. arXiv preprint arXiv:2209.14593, 2022.
+[64] Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In ICML, 2023.
+[65] Lin Gui, Cristina Garbacea, and Victor Veitch. BoNBon alignment for large language models and the sweetness of best-of-n sampling. In NeurIPS, 2024.
+[66] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2022.
+[67] Ahmad Beirami, Alekh Agarwal, Jonathan Berant, Alexander D'Amour, Jacob Eisenstein, Chirag Nagpal, and Ananda Theertha Suresh. Theoretical guarantees on the best-of-n alignment policy. arXiv preprint arXiv:2401.01879, 2024.
+[68] Joy Qiping Yang, Salman Salamatian, Ziteng Sun, Ananda Theertha Suresh, and Ahmad Beirami. Asymptotics of language model alignment. In 2024 IEEE International Symposium on Information Theory (ISIT), pages 2027-2032, 2024.
+[69] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087-1092, 1953.
+[70] Gareth O. Roberts and Jeffrey S. Rosenthal. Optimal scaling of discrete approximations to langevin diffusions. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 60(1):255-268, 1998.
+[71] Charles J. Geyer. Practical markov chain monte carlo. Statistical Science, 7(4):473-483, 1992.
+[72] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. In ECCV, 2024.
+[73] Yifei Qian, Zhongliang Guo, Bowen Deng, Chun Tong Lei, Shuai Zhao, Chun Pong Lau, Xiaopeng Hong, and Michael P Pound. T2ICount: Enhancing cross-modal understanding for zero-shot counting. In CVPR, 2025.
+[74] Xiuquan Hou, Meiqin Liu, Senlin Zhang, Ping Wei, and Badong Chen. Salience DETR: Enhancing detection transformer with hierarchical salience filtering refinement. In CVPR, 2024.
+[75] N. Amini-Naieni, T. Han, and A. Zisserman. CountGD: Multi-modal open-world counting. In NeurIPS, 2024.
+[76] Christoph Schuhmann. Laion aesthetic predictor. https://laion.ai/blog/laion-aesthetics/, 2022. Accessed: 2024-09-29.
+[77] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. ImageReward: learning and evaluating human preferences for text-to-image generation. In NeurIPS, 2023.
+[78] Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In ECCV, 2024.
+[79] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
+[80] Eslam Mohamed Bakr, Pengzhan Sun, Xiaoqian Shen, Faizan Farooq Khan, Li Erran Li, and Mohamed Elhoseiny. HRS-Bench: Holistic, reliable and scalable benchmark for text-to-image models. In ICCV, 2023.
+[81] OpenAI. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
+[82] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. GLIGEN: Open-set grounded text-to-image generation. In CVPR, 2023.
+[83] Jiayu Xiao, Liang Li, Henglei Lv, Shuhui Wang, and Qingming Huang. R&B: Region and boundary aware zero-shot grounded text-to-image generation. In ICLR, 2024.
+[84] Quynh Phung, Songwei Ge, and Jia-Bin Huang. Grounded text-to-image synthesis with attention refocusing. arXiv preprint arXiv:2306.05427, 2023.
+[85] Yuseung Lee, TaeHoon Yoon, and Minhyuk Sung. GrounDiT: Grounding diffusion transformers via noisy patch transplantation. In NeurIPS, 2024.
+[86] Wonjun Kang, Kevin Galim, Hyung Il Koo, and Nam Ik Cho. Counting guidance for high fidelity text-to-image synthesis. In WACV, 2025.
+[87] Lital Binyamin, Yoad Tewel, Hilit Segev, Eran Hirsch, Royi Rassin, and Gal Chechik. Make it count: Text-to-image generation with an accurate number of objects. arXiv preprint arXiv:2406.10210, 2024.
+[88] Oz Zafar, Lior Wolf, and Idan Schwartz. Iterative object count optimization for text-to-image diffusion models. arXiv preprint arXiv:2408.11721, 2024.
+
+# NeurIPS Paper Checklist
+
+# 1. Claims
+
+Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the abstract and introduction do not include the claims made in the paper.
+- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
+- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
+- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
+
+# 2. Limitations
+
+Question: Does the paper discuss the limitations of the work performed by the authors?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
+- The authors are encouraged to create a separate "Limitations" section in their paper.
+- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
+- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
+- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
+- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
+- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
+- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
+
+# 3. Theory assumptions and proofs
+
+Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
+
+Answer: [NA]
+
+Justification: The paper does not contain new theorems; instead, we provide toy-experiment results as well as the main experimental results to support our claims.
+
+Guidelines:
+
+- The answer NA means that the paper does not include theoretical results.
+- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
+- All assumptions should be clearly stated or referenced in the statement of any theorems.
+- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
+- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
+- Theorems and Lemmas that the proof relies upon should be properly referenced.
+
+# 4. Experimental result reproducibility
+
+Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+
+- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
+(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+# 5. Open access to data and code
+
+Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+Answer: [No]
+
+Justification: Our datasets are based on a combination of existing resources, including a subset of the HRS-Spatial dataset [80], GPT-4o-generated samples [81], and the animal dataset from [45]. We plan to release both the code and the data in a future revision.
+
+Guidelines:
+
+- The answer NA means that paper does not include experiments requiring code.
+- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+
+- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+# 6. Experimental setting/details
+
+Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+- The full details can be provided either with the code, in appendix, or as supplemental material.
+
+# 7. Experiment statistical significance
+
+Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+Answer: [No]
+
+Justification: Due to limited time and computational resources, we were unable to conduct a full statistical significance analysis prior to submission. However, we plan to incorporate appropriate statistical evaluations, such as standard deviations and confidence intervals, as well as additional ablation studies in a future revision.
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+- The assumptions made should be given (e.g., Normally distributed errors).
+- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
+# 8. Experiments compute resources
+
+Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the paper does not include experiments.
+
+- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
+- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+# 9. Code of ethics
+
+Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+# 10. Broader impacts
+
+Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that there is no societal impact of the work performed.
+- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+# 11. Safeguards
+
+Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+Answer: [NA]
+
+Justification: Our work does not involve the release of any models or datasets that pose a high risk of misuse. The models used in our experiments are based on publicly available architectures, and our dataset construction process avoids the inclusion of any sensitive or harmful content.
+
+Guidelines:
+
+- The answer NA means that the paper poses no such risks.
+- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+# 12. Licenses for existing assets
+
+Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+Answer: [Yes]
+
+Justification:
+
+Guidelines:
+
+- The answer NA means that the paper does not use existing assets.
+- The authors should cite the original paper that produced the code package or dataset.
+- The authors should state which version of the asset is used and, if possible, include a URL.
+- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+- If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+# 13. New assets
+
+Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+Answer: [NA]
+
+Justification: Although our dataset construction includes a set of GPT-4o-generated samples, which may be considered a new asset, this portion has not been publicly released at the time of submission. As such, we do not consider any part of our submission to constitute a released new asset for the purposes of this checklist item. We plan to release all new data in a future revision.
+
+Guidelines:
+
+- The answer NA means that the paper does not release new assets.
+- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+- The paper should discuss whether and how consent was obtained from people whose asset is used.
+
+- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+# 14. Crowdsourcing and research with human subjects
+
+Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+Answer: [NA]
+
+Justification: This work does not involve any crowdsourcing tasks or experiments with human subjects.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+Answer: [NA]
+
+Justification: This research does not involve human participants and therefore does not require IRB or equivalent ethical review.
+
+Guidelines:
+
+- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+# 16. Declaration of LLM usage
+
+Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+Answer: [NA]
+
+Justification: We used LLMs such as GPT-4o [81] for auxiliary purposes including dataset generation, but they were not involved in the development of the core methodology or experiments.
+
+Guidelines:
+
+- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
+
+# Appendix
+
+# A Reward Alignment with Stochastic Optimal Control
+
+In the reward alignment task for continuous-time generative models [20, 52], which our method builds upon, Uehara et al. [20] introduce both an additional drift term $\psi$ (often referred to as a control vector field) and a modified initial distribution $\bar{p}_1$ . Then the goal is to find $\psi$ and $\bar{p}_1$ such that the resulting final distribution at time $t = 0$ matches the target distribution $p_0^*$ defined in Eq. 3. Accordingly, the original reverse-time SDE used for generation (Eq. 1) is replaced by a controlled SDE:
+
+$$
+\mathrm {d} \mathbf {x} _ {t} = \left(\mathbf {f} \left(\mathbf {x} _ {t}, t\right) - \psi \left(\mathbf {x} _ {t}, t\right)\right) \mathrm {d} t + g (t) \mathrm {d} \mathbf {W}, \quad \mathbf {x} _ {1} \sim \bar {p} _ {1}. \tag {14}
+$$
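+To make the controlled dynamics concrete, the sketch below discretizes Eq. 14 with a toy Euler-Maruyama scheme in NumPy. Here `f`, `psi`, `g`, and `p1_sampler` are hypothetical placeholders for the pretrained drift, the control vector field, the diffusion coefficient, and the modified initial distribution; the time convention assumes Eq. 14 is integrated from $t = 1$ down to $t = 0$, so each step uses a negative increment.
+
+```python
+import numpy as np
+
+def simulate_controlled_sde(f, psi, g, p1_sampler, n_steps=100, seed=0):
+    """Toy Euler-Maruyama discretization of the controlled SDE (Eq. 14),
+    dx_t = (f(x_t, t) - psi(x_t, t)) dt + g(t) dW, integrated from t = 1
+    down to t = 0 with negative time increments."""
+    rng = np.random.default_rng(seed)
+    dt = -1.0 / n_steps                      # reverse-time increment
+    x = p1_sampler(rng)                      # x_1 ~ modified initial distribution
+    for k in range(n_steps):
+        t = 1.0 + k * dt                     # t = 1, 1 - |dt|, ..., |dt|
+        drift = f(x, t) - psi(x, t)          # controlled drift of Eq. 14
+        x = x + drift * dt + g(t) * np.sqrt(-dt) * rng.standard_normal(x.shape)
+    return x                                 # approximate sample at t = 0
+```
+
+With $\psi \equiv 0$ and $\bar{p}_1 = p_1$, the same loop recovers the uncontrolled generation dynamics of Eq. 1.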
+
+This entropy-regularized stochastic optimal control framework adopts a pathwise optimization that integrates KL-divergence penalties over trajectories, and thus the optimization formulation of Eq. 2 changes accordingly:
+
+$$
+\psi^ {*}, p _ {1} ^ {*} = \underset {\psi , \bar {p} _ {1}} {\arg \max } \mathbb {E} _ {\mathbb {P} ^ {\psi , \bar {p} _ {1}}} [ r (\mathbf {x} _ {0}) ] - \alpha \mathcal {D} _ {\mathrm {K L}} \left[ \mathbb {P} ^ {\psi , \bar {p} _ {1}} \| \mathbb {P} ^ {\text {d a t a}} \right], \tag {15}
+$$
+
+where $\mathbb{P}^{\psi, \bar{p}_1}$ is a measure over trajectories induced by the controlled SDE in Eq. 14 and $\mathbb{P}^{\mathrm{data}}$ is a measure over trajectories induced by the pre-trained SDE in Eq. 1.
+
+The KL-divergence term in Eq. 15 can be expressed as the sum of the quadratic control cost $\frac{\|\psi(\mathbf{x}_t,t)\|^2}{g^2(t)}$ integrated over the trajectory and the log Radon-Nikodym derivative at $t = 1$, i.e., $\log \frac{\bar{p}_1(\mathbf{x}_1)}{p_1(\mathbf{x}_1)}$:
+
+$$
+\psi^{*}, p_{1}^{*} = \underset{\psi, \bar{p}_{1}}{\arg \max} \; \mathbb{E}_{\mathbb{P}^{\psi, \bar{p}_{1}}} \left[ r(\mathbf{x}_{0}) \right] - \alpha \, \mathbb{E}_{\mathbb{P}^{\psi, \bar{p}_{1}}} \left[ \frac{1}{2} \int_{t = 0}^{1} \frac{\| \psi(\mathbf{x}_{t}, t) \|^{2}}{g^{2}(t)} \, \mathrm{d}t + \log \frac{\bar{p}_{1}(\mathbf{x}_{1})}{p_{1}(\mathbf{x}_{1})} \right], \tag {16}
+$$
+
+which can be proved [20, 52] using the Girsanov theorem and the martingale property of the Itô integral.
+
+The optimal control $\psi^{*}$ and the optimal initial distribution $p_1^*$ can be derived by introducing the optimal value function, defined as:
+
+$$
+V _ {t} ^ {*} (\mathbf {x} _ {t}) = \max _ {\psi} \mathbb {E} _ {\mathbb {P} ^ {\psi}} \left[ r (\mathbf {x} _ {0}) - \frac {\alpha}{2} \int_ {s = 0} ^ {t} \frac {\| \psi (\mathbf {x} _ {s} , s) \| ^ {2}}{g ^ {2} (s)} d s \Big | \mathbf {x} _ {t} \right], \tag {17}
+$$
+
+where the expectation is taken over trajectories induced by the controlled SDE in Eq. 14, given the current state $\mathbf{x}_t$.
+
+From the optimal value function at $t = 1$, we can derive an explicit formulation of the optimal initial distribution $p_1^*$ in terms of $V_1^*(\mathbf{x}_1)$ by plugging the definition of the optimal value function at $t = 1$ (Eq. 17) into Eq. 16:
+
+$$
+p _ {1} ^ {*} = \underset {\bar {p} _ {1}} {\arg \max } \mathbb {E} _ {\bar {p} _ {1}} \left[ V _ {1} ^ {*} \left(\mathbf {x} _ {1}\right) \right] - \alpha \mathcal {D} _ {\mathrm {K L}} \left[ \bar {p} _ {1} \| p _ {1} \right]. \tag {18}
+$$
+
+Solving this yields the following closed-form expression for the optimal initial distribution (derivable via calculus of variations [13]), similarly to Eq. 3:
+
+$$
+p _ {1} ^ {*} (\mathbf {x} _ {1}) = \frac {1}{Z _ {1}} p _ {1} (\mathbf {x} _ {1}) \exp \left(\frac {V _ {1} ^ {*} (\mathbf {x} _ {1})}{\alpha}\right). \tag {19}
+$$
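+Because the normalizing constant $Z_1$ is intractable, one simple way to draw approximate samples from $p_1^*$ is self-normalized importance sampling with $p_1$ as the proposal. The sketch below is illustrative only: it assumes a standard Gaussian prior for $p_1$ and takes a hypothetical `value_fn` standing in for $V_1^*$.
+
+```python
+import numpy as np
+
+def sample_optimal_initial(value_fn, alpha, n_particles=1024, dim=2, seed=0):
+    """Self-normalized importance sampling sketch for Eq. 19: draw proposals
+    x_1 ~ p_1 (a standard Gaussian here), weight by exp(V_1(x_1)/alpha),
+    and resample proportionally to the normalized weights."""
+    rng = np.random.default_rng(seed)
+    x1 = rng.standard_normal((n_particles, dim))          # proposals from p_1
+    log_w = value_fn(x1) / alpha                          # log of exp(V_1 / alpha)
+    w = np.exp(log_w - log_w.max())                       # stabilize before exponentiating
+    w /= w.sum()                                          # self-normalization replaces 1/Z_1
+    idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
+    return x1[idx]
+```
+
+Resampling yields an unweighted particle set; keeping the weights and propagating them through the controlled dynamics instead recovers the SMC perspective of Appendix B.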
+
+The optimal control $\psi^{*}$ can be obtained from the Hamilton-Jacobi-Bellman (HJB) equation and is expressed in terms of the gradient of the optimal value function:
+
+$$
+\psi^ {*} (\mathbf {x} _ {t}, t) = g ^ {2} (t) \nabla \frac {V _ {t} ^ {*} (\mathbf {x} _ {t})}{\alpha}. \tag {20}
+$$
+
+Moreover, the optimal value function itself admits an interpretable closed-form expression via the Feynman-Kac formula:
+
+$$
+V _ {t} ^ {*} \left(\mathbf {x} _ {t}\right) = \alpha \log \mathbb {E} _ {\mathbb {P} ^ {\mathrm {d a t a}}} \left[ \exp \left(\frac {r \left(\mathbf {x} _ {0}\right)}{\alpha}\right) \mid \mathbf {x} _ {t} \right]. \tag {21}
+$$
+
+Importantly, Uehara et al. [20] further proved that the marginal distribution $p_t^*(\mathbf{x}_t)$ induced by the controlled SDE (with optimal control $\psi^*$ and optimal initial distribution $p_1^*$ ) is given by:
+
+$$
+p _ {t} ^ {*} (\mathbf {x} _ {t}) = \frac {1}{Z _ {t}} p _ {t} (\mathbf {x} _ {t}) \exp \left(\frac {V _ {t} ^ {*} (\mathbf {x} _ {t})}{\alpha}\right), \tag {22}
+$$
+
+where $p_t(\mathbf{x}_t)$ is the marginal distribution of the pretrained score-based generative model at time $t$ . Similarly, the optimal transition kernel under the controlled dynamics is:
+
+$$
+p _ {\theta} ^ {*} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right) = \frac {\exp \left(V _ {t - \Delta t} ^ {*} \left(\mathbf {x} _ {t - \Delta t}\right) / \alpha\right)}{\exp \left(V _ {t} ^ {*} \left(\mathbf {x} _ {t}\right) / \alpha\right)} p _ {\theta} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right), \tag {23}
+$$
+
+where $p_{\theta}(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ denotes the transition kernel of the pretrained model, i.e., the kernel corresponding to the discretization of the reverse-time SDE defined in Eq. 1. For a detailed derivation, see Theorem 1 and Lemma 3 in Uehara et al. [20]. Notably, Eq. 22 implies that by following the controlled dynamics of Eq. 14, initialized with the optimal distribution $p_1^*$ and guided by the optimal control $\psi^*$, the resulting distribution at time $t = 0$ matches the target distribution $p_0^*$ defined in Eq. 3.
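+In a particle-based implementation, Eq. 23 suggests proposing $\mathbf{x}_{t-\Delta t}$ from the pretrained kernel $p_\theta$ and correcting with the ratio of value-function exponentials as an incremental importance weight. A minimal sketch of that per-step reweighting, with hypothetical per-particle arrays `v_curr` for $V_t^*(\mathbf{x}_t)$ and `v_next` for $V_{t-\Delta t}^*(\mathbf{x}_{t-\Delta t})$:
+
+```python
+import numpy as np
+
+def reweight_particles(log_w, v_curr, v_next, alpha):
+    """One reweighting step for the tilted kernel of Eq. 23: proposals come
+    from the pretrained kernel p_theta, so each particle's incremental weight
+    is exp(V_{t-dt}(x_{t-dt}) / alpha) / exp(V_t(x_t) / alpha)."""
+    log_w = np.asarray(log_w) + (np.asarray(v_next) - np.asarray(v_curr)) / alpha
+    w = np.exp(log_w - log_w.max())          # stabilized, unnormalized weights
+    return log_w, w / w.sum()                # updated log-weights and normalized weights
+```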
+
+Note that the optimal control, optimal initial distribution, and optimal transition kernel are all expressed in terms of the optimal value function. However, despite their interpretable forms, these expressions are not directly computable in practice due to the intractability of the posterior $p(\mathbf{x}_0|\mathbf{x}_t)$ . This motivates the use of approximation techniques, most notably Tweedie's formula [18], which is widely adopted in the literature [39, 42, 14, 12] to make such expressions tractable. Under this approximation, the posterior is approximated by a Dirac-delta distribution centered at the posterior mean denoted by $\mathbf{x}_{0|t} \coloneqq \mathbb{E}_{\mathbf{x}_0 \sim p_{0|t}}[\mathbf{x}_0]$ , representing the conditional expectation under $p_{0|t} \coloneqq p(\mathbf{x}_0|\mathbf{x}_t)$ . Consequently, the optimal value function simplifies to:
+
+$$
+V _ {t} ^ {*} (\mathbf {x} _ {t}) = \alpha \log \int \exp \left(\frac {r (\mathbf {x} _ {0})}{\alpha}\right) p (\mathbf {x} _ {0} | \mathbf {x} _ {t}) \mathrm {d} \mathbf {x} _ {0} \simeq \alpha \log \int \exp \left(\frac {r (\mathbf {x} _ {0})}{\alpha}\right) \delta (\mathbf {x} _ {0} - \mathbf {x} _ {0 | t}) \mathrm {d} \mathbf {x} _ {0} = r (\mathbf {x} _ {0 | t}), \tag {24}
+$$
+
+where $\mathbf{x}_{0|t}$ is a deterministic function of $\mathbf{x}_t$ . Using this approximation, we obtain the following approximations for the optimal initial distribution $\tilde{p}_1^*$ , the optimal control $\tilde{\psi}^*$ , and the optimal transition kernel $\tilde{p}_{\theta}^*$ , which are used throughout the paper:
+
+$$
+\tilde {p} _ {1} ^ {*} (\mathbf {x} _ {1}) := \frac {1}{Z _ {1}} p _ {1} (\mathbf {x} _ {1}) \exp \left(\frac {r (\mathbf {x} _ {0 | 1})}{\alpha}\right) \tag {25}
+$$
+
+$$
+\tilde {\psi} ^ {*} (\mathbf {x} _ {t}) = g ^ {2} (t) \nabla \frac {r \left(\mathbf {x} _ {0 \mid t}\right)}{\alpha} \tag {26}
+$$
+
+$$
+\tilde {p} _ {\theta} ^ {*} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right) = \frac {\exp \left(r \left(\mathbf {x} _ {0 \mid t - \Delta t}\right) / \alpha\right)}{\exp \left(r \left(\mathbf {x} _ {0 \mid t}\right) / \alpha\right)} p _ {\theta} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right). \tag {27}
+$$
+
+It is worth noting that sampling from the optimal initial distribution is essential to theoretically guarantee convergence to the target distribution Eq. 3. Simply following the optimal control alone does not suffice and can in fact bias away from the target, a phenomenon known as the value function bias problem [52]. To the best of our knowledge, this is the first work to explicitly address this problem in the context of inference-time reward-alignment with score-based generative models.
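The Tweedie approximation above can be summarized in a short sketch. This is purely illustrative and not the paper's implementation: `score` is a stand-in for the pretrained score network, and a variance-exploding perturbation $\mathbf{x}_t = \mathbf{x}_0 + \sigma_t \boldsymbol{\epsilon}$ is assumed so that Tweedie's formula takes the simple form below.

```python
def tweedie_mean(x_t, score, sigma_t):
    """Tweedie's formula under a variance-exploding perturbation
    x_t = x_0 + sigma_t * eps:  E[x_0 | x_t] = x_t + sigma_t^2 * score(x_t)."""
    return x_t + sigma_t ** 2 * score(x_t)

def approx_value(x_t, score, sigma_t, reward):
    """Approximated optimal value function V_t*(x_t) ~= r(x_{0|t}) (Eq. 24);
    alpha cancels once the posterior collapses to a Dirac delta."""
    return reward(tweedie_mean(x_t, score, sigma_t))
```

For a Gaussian sanity check, if $p_0 = \mathcal{N}(0,1)$ and $\sigma_t = 1$, the marginal is $\mathcal{N}(0,2)$ with score $-x/2$, and the posterior mean is $x_t/2$.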
+
+# B Sequential Monte Carlo and Reward-Guided Sampling
+
+Sequential Monte Carlo (SMC) methods [9-11], also known as particle filters, are a class of algorithms for sampling from sequences of probability distributions. Beginning with $K$ particles drawn independently from an initial distribution, SMC maintains a weighted particle population, $\{\mathbf{x}_t^{(i)}\}_{i = 1}^K$ , and iteratively updates it through propagation, reweighting, and resampling steps to approximate the target distribution. During propagation, particles are moved using a proposal distribution; in the reweighting step, their importance weights are adjusted to reflect the discrepancy between the target and proposal distributions; and resampling preferentially retains high-weight particles while eliminating low-weight ones (this is performed only conditionally, see below). The weights are updated over time according to the following rule:
+
+$$
+w _ {t - \Delta t} ^ {(i)} = \frac {p _ {\operatorname {t a r}} \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right)}{q \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right)} w _ {t} ^ {(i)} \tag {28}
+$$
+
+where $p_{\mathrm{tar}}$ is an intermediate target kernel we want to sample from, and $q(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ is a proposal kernel used during propagation.
+
+At each time $t$ , if the effective sample size (ESS), defined as $\left(\sum_{j=1}^{K} w_t^{(j)}\right)^2 / \sum_{i=1}^{K} \left(w_t^{(i)}\right)^2$ falls below a predefined threshold, resampling is performed. Specifically, a set of ancestor indices $\{a_t^{(i)}\}_{i=1}^K$ is drawn from a multinomial distribution based on the normalized weights. These indices are then used to form the resampled particle set $\{\mathbf{x}_t^{(a_t^{(i)})}\}_{i=1}^K$ . If resampling is not triggered, we simply set $a_t^{(i)} = i$ .
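The ESS-triggered resampling rule described above can be sketched as follows (hypothetical helper names; multinomial resampling via NumPy):

```python
import numpy as np

def effective_sample_size(w):
    """ESS = (sum_j w_j)^2 / sum_i w_i^2 for unnormalized weights w."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

def ancestor_indices(w, threshold, rng):
    """Multinomial resampling, triggered only when the ESS falls below the
    threshold; otherwise ancestors are the identity, a_t^{(i)} = i."""
    K = len(w)
    if effective_sample_size(w) < threshold:
        p = np.asarray(w, dtype=float) / np.sum(w)
        return rng.choice(K, size=K, p=p), True
    return np.arange(K), False
```

With uniform weights the ESS equals $K$ and no resampling occurs; with fully degenerate weights the ESS collapses to 1 and every ancestor points at the surviving particle.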
+
+In the propagation stage, the particle set $\{\mathbf{x}_{t - \Delta t}^{(i)}\}_{i = 1}^{K}$ is generated by sampling from the proposal distribution, $\mathbf{x}_{t - \Delta t}^{(i)}\sim q(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t^{(a_t^{(i)})})$ . When resampling is applied, the weights are reset to uniform values, i.e., $w_{t}^{(i)} = 1$ for all $i$ . Regardless of whether resampling occurred, the new weights $\{w_{t - \Delta t}^{(i)}\}_{i = 1}^{K}$ are then computed using Eq. 28.
+
+In the context of reward-alignment tasks, SMC can be employed to approximately sample from the target distribution defined in Eq. 3. As the number of particles $K$ grows, the approximation becomes increasingly accurate due to the consistency of the SMC framework [61, 62]. To make this effective, the proposal kernel should ideally match the optimal transition kernel given in Eq. 23. However, as discussed in Appendix A, this kernel is computationally intractable. Therefore, prior work [14, 12, 8] typically resorts to its approximated form, as expressed in Eq. 27. This leads to the weight at each time being computed as:
+
+$$
+w _ {t - \Delta t} ^ {(i)} = \frac {\tilde {p} _ {\theta} ^ {*} (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})}{q (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})} w _ {t} ^ {(i)} = \frac {\exp (r (\mathbf {x} _ {0 | t - \Delta t}) / \alpha) p _ {\theta} (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})}{\exp (r (\mathbf {x} _ {0 | t}) / \alpha) q (\mathbf {x} _ {t - \Delta t} | \mathbf {x} _ {t})} w _ {t} ^ {(i)}, \tag {29}
+$$
+
+where $p_{\theta}(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t)$ denotes the transition kernel of the pretrained score-based generative model. The pretrained model follows the SDE given in Eq. 1, which upon discretization yields a Gaussian transition kernel, $p_{\theta}(\mathbf{x}_{t - \Delta t}|\mathbf{x}_t) = \mathcal{N}(\mathbf{x}_t - \mathbf{f}(\mathbf{x}_t,t)\Delta t,g(t)^2\Delta t\mathbf{I})$ . On the other hand, for reward-guided sampling, i.e., to sample from the target distribution in Eq. 3, we follow the controlled SDE in Eq. 14. At each intermediate time, the SOC framework (Appendix A) prescribes the use of the optimal control defined in Eq. 20. However, due to its intractability, the approximation in Eq. 26 is typically adopted in practice. Discretizing the controlled SDE under this approximation leads to the following proposal distribution at each time:
+
+$$
+q \left(\mathbf {x} _ {t - \Delta t} \mid \mathbf {x} _ {t}\right) = \mathcal {N} \left(\mathbf {x} _ {t} - \mathbf {f} \left(\mathbf {x} _ {t}, t\right) \Delta t + g ^ {2} (t) \nabla \frac {r \left(\mathbf {x} _ {0 \mid t}\right)}{\alpha} \Delta t, g (t) ^ {2} \Delta t \mathbf {I}\right). \tag {30}
+$$
+
+A similar proposal has also been used in [12, 8], where a Taylor expansion was applied in the context of entropy-regularized Markov Decision Processes.
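The propagation and reweighting steps of Eqs. 29-30 amount to a one-line proposal mean and a log-space weight update. The sketch below is schematic, not the paper's code; all arguments (reward values, kernel log-densities) are assumed to be precomputed per particle:

```python
def log_weight_update(log_w_t, r_next, r_curr, log_p_theta, log_q, alpha):
    """Log-space form of Eq. 29: r_next = r(x_{0|t-dt}), r_curr = r(x_{0|t});
    log_p_theta and log_q are the log-densities of the pretrained and
    proposal kernels evaluated at the proposed particle."""
    return log_w_t + (r_next - r_curr) / alpha + log_p_theta - log_q

def guided_proposal_mean(x_t, f_t, g_t, grad_r_over_alpha, dt):
    """Mean of the Gaussian proposal in Eq. 30 (Euler step of the controlled
    SDE); grad_r_over_alpha stands for grad r(x_{0|t}) / alpha."""
    return x_t - f_t * dt + g_t ** 2 * grad_r_over_alpha * dt
```

Working in log space avoids the over/underflow that the exponentiated ratio in Eq. 29 would otherwise cause for large rewards or small $\alpha$.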
+
+# C Acceptance Probability of MALA and pCNL
+
+In this section we provide the Metropolis-Hastings (MH) [69, 30] acceptance rule that underpins both the Metropolis-Adjusted Langevin Algorithm (MALA) [31, 32] and the preconditioned Crank-Nicolson Langevin algorithm (pCNL) [36, 37]. Metropolis-Hastings algorithms form a class of MCMC methods that generate samples from a target distribution by accepting or rejecting proposed moves according to a specific acceptance function. Let $p_{\mathrm{tar}}$ denote a density proportional to the target distribution. Given the current state $\mathbf{x}$ and a proposal $\mathbf{x}' \sim q(\mathbf{x}'|\mathbf{x})$ , the MH step accepts the move with probability:
+
+$$
+a \left(\mathbf {x}, \mathbf {x} ^ {\prime}\right) = \min \left(1, \frac {p _ {\operatorname {t a r}} \left(\mathbf {x} ^ {\prime}\right) q \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)}{p _ {\operatorname {t a r}} (\mathbf {x}) q \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)}\right). \tag {31}
+$$
+
+If the proposal kernel $q(\mathbf{x}'|\mathbf{x})$ is taken to be the one-step Euler-Maruyama discretization of Langevin dynamics, the algorithm becomes MALA, and Eq. 31 corresponds to the acceptance probability of MALA. Choosing instead the semi-implicit (Crank-Nicolson-type) discretization yields the proposal used in pCNL, and Eq. 31 becomes the corresponding pCNL acceptance probability.
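The MH rule in Eq. 31 is conveniently evaluated in log space. A minimal helper (illustrative only; the log-densities may be unnormalized since constants cancel in the ratio):

```python
import math

def mh_accept_prob(log_tar_prop, log_tar_curr, log_q_back, log_q_fwd):
    """Metropolis-Hastings acceptance probability (Eq. 31) in log space:
    min(1, p_tar(x') q(x|x') / (p_tar(x) q(x'|x)))."""
    log_ratio = log_tar_prop + log_q_back - log_tar_curr - log_q_fwd
    return min(1.0, math.exp(log_ratio))
```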
+
+We first show that preconditioned Crank-Nicolson (pCN), which is a modification of the Random-Walk Metropolis (RWM), preserves the Gaussian prior. pCN can be viewed as a special case of pCNL obtained when the underlying Langevin dynamics is chosen so that the Gaussian prior $\mathcal{N}(\mathbf{0},\mathbf{I})$ is its invariant distribution. This leads to the proposal mechanism [36, 37]:
+
+$$
+\mathbf {x} ^ {\prime} = \rho \mathbf {x} + \sqrt {1 - \rho^ {2}} \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}). \tag {32}
+$$
+
+where $\rho = (1 - \epsilon /4) / (1 + \epsilon /4)$ with $\epsilon >0$ corresponding to the step size of the Langevin dynamics. Assume that the prior is the standard Gaussian $\mathcal{N}(\mathbf{0},\mathbf{I})$ and let $\mathbf{x}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ . Then Eq. 32 expresses $\mathbf{x}'$ as a linear combination of two independent Gaussian random variables whose covariances, $\rho^2\mathbf{I}$ and $(1-\rho^2)\mathbf{I}$ , sum to the identity. Hence, by the closure of the Gaussian family under affine transformations, $\mathbf{x}'\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ as well, thus preserving the Gaussian prior.
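The prior-preservation property is easy to check empirically. Below is a hypothetical sketch of one pCN proposal step; applying it to standard-normal samples leaves the empirical mean and variance unchanged:

```python
import numpy as np

def pcn_step(x, eps, rng):
    """One pCN proposal (Eq. 32); rho is tied to the Langevin step size eps."""
    rho = (1 - eps / 4) / (1 + eps / 4)
    return rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(x.shape)
```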
+
+Next, in the case of pCN, the product $p_1(\mathbf{x}')q_0(\mathbf{x}|\mathbf{x}')$ is symmetric, i.e., $p_1(\mathbf{x}')q_0(\mathbf{x}|\mathbf{x}') = p_1(\mathbf{x})q_0(\mathbf{x}'|\mathbf{x})$ , where $p_1(\cdot)$ denotes the Gaussian prior and $q_0(\cdot|\cdot)$ denotes the proposal kernel of pCN, i.e., Eq. 32.
+
+Remark 1. Let $p_1(\cdot)$ be the Gaussian prior $\mathcal{N}(\mathbf{0},\mathbf{I})$ and $q_{0}(\cdot |\cdot)$ be the proposal kernel of pCN, i.e., $\mathcal{N}(\rho \mathbf{x},(1 - \rho^2)\mathbf{I})$ . Then $p_1(\mathbf{x}')q_0(\mathbf{x}|\mathbf{x}') = p_1(\mathbf{x})q_0(\mathbf{x}'|\mathbf{x})$ .
+
+Proof. Apart from normalization constants, $p_1(\mathbf{x}') q_0(\mathbf{x}|\mathbf{x}')$ can be calculated as:
+
+$$
+\exp \left(- \frac {\| \mathbf {x} ^ {\prime} \| ^ {2}}{2}\right) \exp \left(- \frac {\| \mathbf {x} - \rho \mathbf {x} ^ {\prime} \| ^ {2}}{2 (1 - \rho^ {2})}\right) = \exp \left(- \frac {\| \mathbf {x} ^ {\prime} \| ^ {2} + \| \mathbf {x} \| ^ {2} - 2 \rho \langle \mathbf {x}, \mathbf {x} ^ {\prime} \rangle}{2 (1 - \rho^ {2})}\right).
+$$
+
+Repeating the same calculation with $p_1(\mathbf{x})q_0(\mathbf{x}'|\mathbf{x})$ merely swaps $\mathbf{x}$ and $\mathbf{x}'$ , leaving the numerator unchanged. Hence the two products are identical.
+
+We provide an additional remark for ease of calculation.
+
+Remark 2. Let $\mathcal{N}(\mathbf{x};\boldsymbol {\mu},\mathbf{C})$ be the density of a multivariate Gaussian with mean $\boldsymbol{\mu}$ and positive definite covariance $\mathbf{C}$ . For fixed $\mathbf{C}$ , the ratio of two such densities that differ only in the mean is
+
+$$
+\frac {\mathcal {N} (\mathbf {x} ; \boldsymbol {\mu} , \mathbf {C})}{\mathcal {N} (\mathbf {x} ; \mathbf {0} , \mathbf {C})} = \exp \left(- \frac {1}{2} \| \boldsymbol {\mu} \| _ {\mathbf {C}} ^ {2} + \langle \boldsymbol {\mu}, \mathbf {x} \rangle_ {\mathbf {C}}\right)
+$$
+
+where $\| \pmb {\mu}\|_{\mathbf{C}}^{2}:= \pmb{\mu}^{\top}\mathbf{C}^{-1}\pmb{\mu}$ and $\langle \pmb {\mu},\mathbf{x}\rangle_{\mathbf{C}}\coloneqq \pmb{\mu}^{\top}\mathbf{C}^{-1}\mathbf{x}.$
+
+In our case, $\mathbf{C}$ corresponds to the identity matrix.
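Remark 2 with $\mathbf{C} = \mathbf{I}$ can be verified numerically against the direct log-density difference (illustrative helpers, not from the paper):

```python
import numpy as np

def gauss_logpdf(x, mu):
    """Log-density of N(mu, I)."""
    d = x - mu
    return -0.5 * (d @ d) - 0.5 * len(x) * np.log(2 * np.pi)

def mean_shift_log_ratio(x, mu):
    """log[ N(x; mu, I) / N(x; 0, I) ] = -||mu||^2 / 2 + <mu, x>  (Remark 2, C = I)."""
    return -0.5 * (mu @ mu) + mu @ x
```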
+
+Acceptance Probability of pCNL. As before, let $q_{0}$ be the proposal kernel of pCN (Eq. 32) and $q_{p}$ the proposal kernel of pCNL:
+
+$$
+q _ {0} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right): \mathbf {x} ^ {\prime} = \rho \mathbf {x} + \sqrt {1 - \rho^ {2}} \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}) \tag {33}
+$$
+
+$$
+q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right): \mathbf {x} ^ {\prime} = \rho \mathbf {x} + \sqrt {1 - \rho^ {2}} \left(\mathbf {z} + \frac {\sqrt {\epsilon}}{2} \nabla \frac {r \left(\mathbf {x} _ {0 \mid 1}\right)}{\alpha}\right), \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}). \tag {34}
+$$
+
+Let $\tilde{\mathbf{x}} := \frac{\mathbf{x}' - \rho\mathbf{x}}{\sqrt{1 - \rho^2}}$ , and $\tilde{q}_0, \tilde{q}_p$ be the distributions of $\tilde{\mathbf{x}}$ under $q_0$ and $q_p$ , respectively. Then we obtain:
+
+$$
+\tilde {q} _ {0} (\tilde {\mathbf {x}} | \mathbf {x}) = \mathcal {N} (\tilde {\mathbf {x}}; \mathbf {0}, \mathbf {I}) \tag {35}
+$$
+
+$$
+\tilde {q} _ {p} (\tilde {\mathbf {x}} | \mathbf {x}) = \mathcal {N} (\tilde {\mathbf {x}}; \frac {\sqrt {\epsilon}}{2} \nabla \frac {r \left(\mathbf {x} _ {0 \mid 1}\right)}{\alpha}, \mathbf {I}). \tag {36}
+$$
+
+Note that $\mathbf{x}_{0|1}$ is a function of $\mathbf{x}$ . Then by Remark 2,
+
+$$
+\frac {q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)}{q _ {0} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)} = \frac {\tilde {q} _ {p} (\tilde {\mathbf {x}} \mid \mathbf {x})}{\tilde {q} _ {0} (\tilde {\mathbf {x}} \mid \mathbf {x})} = \exp \left(- \frac {\epsilon}{8} \left\| \nabla \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha} \right\| _ {\mathbf {I}} ^ {2} + \frac {\sqrt {\epsilon}}{2} \left\langle \nabla \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha}, \tilde {\mathbf {x}} \right\rangle_ {\mathbf {I}}\right). \tag {37}
+$$
+
+For the fraction part of the acceptance probability (Eq. 31) of pCNL, we have:
+
+$$
+\frac {p _ {1} ^ {*} \left(\mathbf {x} ^ {\prime}\right) q _ {p} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)}{p _ {1} ^ {*} (\mathbf {x}) q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)} = \frac {\left(p _ {1} ^ {*} \left(\mathbf {x} ^ {\prime}\right) q _ {p} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)\right) / \left(p _ {1} \left(\mathbf {x} ^ {\prime}\right) q _ {0} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)\right)}{\left(p _ {1} ^ {*} (\mathbf {x}) q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)\right) / \left(p _ {1} \left(\mathbf {x} ^ {\prime}\right) q _ {0} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)\right)} \tag {38}
+$$
+
+$$
+= \frac {\left(p _ {1} ^ {*} \left(\mathbf {x} ^ {\prime}\right) q _ {p} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)\right) / \left(p _ {1} \left(\mathbf {x} ^ {\prime}\right) q _ {0} \left(\mathbf {x} \mid \mathbf {x} ^ {\prime}\right)\right)}{\left(p _ {1} ^ {*} (\mathbf {x}) q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)\right) / \left(p _ {1} (\mathbf {x}) q _ {0} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)\right)} := \frac {\varphi_ {p} \left(\mathbf {x} ^ {\prime} , \mathbf {x}\right)}{\varphi_ {p} \left(\mathbf {x} , \mathbf {x} ^ {\prime}\right)}, \tag {39}
+$$
+
+where the target distribution is set as Eq. 25. In Eq. 38, we divide both numerator and denominator by a common term, and in Eq. 39, we utilized Remark 1.
+
+The denominator can be calculated utilizing Eq. 37:
+
+$$
+\varphi_ {p} (\mathbf {x}, \mathbf {x} ^ {\prime}) = \frac {p _ {1} ^ {*} (\mathbf {x}) q _ {p} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)}{p _ {1} (\mathbf {x}) q _ {0} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right)} \tag {40}
+$$
+
+$$
+= \exp \left(\frac {r (\mathbf {x} _ {0 | 1})}{\alpha}\right) \exp \left(- \frac {\epsilon}{8} \left\| \nabla \frac {r (\mathbf {x} _ {0 | 1})}{\alpha} \right\| _ {\mathbf {I}} ^ {2} + \frac {\sqrt {\epsilon}}{2} \left\langle \nabla \frac {r (\mathbf {x} _ {0 | 1})}{\alpha}, \frac {\mathbf {x} ^ {\prime} - \rho \mathbf {x}}{\sqrt {1 - \rho^ {2}}} \right\rangle_ {\mathbf {I}}\right), \tag {41}
+$$
+
+with the numerator obtained by simply interchanging $\mathbf{x}$ and $\mathbf{x}'$ . The acceptance probability of pCNL is $\min \left(1, \frac{\varphi_p(\mathbf{x}',\mathbf{x})}{\varphi_p(\mathbf{x},\mathbf{x}')}\right)$ .
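Eq. 41 and the resulting acceptance rule can be sketched as follows. The helpers are hypothetical: `r_over_alpha` and `grad_r_over_alpha` stand for $r(\mathbf{x}_{0|1})/\alpha$ and its gradient, which in practice require a reward model and Tweedie's formula.

```python
import numpy as np

def log_phi_p(r_over_alpha, grad_r_over_alpha, x, x_prime, eps):
    """log phi_p(x, x') from Eq. 41, with reward quantities computed from x."""
    rho = (1 - eps / 4) / (1 + eps / 4)
    x_tilde = (x_prime - rho * x) / np.sqrt(1 - rho ** 2)
    g = grad_r_over_alpha
    return r_over_alpha - eps / 8 * (g @ g) + np.sqrt(eps) / 2 * (g @ x_tilde)

def pcnl_accept(log_phi_forward, log_phi_backward):
    """min(1, phi_p(x', x) / phi_p(x, x')); the backward term swaps the
    roles of x and x'."""
    return min(1.0, np.exp(log_phi_backward - log_phi_forward))
```

With a vanishing reward gradient, $\log\varphi_p$ reduces to $r(\mathbf{x}_{0|1})/\alpha$, recovering the pCN acceptance ratio for the reward-tilted prior.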
+
+Acceptance Probability of MALA. In the case of MALA, the proposal is given as:
+
+$$
+\mathbf {x} ^ {\prime} = \mathbf {x} + \frac {\epsilon}{2} \nabla \log p _ {\operatorname {t a r}} (\mathbf {x}) + \sqrt {\epsilon} \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}) \tag {42}
+$$
+
+$$
+= \mathbf {x} + \frac {\epsilon}{2} \left(- \mathbf {x} + \nabla \frac {r \left(\mathbf {x} _ {0 \mid 1}\right)}{\alpha}\right) + \sqrt {\epsilon} \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (\mathbf {0}, \mathbf {I}), \tag {43}
+$$
+
+where, as in pCNL, we set the target distribution as Eq. 25. Thus the proposal kernel of MALA, $q_{M}$ , can be expressed as:
+
+$$
+q _ {M} \left(\mathbf {x} ^ {\prime} \mid \mathbf {x}\right) = \mathcal {N} \left(\mathbf {x} ^ {\prime}; \mathbf {x} \left(1 - \frac {\epsilon}{2}\right) + \frac {\epsilon}{2} \nabla \frac {r \left(\mathbf {x} _ {0 \mid 1}\right)}{\alpha}, \epsilon \mathbf {I}\right). \tag {44}
+$$
+
+The fraction part of the acceptance probability (Eq. 31) of MALA is given as:
+
+$$
+\frac {p _ {1} ^ {*} (\mathbf {x} ^ {\prime}) q _ {M} (\mathbf {x} | \mathbf {x} ^ {\prime})}{p _ {1} ^ {*} (\mathbf {x}) q _ {M} (\mathbf {x} ^ {\prime} | \mathbf {x})} = \frac {\mathcal {N} (\mathbf {x} ^ {\prime} ; \mathbf {0} , \mathbf {I}) \exp (r (\mathbf {x} _ {0 | 1} ^ {\prime}) / \alpha) \mathcal {N} (\mathbf {x} ; \mathbf {x} ^ {\prime} (1 - \epsilon / 2) + \epsilon / 2 \cdot \nabla r (\mathbf {x} _ {0 | 1} ^ {\prime}) / \alpha , \epsilon \mathbf {I})}{\mathcal {N} (\mathbf {x} ; \mathbf {0} , \mathbf {I}) \exp (r (\mathbf {x} _ {0 | 1}) / \alpha) \mathcal {N} (\mathbf {x} ^ {\prime} ; \mathbf {x} (1 - \epsilon / 2) + \epsilon / 2 \cdot \nabla r (\mathbf {x} _ {0 | 1}) / \alpha , \epsilon \mathbf {I})} \tag {45}
+$$
+
+$$
+:= \frac {\varphi_ {M} \left(\mathbf {x} ^ {\prime} , \mathbf {x}\right)}{\varphi_ {M} \left(\mathbf {x} , \mathbf {x} ^ {\prime}\right)}, \tag {46}
+$$
+
+where we denote $\mathbf{x}_{0|1}^{\prime} \coloneqq \mathbf{x}_{0|1}(\mathbf{x}^{\prime})$ , i.e., Tweedie's formula evaluated at $\mathbf{x}^{\prime}$ . Thus the denominator $\varphi_{M}(\mathbf{x}, \mathbf{x}^{\prime})$ is proportional to the following expression:
+
+$$
+\exp \left(- \frac {\| \mathbf {x} \| _ {\mathbf {I}} ^ {2}}{2} + \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha} - \frac {\| \mathbf {x} ^ {\prime} - \left\{\mathbf {x} \left(1 - \frac {\epsilon}{2}\right) + \frac {\epsilon}{2} \nabla \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha} \right\} \| _ {\mathbf {I}} ^ {2}}{2 \epsilon}\right). \tag {47}
+$$
+
+After simplifying the expression—specifically, canceling out the cross terms involving $\mathbf{x}$ and $\mathbf{x}'$ that will appear symmetrically in the numerator and denominator—the expression for $\varphi_{M}(\mathbf{x},\mathbf{x}')$ becomes:
+
+$$
+\varphi_ {M} \left(\mathbf {x}, \mathbf {x} ^ {\prime}\right) = \exp \left(\frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha}\right) \exp \left(- \frac {\epsilon}{8} \left\| \nabla \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha} \right\| _ {\mathbf {I}} ^ {2} - \frac {\epsilon}{8} \| \mathbf {x} \| _ {\mathbf {I}} ^ {2} + \frac {1}{2} \left\langle \nabla \frac {r \left(\mathbf {x} _ {0 | 1}\right)}{\alpha}, \mathbf {x} ^ {\prime} - \left(1 - \frac {\epsilon}{2}\right) \mathbf {x} \right\rangle_ {\mathbf {I}}\right). \tag {49}
+$$
+
+The numerator $\varphi_{M}(\mathbf{x}^{\prime},\mathbf{x})$ can be obtained by simply interchanging $\mathbf{x}$ and $\mathbf{x}^{\prime}$ . The acceptance probability of MALA is then $\min \left(1,\frac{\varphi_M(\mathbf{x}^{\prime},\mathbf{x})}{\varphi_M(\mathbf{x},\mathbf{x}^{\prime})}\right)$ .
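Analogously, Eq. 49 yields the following sketch of the MALA acceptance computation (hypothetical helpers; reward quantities are assumed precomputed per state, e.g., via a reward model and Tweedie's formula):

```python
import numpy as np

def log_phi_m(r_over_alpha, grad_r_over_alpha, x, x_prime, eps):
    """log phi_M(x, x') from Eq. 49, with reward quantities computed from x."""
    g = grad_r_over_alpha
    return (r_over_alpha
            - eps / 8 * (g @ g)
            - eps / 8 * (x @ x)
            + 0.5 * g @ (x_prime - (1 - eps / 2) * x))

def mala_accept(log_phi_forward, log_phi_backward):
    """min(1, phi_M(x', x) / phi_M(x, x'))."""
    return min(1.0, np.exp(log_phi_backward - log_phi_forward))
```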
+
+# D Experimental Setup and Details
+
+In this section, we provide comprehensive details for each application: layout-to-image generation, quantity-aware image generation, and aesthetic-preference image generation. We also include full experimental details.
+
+Layout-to-Image Generation. This task involves placing user-specified objects within designated bounding boxes [82-85]. We evaluate performance on 50 randomly sampled cases from the HRS-Spatial [80] dataset. As the reward model, we use GroundingDINO [72], and measure the alignment between predicted and target boxes using mean Intersection-over-Union (mIoU). For the held-out reward, we compute mIoU using a different object detector, Salience DETR [74].
+
+Quantity-Aware Image Generation. This task involves generating a user-specified object in a specified quantity [86-88]. We evaluate methods on a custom dataset constructed via GPT-4o [81], comprising 20 object categories with randomly assigned counts up to 90, totaling 40 evaluation cases. As the reward model, we used T2ICount [73], which takes a generated image and the corresponding text prompt as input and returns a density map. Summing over this density map yields a differentiable estimate of the object count $n_{\mathrm{pred}}$ . The reward is defined as the negative smooth L1 loss:
+
+$$
+r _ {\mathrm {count}} = \left\{ \begin{array}{l l} - 0. 5 \left(n _ {\mathrm {pred}} - n _ {\mathrm {gt}}\right) ^ {2} & | n _ {\mathrm {pred}} - n _ {\mathrm {gt}} | < 1, \\ - | n _ {\mathrm {pred}} - n _ {\mathrm {gt}} | + 0. 5 & | n _ {\mathrm {pred}} - n _ {\mathrm {gt}} | \geq 1, \end{array} \right.
+$$
+
+where $n_{\mathrm{gt}}$ denotes the input object quantity. For the held-out reward, we used an alternative counting model, CountGD [75]. This model returns integer-valued object counts. We apply a confidence threshold of 0.3 and evaluate using mean absolute error (MAE) and counting accuracy, where a prediction is considered correct if $n_{\mathrm{pred}} = n_{\mathrm{gt}}$ .
+
+Aesthetic-Preference Image Generation. This task involves generating visually appealing images. We evaluate performance using 45 prompts consisting of animal names, provided in [45]. As the reward model, we use the LAION Aesthetic Predictor V2 [76], which estimates the aesthetic quality of an image and is commonly used in reward-alignment literature [12, 51, 60, 45].
+
+Common Held-Out Reward Models. For all applications, we additionally evaluate the generated images using widely adopted held-out reward models for image quality and text alignment. Specifically, we use ImageReward [77], fine-tuned on human feedback, and VQAScore [78], which leverages a visual question answering (VQA) model. Both are based on vision-language models and assess how well the generated image aligns with the input text prompt.
+
+Experimental Details. We use FLUX-Schnell [28] as the score-based generative model for our method and all baselines. Although FLUX is a flow-based model, SMC-based inference-time reward alignment can be applied by reformulating the generative ODE as an SDE [7, 13], as described in Sec. 3.1. Further, to ensure diversity between samples during SMC, we applied Variance Preserving (VP) interpolant conversion [13, 54]. Apart from FLUX, we additionally report quantitative and qualitative results using another score-based generative model, SANA-Sprint [29], in Appendix H. These results demonstrate that our claims are not tied to a specific model architecture, and additionally highlight the robustness of $\Psi$ -SAMPLER.
+
+We use 25 denoising steps for all SMC-based methods and 50 for single-particle methods to compensate for their reduced exploration capacity. To ensure fair comparison, we fix the total number of function evaluations (NFE) across all SMC variants—1,000 for layout-to-image and quantity-aware generation tasks, and 500 for aesthetic-preference image generation. For aesthetic-preference image generation, we used half the NFE compared to other tasks because we found it sufficient for performance convergence. For methods that sample from the posterior, half of the NFE is allocated to the initial sampling stage, resulting in 20 particles during SMC, while prior-based methods use all NFE for SMC with 40 particles. For the aesthetic-preference image generation, we use half the number of particles in both settings to reflect the halved NFE. The ablation study comparing the performance on varying NFE allocation is provided in Appendix G. For all experiments, we used NVIDIA A6000 GPUs with 48GB VRAM.
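The NFE accounting above can be made explicit with a small helper. This is our reading of the described budget split (the exact 50%/50% allocation for posterior-initialized methods is assumed):

```python
def smc_particles(total_nfe, denoise_steps, posterior_init):
    """Particle count implied by the NFE budget: posterior-initialized methods
    spend half the NFE on the initial sampling stage, leaving the rest for
    SMC; prior-based methods spend the whole budget on SMC."""
    smc_nfe = total_nfe // 2 if posterior_init else total_nfe
    return smc_nfe // denoise_steps
```

This reproduces the counts in the text: 20 particles (posterior) vs. 40 (prior) at 1,000 NFE with 25 steps, and half as many at 500 NFE.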
+
+# E Toy Experiment Setup and Details
+
+In this section, we describe the setup for the toy experiment presented in the main paper. Specifically, the data distribution $p_0$ is a six-mode Gaussian mixture with covariance $0.3\mathbf{I}$ , consisting of six equally weighted components uniformly arranged on a circle of radius 6. The reward function is defined as the sum of three Gaussian components centered at points evenly spaced on a circle: $r(x) = \sum_{i=1}^{3} \exp(-0.5\|x - p_i\|^2)$ , where $p_1 = (6.5,0)$ , $p_2 = (-3.25,3.25\sqrt{3})$ , and $p_3 = (-3.25, -3.25\sqrt{3})$ . We trained a few-step score-based generative model using a 4-layer MLP with hidden dimension 128. For sampling, we fixed the total NFE to 100 across all methods and visualized the results using 2,000 generated samples. We used a step size of 0.1 for MALA and 0.2 for pCNL. We deliberately used a smaller step size for MALA than for pCNL to better reflect the actual experimental settings used in the main experiments (Sec. 5.1).
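The toy reward function can be reproduced directly from the definition above:

```python
import numpy as np

def toy_reward(x):
    """r(x) = sum of three unit-height Gaussian bumps centered at p_1, p_2, p_3."""
    s3 = np.sqrt(3.0)
    centers = np.array([[6.5, 0.0], [-3.25, 3.25 * s3], [-3.25, -3.25 * s3]])
    x = np.asarray(x, dtype=float)
    return float(sum(np.exp(-0.5 * np.sum((x - c) ** 2)) for c in centers))
```

The three centers are far apart relative to the unit bump width, so the reward is essentially 1 at each center and negligible at the origin.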
+
+# F Additional Evaluation Results of MALA and pCNL Initializations under Varying Step Size
+
+We presented a comparison between MALA and pCNL under varying MCMC step sizes for the layout-to-image generation task in Sec. 5.4. Here, we extend this analysis to the remaining two tasks. In Fig. 4, we report results for quantity-aware generation (top row) and aesthetic-preference generation (bottom row), comparing MALA and pCNL across a range of step sizes.
+
+For the quantity-aware image generation task, we report acceptance probability, reward (negative smooth L1 loss via T2I-Count [73]), held-out reward (mean absolute error (MAE) via CountGD [75]), and LPIPS Mean Pairwise Distance (MPD) [79], which measures sample diversity. For the aesthetic-preference task, we report acceptance probability, reward (aesthetic score [76]), and LPIPS MPD [79]. All metrics are computed directly from the Tweedie estimates [18] of the MCMC samples, before the SMC stage.
+
+As the step size increases, MALA's acceptance probability quickly falls to near zero, whereas pCNL retains a stable acceptance rate across a wider range. In quantity-aware image generation, pCNL achieves its best T2I-Count reward and lowest MAE at moderate step sizes (approximately 0.5–1.0), while MALA's performance deteriorates sharply beyond 0.05. Although pCNL's LPIPS MPD decreases at larger step sizes, it consistently outperforms MALA in diversity across the same settings. In aesthetic-preference generation, pCNL generally achieves higher aesthetic scores, with only a slight drop at the smallest step size, and maintains higher LPIPS MPD across all settings. In contrast, MALA's aesthetic scores decline rapidly once its acceptance rate vanishes.
+
+These results demonstrate that pCNL can effectively leverage larger step sizes to improve sample quality, with only minimal trade-offs in diversity.
+
+Figure 4: Performance comparison of MALA and pCNL across different evaluation metrics under two generation settings: quantity-aware generation (top row) and aesthetic-preference generation (bottom row). Each graph illustrates the performance trend with varying step sizes.
+# G Ablation Study: Varying NFE Allocation for Initial Particle Sampling Stage
+
+In this section, we present an ablation study examining how performance varies with different allocations of the total number of function evaluations (NFE) between the initial particle sampling stage (via Top- $K$ -of- $N$ or MCMC) and the subsequent SMC stage. In our main experiments, we adopt a balanced allocation, with $50\%$ of the NFE budget used for initial particle sampling and the remaining $50\%$ for SMC (denoted as $50\%/50\%$ ).
+
+To assess the effect of this design choice, we evaluate two alternative NFE splits: $25\%$ for initial particle sampling and $75\%$ for SMC (denoted as $25\%/75\%$ ), and $75\%$ for initial particle sampling with only $25\%$ for SMC (denoted as $75\%/25\%$ ). We conduct this analysis on both Top- $K$ -of- $N$ and $\Psi$ -SAMPLER. For consistency, we fix the number of SMC steps to 25, adjusting the number of particles accordingly: 30 particles for the $25\%/75\%$ setting and 10 particles for the $75\%/25\%$ setting. For the aesthetic-preference image generation, we use half the number of particles in both settings to reflect the halved NFE. Tab. 2 summarizes the results.
+
+The $75\% / 25\%$ split overinvests compute in initial particle sampling, leaving too few particles for the SMC phase. As a result, it consistently shows the worst performance across all reward metrics. Conversely, the $25\% / 75\%$ split dedicates too little budget to initial particle sampling, limiting exploration despite using 1.5 times more particles than the $50\% / 50\%$ split. This leads to weaker performance, particularly under held-out reward evaluations. In contrast, the balanced $50\% / 50\%$ split consistently yields the most robust performance across both seen and held-out rewards on all tasks.
+
+| Tasks | Metrics | Top-K-of-N 25%/75% | Top-K-of-N 50%/50% | Top-K-of-N 75%/25% | Ψ-SAMPLER 25%/75% | Ψ-SAMPLER 50%/50% | Ψ-SAMPLER 75%/25% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Layout to Image | GroundingDINO† [72] ↑ | 0.424 | 0.425 | 0.390 | 0.454 | 0.467 | 0.433 |
| | mIoU [74] ↑ | 0.439 | 0.427 | 0.401 | 0.463 | 0.471 | 0.426 |
| | ImageReward [77] ↑ | 1.142 | 0.957 | 0.913 | 1.128 | 1.035 | 0.884 |
| | VQA [78] ↑ | 0.822 | 0.855 | 0.770 | 0.825 | 0.810 | 0.766 |
| Quantity Aware | T2I-Count† [73] ↓ | 1.021 | 1.077 | 2.934 | 0.804 | 0.850 | 1.892 |
| | MAE [75] ↓ | 4.6 | 3.675 | 5.65 | 3.6 | 2.925 | 3.7 |
| | Acc (%) [75] ↑ | 22.5 | 12.5 | 25.0 | 25.0 | 32.5 | 30.0 |
| | ImageReward [77] ↑ | 0.739 | 0.752 | 0.714 | 0.693 | 0.796 | 0.694 |
| | VQA [78] ↑ | 0.910 | 0.960 | 0.937 | 0.932 | 0.951 | 0.943 |
| Aesthetic Preference | Aesthetic† [76] ↑ | 6.958 | 6.879 | 6.853 | 7.015 | 7.012 | 6.868 |
| | ImageReward [77] ↑ | 1.114 | 1.133 | 1.076 | 1.040 | 1.171 | 1.062 |
| | VQA [78] ↑ | 0.964 | 0.961 | 0.969 | 0.968 | 0.963 | 0.952 |
+
+Table 2: Ablation study results on varying NFE allocation, conducted with $\Psi$ -SAMPLER and Top- $K$ -of- $N$ across three task domains. **Bold** indicates the best performance, while underline denotes the second-best result for each metric. Metrics marked with $\dagger$ are used as the seen reward during reward-guided sampling, while the others are held-out rewards. Higher values indicate better performance ( $\uparrow$ ), unless otherwise noted ( $\downarrow$ ).
+
+# H $\Psi$ -SAMPLER with Other Score-Based Generative Model
+
+To evaluate the generality of $\Psi$ -SAMPLER and further support our claims, we additionally conduct experiments using SANA-Sprint [29], a few-step flow-based generative model.
+
+Despite architectural differences, SANA-Sprint integrates seamlessly with $\Psi$ -SAMPLER. As shown in Tab. 3, it demonstrates reward and quality improvements consistent with those observed using FLUX [28] in Tab. 1, indicating that our method generalizes beyond a specific backbone. Notably, $\Psi$ -SAMPLER delivers the highest performance across all seen reward models and maintains strong generalization to held-out metrics. Moreover, among SMC-based methods, those that initialize particles from the posterior consistently outperform prior-based variants (with few exceptions), highlighting the benefit of posterior-informed initialization.
+
+These results further highlight the robustness of $\Psi$ -SAMPLER and its applicability to a variety of few-step score-based generative models.
+
+We further provide qualitative results for each application, conducted with SANA-Sprint, in Fig. 5. For the layout-to-image generation task, each example shows the input layout with color-coded phrases and corresponding bounding boxes for clarity. In the quantity-aware image generation task, we overlay the predicted object centroids from a held-out counting model [75] on each image to facilitate comparison. The predicted count, along with its absolute difference from the target quantity, is shown beneath each image in the format $(\Delta \cdot)$ , with the best-performing result highlighted in blue. Note that for the aesthetic-preference generation task, the first row corresponds to the prompt "Dog" and the second to "Turkey". We display the generated images alongside their predicted aesthetic scores [76].
+
+DPS [39] and FreeDoM [40] are single-particle methods; among the SMC-based methods, TDS [14], DAS [12], and Top-K-of-N sample initial particles from the prior, while ULA, MALA, and $\Psi$ -SAMPLER sample from the posterior.
+
+| Tasks | Metrics | DPS [39] | FreeDoM [40] | TDS [14] | DAS [12] | Top-K-of-N | ULA | MALA | Ψ-SAMPLER |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Layout to Image | GroundingDINO† [72] ↑ | 0.144 | 0.159 | 0.403 | 0.338 | 0.406 | 0.388 | 0.392 | 0.429 |
+| | mIoU [74] ↑ | 0.229 | 0.242 | 0.405 | 0.343 | 0.406 | 0.393 | 0.394 | 0.432 |
+| | ImageReward [77] ↑ | 1.241 | 1.068 | 1.363 | 1.263 | 1.478 | 1.227 | 1.326 | 1.502 |
+| | VQA [78] ↑ | 0.779 | 0.754 | 0.835 | 0.808 | 0.851 | 0.776 | 0.832 | 0.853 |
+| Quantity Aware | T2I-Count† [73] ↓ | 11.290 | 12.839 | 0.110 | 0.122 | 0.0628 | 0.220 | 0.148 | 0.027 |
+| | MAE [75] ↓ | 12.3 | 13.8 | 3.475 | 4.55 | 2.825 | 3.025 | 2.375 | 2.175 |
+| | Acc (%) [75] ↑ | 0.0 | 0.0 | 30.0 | 22.5 | 27.5 | 22.5 | 30.0 | 32.5 |
+| | ImageReward [77] ↑ | 0.680 | 0.526 | 0.954 | 0.889 | 0.803 | 0.789 | 0.840 | 0.845 |
+| | VQA [78] ↑ | 0.920 | 0.859 | 0.934 | 0.920 | 0.916 | 0.928 | 0.922 | 0.930 |
+| Aesthetic Preference | Aesthetic† [76] ↑ | 6.432 | 6.281 | 7.436 | 7.324 | 7.452 | 7.343 | 7.412 | 7.469 |
+| | ImageReward [77] ↑ | 1.106 | 1.045 | 1.233 | 1.258 | 1.144 | 1.367 | 1.217 | 1.262 |
+| | VQA [78] ↑ | 0.891 | 0.902 | 0.894 | 0.907 | 0.888 | 0.905 | 0.904 | 0.909 |
+
+Table 3: Quantitative comparison of $\Psi$ -SAMPLER and baselines across three task domains, conducted on SANA-Sprint [29]. **Bold** indicates the best performance, while **underline** denotes the second-best result for each metric. Metrics marked with $\dagger$ are used as the seen reward during reward-guided sampling; the others are held-out rewards. Higher values indicate better performance ( $\uparrow$ ), unless otherwise noted ( $\downarrow$ ).
+
+
+Figure 5: Qualitative results for each application on SANA-Sprint [29].
+
+# I Additional Qualitative Results
+
+In this section, we present additional qualitative results that extend the examples shown in Fig. 2 of the main paper. Consistent with Fig. 2, all results here are generated using FLUX [28].
+
+Layout-to-Image Generation. We present additional qualitative results for the layout-to-image generation task in Fig. 6. Each example visualizes the input layout with color-coded phrases and their corresponding bounding boxes for clarity. $\Psi$ -SAMPLER consistently respects both the spatial constraints and object presence specified in the layout. The first four rows (Rows 1-4) illustrate failure cases where baseline methods generate objects in incorrect locations—either misaligned with the bounding boxes or placed in unrelated regions. For instance, in Row 1, baseline methods fail to accurately place objects within the designated bounding boxes—both the dog and the eagle appear misaligned. Similarly, in Row 4, objects such as the cat and skateboard do not conform to the specified spatial constraints, spilling outside their intended regions in the baseline outputs. In contrast, $\Psi$ -SAMPLER successfully generates all objects within their designated bounding boxes. The last four rows (Rows 5-8) illustrate more severe baseline failure cases, where not only is spatial alignment severely violated, but some objects are entirely missing. For example, in Row 5, DAS [12] fails to generate the apple altogether, while the other baselines exhibit significant spatial misalignment. In Row 7, some baselines produce unrealistic object combinations—such as the apple and red cup being merged—and misinterpret the layout, placing the apple inside the red cup instead of correctly positioning it in the wooden bowl. $\Psi$ -SAMPLER not only positions each object correctly but also ensures that all described entities are present and visually distinct.
+
+Quantity-Aware Image Generation. We provide additional qualitative results for the quantity-aware image generation task in Fig. 7 and Fig. 8. The examples cover a variety of object categories and target counts, showing that $\Psi$ -SAMPLER works reliably across different scenarios. For each image, we overlay the predicted object centroids from a held-out counting model [75] for easier comparison. Additionally, we display the predicted count below each image, along with its absolute difference from the target quantity in the format $(\Delta \cdot)$ , and highlight the best case in blue. $\Psi$ -SAMPLER consistently generates the correct number of objects, even in more challenging cases like cluttered scenes or small, overlapping items. On the other hand, baseline methods often produce too many or too few objects, and sometimes include misleading objects. This trend holds across all categories, from food to everyday objects.
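The count-based metrics used above (MAE and exact-match accuracy over predicted vs. target object counts) can be sketched as follows. This is an illustrative computation only: `count_metrics` is a hypothetical helper with made-up sample counts, not the authors' evaluation code.

```python
def count_metrics(predicted, target):
    """MAE and exact-match accuracy (%) between predicted and target object counts."""
    errors = [abs(p - t) for p, t in zip(predicted, target)]
    mae = sum(errors) / len(errors)          # mean absolute count error (lower is better)
    acc = 100.0 * sum(e == 0 for e in errors) / len(errors)  # % of images with exact count
    return mae, acc

# Toy example: four images, one off-by-one prediction.
mae, acc = count_metrics(predicted=[5, 3, 7, 4], target=[5, 4, 7, 4])
```

The absolute error per image is the $(\Delta \cdot)$ value displayed beneath each generated image.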
+
+Aesthetic-Preference Image Generation. Further qualitative results for aesthetic-preference image generation are presented in Fig. 9. For each prompt (e.g., "Horse", "Bird"), we show the predicted aesthetic score [76] below each image. While all methods generate visually plausible outputs, $\Psi$ -SAMPLER consistently produces images with higher aesthetic appeal, as reflected in both qualitative impressions and the predicted aesthetic scores.
+
+
+Figure 6 column labels: Layout, FreeDoM [40], TDS [14], DAS [12], Top-K-of-N, ULA, MALA, $\Psi$ -SAMPLER. Row prompts (images not reproduced here):
+
+- "A dog and a bird sitting on a branch while an eagle is flying in the sky."
+- "banana and an apple are beneath a book and a flower is lying on the book in a room."
+- "A photo of a bear sitting between a surfboard and a chair with a bird flying in the sky."
+- "A cat is sitting on top of a skateboard, a dog is standing next to a car, and an airplane is flying in the sky."
+- "A photo of an apple and a vase and a hamburger and a pizza."
+- "A banana and an apple and an elephant and a backpack in the meadow with bird flying in the sky."
+- "A realistic photo of a wooden bowl with an apple and a red cup and a yellow tulip in a blue vase and a lemon."
+
+Figure 6: Qualitative results for layout-to-image generation. Examples show how different methods place objects based on input layouts. $\Psi$ -SAMPLER aligns well with the given boxes, while baselines often misplace or miss objects.
+
+
+Figure 7 row prompt (images not reproduced here): "A realistic photo, a hamburger and a donut and a couch and a bus and a surfboard in the beach."
+
+Figure 7: Qualitative results for quantity-aware image generation. Across various object types and target counts, $\Psi$ -SAMPLER generates the right number of instances more reliably than baseline methods, which tend to over- or under-count.
+
+
+Figure 8: Qualitative results for quantity-aware image generation. Across various object types and target counts, $\Psi$ -SAMPLER generates the right number of instances more reliably than baseline methods, which tend to over- or under-count.
+
+
+Figure 9: Qualitative results for aesthetic-preference image generation. $\Psi$ -SAMPLER produces images that are not only realistic but also more visually appealing, with better focus, balance, and overall look compared to baseline outputs.
\ No newline at end of file
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/images.zip b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7dc3db3e39f3b3d2f0b3db256cac558044169131
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cd5d1496c9e12e58e11f1e41d55329754e75a1aa8847c7fdf4c9c73d3bd63ab
+size 3071152
diff --git a/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/layout.json b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c3163d8db1f94dcfe95c8554fa4bfc2240064b7d
--- /dev/null
+++ b/NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aff06ae22e3cc9f5461914738d75a84ee653bf64a2f84147f370b1fc444f4930
+size 1232303