Chelsea707 committed
Commit 6c300fa · verified · 1 Parent(s): 313f313

Add Batch 0a98fd28-fecd-4b18-893e-6bb4e1e655a0

Files changed (24)
  1. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_content_list.json +3 -0
  2. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_model.json +3 -0
  3. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_origin.pdf +3 -0
  4. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/full.md +941 -0
  5. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/images.zip +3 -0
  6. NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/layout.json +3 -0
  7. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_content_list.json +3 -0
  8. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_model.json +3 -0
  9. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_origin.pdf +3 -0
  10. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/full.md +0 -0
  11. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/images.zip +3 -0
  12. NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/layout.json +3 -0
  13. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_content_list.json +3 -0
  14. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_model.json +3 -0
  15. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_origin.pdf +3 -0
  16. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/full.md +0 -0
  17. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/images.zip +3 -0
  18. NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/layout.json +3 -0
  19. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_content_list.json +3 -0
  20. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_model.json +3 -0
  21. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_origin.pdf +3 -0
  22. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/full.md +0 -0
  23. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/images.zip +3 -0
  24. NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/layout.json +3 -0
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b9742c693d20814ee1aa063ef17c1280dbcc90de865d53186f054158b1ecd61
+ size 174666
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d220fcd0e497e29067411e72a8ae8cdb1570c1102c7662de960bc0ae672b9799
+ size 224570
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/1eb2d10d-7a9a-4d1a-bdbf-70b0e1d6f4d7_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da6a0e05e147f8cae1c75d219ea28c37cdd13010801940a7421205efcecd3ffc
+ size 1177795
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/full.md ADDED
@@ -0,0 +1,941 @@
# $O(\sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization

Rahul Vaze*

School of Technology and Computer Science

Tata Institute of Fundamental Research, Mumbai

rahul.vaze@gmail.com

Abhishek Sinha

School of Technology and Computer Science

Tata Institute of Fundamental Research, Mumbai

abhishek.sinha@tifr.res.in

# Abstract

The constrained version of the standard online convex optimization (OCO) framework, called COCO, is considered, where on every round a convex cost function and a convex constraint function are revealed to the learner after it chooses the action for that round. The objective is to simultaneously minimize the static regret and the cumulative constraint violation (CCV). An algorithm is proposed that guarantees a static regret of $O(\sqrt{T})$ and a CCV of $\min \{ \mathcal{V}, O(\sqrt{T} \log T) \}$, where $\mathcal{V}$ depends on the distance between the consecutively revealed constraint sets, the shape of the constraint sets, the dimension of the action space and the diameter of the action space. When the constraint sets have additional structure, $\mathcal{V} = O(1)$. Compared to the state-of-the-art results of $O(\sqrt{T})$ static regret and $O(\sqrt{T} \log T)$ CCV, which are universal, the new CCV bound is instance dependent and is derived by exploiting the geometric properties of the constraint sets.
# 1 Introduction

In this paper, we consider the constrained version of the standard online convex optimization (OCO) framework, called constrained OCO or COCO. In COCO, on every round $t$, the online algorithm first chooses an admissible action $x_{t} \in \mathcal{X} \subset \mathbb{R}^{d}$, and then the adversary chooses a convex loss/cost function $f_{t}: \mathcal{X} \to \mathbb{R}$ and a constraint function of the form $g_{t}(x) \leq 0$, where $g_{t}: \mathcal{X} \to \mathbb{R}$ is a convex function. Since the $g_{t}$'s are revealed after the action $x_{t}$ is chosen, an online algorithm need not necessarily take feasible actions on each round, and so in addition to the static regret

$$
\operatorname{Regret}_{[1:T]} \equiv \sup_{\{f_{t}\}_{t=1}^{T}} \sup_{x^{\star} \in \mathcal{X}} \operatorname{Regret}_{T}(x^{\star}), \quad \text{where } \operatorname{Regret}_{T}(x^{\star}) \equiv \sum_{t=1}^{T} f_{t}(x_{t}) - \sum_{t=1}^{T} f_{t}(x^{\star}), \tag{1}
$$

an additional metric of interest is the total cumulative constraint violation (CCV), defined as $\mathrm{CCV}_{[1:T]} \equiv \sum_{t=1}^{T} \max(g_t(x_t), 0)$. Let $\mathcal{X}^{\star}$ be the feasible set consisting of all admissible actions that satisfy all constraints $g_t(x) \leq 0, t \in [T]$. Under the standard assumption that $\mathcal{X}^{\star}$ is not empty (called the feasibility assumption), the goal is to design an online algorithm that simultaneously achieves a small regret (1) with respect to any admissible benchmark $x^{\star} \in \mathcal{X}^{\star}$ and a small CCV.

With the constraint sets $\mathcal{G}_t = \{x\in \mathcal{X}:g_t(x)\leq 0\}$ being convex for all $t$, the assumption $\mathcal{X}^{\star} = \cap_{t}\mathcal{G}_{t}\neq \varnothing$ implies that the sets $S_{t} = \cap_{\tau = 1}^{t}\mathcal{G}_{\tau}$ are convex and nested, i.e., $S_{t}\subseteq S_{t - 1}$ and $\mathcal{X}^{\star}\subseteq S_{t}$ for all $t$. Essentially, the sets $S_{t}$ are sufficient to quantify the CCV.
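To make the static regret in (1) concrete, the following is a small self-contained sketch (our own toy example, not from the paper) that measures the regret of a follow-the-leader-style player against the best fixed action in hindsight:

```python
import numpy as np

# Toy illustration (not from the paper): static regret for f_t(x) = (x - a_t)^2
# on X = [0, 1], with the player choosing the running mean of past targets
# (a follow-the-leader-style rule).
rng = np.random.default_rng(1)
T = 100
a = rng.uniform(0, 1, size=T)                    # targets defining f_t
x_played = np.concatenate(([0.5], np.cumsum(a)[:-1] / np.arange(1, T)))

total_loss = lambda x: np.sum((x - a) ** 2)      # sum_t f_t(x) for a fixed x
grid = np.linspace(0, 1, 1001)
best_fixed = grid[np.argmin([total_loss(x) for x in grid])]
regret = np.sum((x_played - a) ** 2) - total_loss(best_fixed)
print(f"Regret_T = {regret:.3f}")                # small relative to T = 100
```

Because the losses here are strongly convex, this player's regret grows only logarithmically in $T$, far below the trivial $O(T)$ bound.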
# 1.1 Prior Work

Constrained OCO (COCO): (A) Time-invariant constraints: COCO with time-invariant constraints, i.e., $g_{t} = g, \forall t$ [Yuan and Lamperski, 2018, Jenatton et al., 2016, Mahdavi et al., 2012, Yi et al., 2021], has been considered extensively, where the function $g$ is assumed to be known to the algorithm a priori. The algorithm is allowed to take infeasible actions at any time so as to avoid the costly projection step of the vanilla projected OGD algorithm, and the main objective is to design an efficient algorithm with small regret and CCV while avoiding the explicit projection step.

(B) Time-varying constraints: The more difficult question is solving the COCO problem when the constraint functions $g_{t}$ change arbitrarily with time $t$. In this setting, all prior work on COCO made the feasibility assumption. One popular approach for solving COCO optimizes a Lagrangian function that is updated using the primal and dual variables [Yu et al., 2017, Sun et al., 2017, Yi et al., 2023]. Alternatively, [Neely and Yu, 2017] and [Liakopoulos et al., 2019] used the drift-plus-penalty (DPP) framework [Neely, 2010] to solve COCO, but needed additional assumptions, e.g., Slater's condition in [Neely and Yu, 2017] and a weaker form of the feasibility assumption in [Liakopoulos et al., 2019].

[Guo et al., 2022] obtained bounds similar to [Neely and Yu, 2017] but without assuming Slater's condition. However, the algorithm of [Guo et al., 2022] is quite computationally intensive, since it requires solving a convex optimization problem on each round. Finally, very recently, the state-of-the-art guarantees for COCO of $O(\sqrt{T})$ regret and $O(\sqrt{T}\log T)$ CCV were derived in [Sinha and Vaze, 2024] with a very simple algorithm that combines the loss function at time $t$ and the CCV accrued till time $t$ into a single loss function, and then executes the online gradient descent (OGD) algorithm on this single loss function with an adaptive step-size. An extension of [Sinha and Vaze, 2024] can be found in [Lekeufack and Jordan, 2025], which considers the COCO problem under predictions about the $f_{t}$'s and $g_{t}$'s. See Remark 6 for a comparison of this work with [Lekeufack and Jordan, 2025]. Please refer to Table 1 for a brief summary of the prior results.

The COCO problem has also been considered in the dynamic setting [Chen and Giannakis, 2018, Cao and Liu, 2018, Vaze, 2022, Liu et al., 2022], where the benchmark $x^{\star}$ in (1) is replaced by $x_{t}^{\star} = \arg \min_{x} f_{t}(x)$, which is allowed to change over time. However, in this paper, we focus our entire attention on the static version. A special case of COCO is the online constraint satisfaction (OCS) problem that does not involve any cost function, i.e., $f_{t} = 0$, $\forall t$, and the only object of interest is minimizing the CCV. The algorithm with the state-of-the-art guarantee for COCO [Sinha and Vaze, 2024] was shown to have a CCV of $O(\sqrt{T}\log T)$ for OCS.
# 1.2 Convex Body Chasing Problem

A well-studied problem related to COCO is the nested convex body chasing (NCBC) problem [Bansal et al., 2018, Argue et al., 2019, Bubeck et al., 2020], where at each round $t$, a convex set $\mathcal{X}_t \subseteq \mathcal{X}$ is revealed such that $\mathcal{X}_t \subseteq \mathcal{X}_{t-1}$, and $\mathcal{X}_0 = \mathcal{X} \subseteq \mathbb{R}^d$ is a convex, compact, and bounded set. The objective is to choose actions $x_t \in \mathcal{X}_t$ so as to minimize the total movement cost $C = \sum_{t=1}^{T} \|x_t - x_{t-1}\|$, where $x_0 \in \mathcal{X}$ is some fixed action. The best-known algorithms for NCBC [Bansal et al., 2018, Argue et al., 2019, Bubeck et al., 2020] choose $x_t$ to be the centroid or Steiner point of $\mathcal{X}_t$, i.e., a point well inside the newly revealed convex set, in order to reduce the future movement cost. With COCO, such an approach does not appear useful because of the presence of the cost functions $f_t$, whose minima could lie towards the boundary of the convex sets $\mathcal{X}_t$.

# 1.3 Limitations of Prior Work

We explicitly show in Lemma 6 that the best known algorithm [Sinha and Vaze, 2024] (in terms of regret, and up to log factors for CCV) for solving COCO suffers a CCV of $\Omega(\sqrt{T}\log T)$ even for 'simple' problem instances where $f_{t} = f$ and $g_{t} = g$ for all $t$ and $d = 1$, for which ideally the CCV should be $O(1)$. The same is true for most other algorithms; the main reason for their large CCV on simple instances is that all these algorithms treat minimizing the CCV as a regret minimization problem for the functions $g_{t}$. What they fail to exploit is the geometry of the underlying nested convex sets $S_{t}$ that control the CCV.
# 1.4 Main Open Question

In comparison to the above discussed upper bounds, the best known simultaneous lower bound [Sinha and Vaze, 2024] for COCO is $\mathcal{R}_{[1:T]} = \Omega(\sqrt{d})$ and $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{d})$, where $d$ is the dimension of the action space $\mathcal{X}$. Without constraints, i.e., $g_t \equiv 0$ for all $t$, the lower bound is $\mathcal{R}_{[1:T]} = \Omega(\sqrt{T})$ [Hazan, 2012]. Thus, there is a fundamental gap between the lower and upper bounds for the CCV, and the main open question for COCO is: Is it possible to simultaneously achieve $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = o(\sqrt{T})$ or $\mathrm{CCV}_{[1:T]} = O(1)$ for COCO? Even though we do not fully resolve this question, in this paper we make meaningful progress by proposing an algorithm that exploits the geometry of the nested sets $S_t$, and show that it is possible to simultaneously achieve $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = O(1)$ in certain cases; for the general case, we give a bound on the CCV that depends on the shape of the convex sets $S_t$ while achieving $\mathcal{R}_{[1:T]} = O(\sqrt{T})$. In particular, the contributions of this paper are as follows.
# 1.5 Our Contributions

In this paper, we propose an algorithm (Algorithm 2) that exploits the geometry of the nested convex sets $S_{t}$. In particular, at time $t$, Algorithm 2 first takes an OGD step from the previous action $x_{t-1}$ with respect to the most recently revealed loss function $f_{t-1}$ with an appropriate step-size to reach $y_{t-1}$, and then projects $y_{t-1}$ onto the most recently revealed set $S_{t-1}$ to get $x_{t}$, the action to be played at time $t$. Let $F_{t}$ be the "projection" hyperplane passing through $x_{t}$ that is perpendicular to $x_{t} - y_{t-1}$. For Algorithm 2, we derive the following guarantees.

- The regret of Algorithm 2 is $O(\sqrt{T})$.
- The CCV of Algorithm 2 takes the following form:
  - When the sets $S_{t}$ are structured, e.g. spheres or axis-parallel cuboids/regular polygons, the CCV is $O(1)$.
  - For the special case of $d = 2$, when the projection hyperplanes $F_{t}$ progressively make increasing angles with respect to the first projection hyperplane $F_{1}$, the CCV is $O(1)$.
  - For general $S_{t}$, the CCV is upper bounded by a quantity $\mathcal{V}$ that is a function of the distance between the consecutive sets $S_{t}$ and $S_{t+1}$ for all $t$, the shape of the $S_{t}$'s, the dimension $d$ and the diameter $D$. Since $\mathcal{V}$ depends on the shape of the $S_{t}$'s, there is no universal bound on $\mathcal{V}$, and the derived bound is instance dependent.
- As pointed out above, for general $S_{t}$, there is no universal bound on the CCV of Algorithm 2. Thus, we propose an algorithm, Switch, that combines Algorithm 2 and the algorithm from [Sinha and Vaze, 2024] to provide a regret bound of $O(\sqrt{T})$ and a CCV that is the minimum of $\mathcal{V}$ and $O(\sqrt{T}\log T)$. Thus, Switch provides a best-of-two-worlds CCV guarantee, which is small if the sets $S_{t}$ are 'nice', while in the worst case it is at most $O(\sqrt{T}\log T)$.
- For the OCS problem, where $f_{t} = 0$, $\forall t$, we show that the CCV of Algorithm 2 is $O(1)$, compared to the CCV of $O(\sqrt{T}\log T)$ in [Sinha and Vaze, 2024].
# 2 COCO Problem

On round $t$, the online policy first chooses an admissible action $x_{t} \in \mathcal{X} \subset \mathbb{R}^{d}$, and then the adversary chooses a convex cost function $f_{t}: \mathcal{X} \to \mathbb{R}$ and a constraint of the form $g_{t}(x) \leq 0$, where $g_{t}: \mathcal{X} \to \mathbb{R}$ is a convex function. Once the action $x_{t}$ has been chosen, we let $\nabla f_{t}(x_{t})$ and the full function $g_{t}$ (or the set $\{x: g_{t}(x) \leq 0\}$) be revealed, as is standard in the literature. We now state the standard assumptions made in the literature while studying the COCO problem [Guo et al., 2022, Yi et al., 2021, Neely and Yu, 2017, Sinha and Vaze, 2024].

Assumption 1 (Convexity) $\mathcal{X} \subset \mathbb{R}^d$ is the admissible set that is closed, convex and has a finite Euclidean diameter $D$. The cost function $f_t: \mathcal{X} \mapsto \mathbb{R}$ and the constraint function $g_t: \mathcal{X} \mapsto \mathbb{R}$ are convex for all $t \geq 1$.
| Reference | Regret | CCV | Complexity per round |
| --- | --- | --- | --- |
| [Neely and Yu, 2017], [Liakopoulos et al., 2019] | $O(\sqrt{T})$ | $O(\sqrt{T})$ | Conv-OPT, Slater's condition |
| [Guo et al., 2022] | $O(\sqrt{T})$ | $O(T^{3/4})$ | Conv-OPT |
| [Yi et al., 2023] | $O(T^{\max(\beta, 1-\beta)})$ | $O(T^{1-\beta/2})$ | Conv-OPT |
| [Sinha and Vaze, 2024] | $O(\sqrt{T})$ | $O(\sqrt{T}\log T)$ | Projection |
| This paper | $O(\sqrt{T})$ | $O(\min\{\mathcal{V}, \sqrt{T}\log T\})$ | Projection |

Table 1: Summary of the results on COCO for arbitrary time-varying convex constraints and convex cost functions. In the above table, $0 \leq \beta \leq 1$ is an adjustable parameter. Conv-OPT refers to solving a constrained convex optimization problem on each round. Projection refers to the Euclidean projection operation onto the convex set $\mathcal{X}$. The CCV bound for this paper is stated in terms of $\mathcal{V}$, which can be $O(1)$ or can depend on the shape of the convex sets $S_{t}$.
Assumption 2 (Lipschitzness) All cost functions $\{f_t\}_{t \geq 1}$ and constraint functions $\{g_t\}_{t \geq 1}$ are $G$-Lipschitz, i.e., for any $x, y \in \mathcal{X}$, we have $|f_t(x) - f_t(y)| \leq G\|x - y\|$ and $|g_t(x) - g_t(y)| \leq G\|x - y\|$, $\forall t \geq 1$.

Assumption 3 (Feasibility) With $\mathcal{G}_t = \{x\in \mathcal{X}:g_t(x)\leq 0\}$, we assume that $\mathcal{X}^{\star} = \cap_{t = 1}^{T}\mathcal{G}_{t}\neq \emptyset$. Any action $x^{\star}\in \mathcal{X}^{\star}$ is defined to be feasible.

The feasibility assumption distinguishes the cost functions from the constraint functions and is common across all previous literature on COCO [Guo et al., 2022, Neely and Yu, 2017, Yu and Neely, 2016, Yuan and Lamperski, 2018, Yi et al., 2023, Liakopoulos et al., 2019, Sinha and Vaze, 2024].

For any real number $z$, we define $(z)^{+} \equiv \max(0, z)$. Since the $g_{t}$'s are revealed after the action $x_{t}$ is chosen, an online policy need not necessarily take feasible actions on each round. Thus, in addition to the static regret defined below,

$$
\operatorname{Regret}_{[1:T]} \equiv \sup_{\{f_{t}\}_{t=1}^{T}} \sup_{x^{\star} \in \mathcal{X}^{\star}} \operatorname{Regret}_{[1:T]}(x^{\star}), \quad \operatorname{Regret}_{[1:T]}(x^{\star}) \equiv \sum_{t=1}^{T} f_{t}(x_{t}) - \sum_{t=1}^{T} f_{t}(x^{\star}), \tag{2}
$$

an additional obvious metric of interest is the total cumulative constraint violation (CCV), defined as $\mathrm{CCV}_{[1:T]} = \sum_{t=1}^{T}(g_t(x_t))^+$. Under the standard assumption (Assumption 3) that $\mathcal{X}^{\star}$ is not empty, the goal is to design an online policy that simultaneously achieves a small regret with respect to any $x^{\star} \in \mathcal{X}^{\star}$ and a small CCV.

For simplicity, we define the set

$$
S_{t} = \cap_{\tau = 1}^{t} \mathcal{G}_{\tau}, \tag{3}
$$

where $\mathcal{G}_t$ is as defined in Assumption 3. All $\mathcal{G}_t$'s are convex and consequently all $S_t$'s are convex and nested, i.e., $S_t \subseteq S_{t-1}$. Moreover, because of Assumption 3, each $S_t$ is non-empty and in particular $\mathcal{X}^\star \subseteq S_t$ for all $t$. After action $x_t$ has been chosen, the set $S_t$ controls the constraint violation, which can be used to write an upper bound on $\mathrm{CCV}_{[1:T]}$ as follows.

Definition 4 For a convex set $\mathcal{C}$ and a point $x \notin \mathcal{C}$, $\operatorname{dist}(x, \mathcal{C}) = \min_{y \in \mathcal{C}} \|x - y\|$.

With $G$ being the common Lipschitz constant for all $g_{t}$'s, the constraint violation at time $t$ satisfies

$$
\left(g_{t}(x_{t})\right)^{+} \leq G \operatorname{dist}(x_{t}, S_{t}), \quad \text{and} \quad \mathrm{CCV}_{[1:T]} \leq G \sum_{t=1}^{T} \operatorname{dist}(x_{t}, S_{t}). \tag{4}
$$
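The bound in (4) can be checked numerically. The sketch below (our own illustration, not from the paper) uses a Euclidean ball $S = \{x : \|x - c\| \leq r\}$ with $g(x) = \|x - c\| - r$, which is $1$-Lipschitz, so the bound holds with $G = 1$ (in fact with equality for this particular $g$):

```python
import numpy as np

# Sketch: the per-round bound (g_t(x_t))^+ <= G * dist(x_t, S_t) from Eq. (4),
# checked for S = B(c, r) and g(x) = ||x - c|| - r (1-Lipschitz, so G = 1).

def g(x, c, r):
    return np.linalg.norm(x - c) - r

def dist_to_ball(x, c, r):
    # Distance from x to the ball B(c, r): zero inside, ||x - c|| - r outside.
    return max(np.linalg.norm(x - c) - r, 0.0)

rng = np.random.default_rng(0)
c, r, G = np.zeros(3), 1.0, 1.0
for _ in range(100):
    x = rng.normal(size=3) * 2.0
    assert max(g(x, c, r), 0.0) <= G * dist_to_ball(x, c, r) + 1e-12
print("Eq. (4) bound verified on 100 random points")
```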
# 3 Algorithm from Sinha and Vaze [2024]

The best known algorithm (Algorithm 1) for solving COCO, due to Sinha and Vaze [2024] (in terms of regret, and up to log factors for CCV), was shown to have the following guarantee.

# Algorithm 1 Online Algorithm from Sinha and Vaze [2024]

1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$, $G$ = common Lipschitz constant, $T$ = horizon length, $D$ = Euclidean diameter of the admissible set $\mathcal{X}$, $\mathcal{P}_{\mathcal{X}}(\cdot)$ = Euclidean projection oracle onto the set $\mathcal{X}$
2: Let $\beta = (2GD)^{-1}$, $V = 1$, $\lambda = \frac{1}{2\sqrt{T}}$, $\Phi(x) = \exp(\lambda x) - 1$.
3: Initialization: Set $x_{1} = 0$, $\mathrm{CCV}(0) = 0$.
4: For $t = 1 : T$
5: Play $x_{t}$, observe $f_{t}, g_{t}$, incur a cost of $f_{t}(x_{t})$ and constraint violation of $(g_{t}(x_{t}))^{+}$.
6: $\tilde{f}_t \gets \beta f_t$, $\tilde{g}_t \gets \beta \max(0, g_t)$.
7: $\mathrm{CCV}(t) = \mathrm{CCV}(t-1) + \tilde{g}_t(x_t)$.
8: Compute $\nabla_t = \nabla \hat{f}_t(x_t)$, where $\hat{f}_t(x) \coloneqq V\tilde{f}_t(x) + \Phi'(\mathrm{CCV}(t))\tilde{g}_t(x)$, $t \geq 1$.
9: $x_{t+1} = \mathcal{P}_{\mathcal{X}}(x_t - \eta_t\nabla_t)$, where $\eta_t = \frac{\sqrt{2}D}{2\sqrt{\sum_{\tau=1}^t \|\nabla_\tau\|_2^2}}$.
10: EndFor

Theorem 5 [Sinha and Vaze [2024]] Algorithm 1's $\mathrm{Regret}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = O(\sqrt{T}\log T)$ when $f_t, g_t$ are convex.
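The update in Algorithm 1 can be sketched in a few lines. The toy instance below ($f(x) = x^2$, $g(x) = 0.5 - x$ on $\mathcal{X} = [0, 1]$, so the feasible set is $[0.5, 1]$) is our own illustration and not from the paper:

```python
import numpy as np

# Sketch of Algorithm 1: adaptive-step OGD on the combined loss
# V*f_tilde + Phi'(CCV(t))*g_tilde, on a 1-d toy instance (our own):
# f(x) = x^2, g(x) = 0.5 - x on X = [0, 1].
D, G, T = 1.0, 2.0, 1000
beta, V, lam = 1.0 / (2 * G * D), 1.0, 1.0 / (2 * np.sqrt(T))
phi_prime = lambda s: lam * np.exp(lam * s)       # Phi(x) = exp(lam*x) - 1

x, ccv, sum_sq = 0.0, 0.0, 0.0
for t in range(1, T + 1):
    g_plus = max(0.5 - x, 0.0)                    # (g_t(x_t))^+
    ccv += beta * g_plus                          # CCV(t) on the scaled g_tilde
    grad = V * beta * 2.0 * x                     # gradient of V * beta * f
    if g_plus > 0:
        grad += phi_prime(ccv) * beta * (-1.0)    # gradient of beta * g is -1
    sum_sq += grad ** 2
    eta = np.sqrt(2) * D / (2 * np.sqrt(sum_sq))  # adaptive step size eta_t
    x = min(max(x - eta * grad, 0.0), 1.0)        # OGD step, project onto X
print(f"final action: {x:.3f}, scaled CCV: {ccv:.3f}")
```

As Lemma 6 below argues, on instances like this one the exponential potential lets violations accumulate for a long time before the constraint term dominates the cost gradient.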
We next show that the analysis of Sinha and Vaze [2024] is in fact tight for the CCV even when $d = 1$ and $f_{t}(x) = f(x)$ and $g_{t}(x) = g(x)$ for all $t$. With finite diameter $D$ and the fact that any $x^{\star} \in \mathcal{X}^{\star}$ belongs to all nested convex bodies $S_{t}$, when $d = 1$ one expects that the CCV of any algorithm in this case will be $O(D)$. However, as we show next, Algorithm 1 does not effectively make use of the geometric constraints imposed by the nested convex bodies $S_{t}$.

Lemma 6 Even when $d = 1$ and $f_{t}(x) = f(x)$ and $g_{t}(x) = g(x)$ for all $t$, Algorithm 1's $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{T} \log T)$.

Proof: Consider $d = 1$, and let $\mathcal{X} = [1, a]$, $a > 2$. Moreover, let $f_t(x) = f(x)$ and $g_t(x) = g(x)$ for all $t$. Let $f(x) = cx^2$ for some (large) $c > 0$, and let $g(x)$ be such that $\mathcal{G} = \{x : g(x) \leq 0\} \subseteq [a/2, a]$ and $|\nabla g(x)| \leq 1$ for all $x$.

Let $1 < x_{1} < a/2$. Note that $\mathrm{CCV}(t)$ (defined in Algorithm 1) is a non-decreasing function, and let $t^{\star}$ be the earliest time $t$ such that $\Phi^{\prime}(\mathrm{CCV}(t))\nabla g(x) < -c$. For $f(x) = cx^{2}$, $\nabla f(x) \geq c$ for all $x > 1$. Thus, using Algorithm 1's definition, it follows that for all $t \leq t^{\star}$, $x_{t} < a/2$, since the derivative of $f$ dominates the derivative of $\Phi^{\prime}(\mathrm{CCV}(t))g(x)$ until then.

Since $\Phi(x) = \exp(\lambda x) - 1$ with $\lambda = \frac{1}{2\sqrt{T}}$, and by definition $|\nabla g(x)| \leq 1$ for all $x$, it follows that by time $t^{\star}$, $\mathrm{CCV}_{[1:t^{\star}]} = \Omega(\sqrt{T}\log T)$. Therefore, $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{T}\log T)$.

Essentially, Algorithm 1 treats minimizing the CCV as a regret minimization problem for the function $g$, similar to the function $f$, and this leads to its CCV of $\Omega(\sqrt{T} \log T)$. For any given input instance with $d = 1$, an alternate algorithm that chooses its actions following online gradient descent (OGD) projected onto the most recently revealed feasible set $S_{t}$ achieves $O(\sqrt{T})$ regret (irrespective of the starting action $x_{1}$) and $O(D)$ CCV (since any $x^{\star} \in S_{t}$ for all $t$). We extend this intuition in the next section, and present an algorithm that exploits the geometry of the nested convex sets $S_{t}$ for any $d$.
# 4 New Algorithm for Solving COCO

In this section, we present a simple algorithm (Algorithm 2) for solving COCO. Algorithm 2 is essentially an online projected gradient descent (OGD) algorithm: it first takes an OGD step from the previous action $x_{t-1}$ with respect to the most recently revealed loss function $f_{t-1}$ with an appropriate step-size, projects the result onto $S_{t-2}$ to reach $y_{t-1}$, and then projects $y_{t-1}$ onto the most recently revealed set $S_{t-1}$ (defined in (3)) to get $x_t$, the action to be played at time $t$.

Remark 1 Step 6 of Algorithm 2 might appear unnecessary; however, it is useful for proving Theorem 12.

Since Algorithm 2 is essentially an online projected gradient descent algorithm, similar to the classical result on OGD, we next show that the regret of Algorithm 2 is $O(\sqrt{T})$.
# Algorithm 2 Online Algorithm for COCO

1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$, $G$ = common Lipschitz constant, $d$ = dimension of the admissible set $\mathcal{X}$, step size $\eta_t = \frac{D}{G\sqrt{t}}$, $D$ = Euclidean diameter of the admissible set $\mathcal{X}$, $\mathcal{P}_{\mathcal{X}}(\cdot)$ = Euclidean projection onto the set $\mathcal{X}$.
2: Initialization: Set $x_{1} \in \mathcal{X}$ arbitrarily, $\mathrm{CCV}(0) = 0$.
3: For $t = 1:T$
4: Play $x_{t}$, observe $f_{t}, g_{t}$, incur a cost of $f_{t}(x_{t})$ and constraint violation of $(g_{t}(x_{t}))^{+}$.
5: Set $S_{t}$ as defined in (3).
6: $y_{t} = \mathcal{P}_{S_{t-1}}(x_{t} - \eta_{t}\nabla f_{t}(x_{t}))$
7: $x_{t+1} = \mathcal{P}_{S_t}(y_t)$
8: EndFor
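The two-projection update of Algorithm 2 can be sketched on a toy instance of nested balls (our own illustration; the shrinking radii $r_t = 0.5 + 1/t$ and the cost $f_t(x) = \|x\|^2$ are not from the paper):

```python
import numpy as np

# Sketch of Algorithm 2 on a toy instance (our own, not from the paper):
# nested balls S_t = B(c, r_t) with r_t = 0.5 + 1/t, cost f_t(x) = ||x||^2,
# and g_t(x) = ||x - c|| - r_t (1-Lipschitz). S_0 = B(c, 10) plays the role of X.

def proj_ball(y, c, r):
    # Euclidean projection onto the ball B(c, r).
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + d * (r / n)

D, G, T = 20.0, 2.0, 200                      # diameter of X, Lipschitz bound, horizon
c = np.array([1.0, 1.0])
radius = lambda t: 0.5 + 1.0 / t if t >= 1 else 10.0
grad_f = lambda x: 2.0 * x

x, ccv = np.zeros(2), 0.0                     # x_1 chosen arbitrarily in X
for t in range(1, T + 1):
    ccv += max(np.linalg.norm(x - c) - radius(t), 0.0)    # (g_t(x_t))^+
    eta = D / (G * np.sqrt(t))                            # step size eta_t
    y = proj_ball(x - eta * grad_f(x), c, radius(t - 1))  # OGD step, then S_{t-1}
    x = proj_ball(y, c, radius(t))                        # project onto S_t
print(f"CCV after {T} rounds: {ccv:.3f}")
```

Because $x_{t+1} \in S_t$ by construction, the only violation at round $t+1$ comes from how much $S_{t+1}$ has shrunk relative to $S_t$, which is exactly what the movement-cost analysis below quantifies.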
166
+ Lemma 7 The Regret $_{[1:T]}$ for Algorithm 2 is $O(\sqrt{T})$ .
167
+
168
+ Extension of Lemma 7 when $f_{t}$ 's are strongly convex which results in $\mathrm{Regret}_{[1:T]} = O(\log T)$ for Algorithm 2 follows standard arguments Hazan [2012] and is omitted.
169
+
170
+ The real challenge is to bound the total CCV for Algorithm 2. Let $x_{t}$ be the action played by Algorithm 2. Then by definition, $x_{t} \in S_{t-1}$ . Moreover, from (4), the constraint violation at time $t$ , $\mathrm{CCV}(t) \leq G\mathrm{dist}(x_{t}, S_{t})$ . The next action $x_{t+1}$ chosen by Algorithm 2 belongs to $S_{t}$ , however, it is obtained by first taking an OGD step from $x_{t}$ to reach $y_{t}$ and then projects $y_{t}$ onto $S_{t}$ . Since $f_{t}$ 's are arbitrary, the OGD step could be towards any direction, and thus, there is no direct relationship between $x_{t+1}$ and $x_{t}$ . Informally, $(x_{1}, x_{2}, \ldots, x_{T})$ is not a connected curve with any useful property. Thus, we take recourse in upper bounding the CCV via upper bounding the total movement cost $M$ (defined below) between nested convex sets using projections.
171
+
172
+ The total constraint violation for Algorithm 2 is
173
+
174
+ $$
175
+ \operatorname {C C V} _ {[ 1: t ]} \leq G \sum_ {\tau = 1} ^ {t ^ {\infty}} \operatorname {d i s t} \left(x _ {\tau}, S _ {\tau}\right) \stackrel {(a)} {\leq} G \sum_ {\tau = 1} ^ {t} \left| \left| x _ {\tau} - b _ {\tau} \right| \right| \stackrel {(b)} {=} G M _ {t}, \tag {5}
176
+ $$
177
+
178
+ where in (a) $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$ , i.e., $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$ and in (b) $M_{t} = \sum_{\tau=1}^{t} ||x_{\tau} - b_{\tau}||$ is defined to be the total movement cost on the instance $S_{1}, \ldots, S_{t}$ . The object of interest is $\mathbf{M}_{\mathbf{T}}$ .

# 5 Bounding the Total Movement Cost $M_T$ for Algorithm 2

We start by considering structured problem instances where the CCV of Algorithm 2 is $O(1)$, i.e., independent of $T$.

Lemma 8 If all nested convex bodies $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ are spheres, then $M_T \leq d^{3/2}D = O(1)$.

Lemma 9 If all nested convex bodies $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ are cuboids/regular polygons that are axis-parallel to each other, then $M_T \leq d^{3/2} D = O(1)$.

Interestingly, the input instance where the $S_{t}$'s are axis-parallel cuboids has been used to derive the only known lower bound for COCO, of $\mathrm{Regret}_{[1:T]} = \Omega(\sqrt{d})$ and $\mathrm{CCV}_{[1:T]} = \Omega(\sqrt{d})$ [Sinha and Vaze, 2024].

Remark 2 Lemmas 8 and 9 are the first results of their kind in COCO: even for such nicely structured instances, the previous best known guarantee is $CCV_{[1:T]} = O(\sqrt{T}\log T)$ [Sinha and Vaze, 2024] or $CCV_{[1:T]} = O(\sqrt{T})$ [Ferreira and Soares, 2025].

Next, we show that a similar $O(1)$ CCV guarantee can be obtained for Algorithm 2 with less structured input, albeit only when $d = 2$.

# 5.1 Special case of $d = 2$

In this section, we show that if $d = 2$ (all convex sets $S_{t}$ lie in a plane) and the projections satisfy a monotonicity property depending on the problem instance, then the total CCV of Algorithm 2 can be bounded independently of the time horizon $T$, yielding an $O(1)$ CCV.

![](images/54d15352dd67e8426b274d3d974caf364cb05afa89ffc1f69f48b5a79535705f.jpg)
Figure 1: Definition of the $F_{t}$'s.

![](images/327abf5b86d51a43aa87c50895928c68415097b1f8de7ab24c58b90c88d10a7f.jpg)
Figure 2: The cone $C_{w_t}(c_t)$, with unit vector $w_t$, containing the convex hull of $m_t$ and $S_t$.

Definition 10 Recall from the definition of Algorithm 2 that $y_{t} = \mathcal{P}_{S_{t - 1}}(x_{t} - \eta_{t}\nabla f_{t}(x_{t}))$ and $x_{t + 1} = \mathcal{P}_{S_t}(y_t)$. Let $F_{t}$ be the hyperplane perpendicular to the line segment $(y_{t}, x_{t + 1})$ passing through $x_{t + 1}$. Without loss of generality, we assume $y_{t} \notin S_{t}$, since otherwise the projection is trivial. Essentially, $F_{t}$ is the projection hyperplane at time $t$. Let $\mathcal{H}_t^+$ denote the positive half-plane corresponding to $F_{t}$, i.e., $\mathcal{H}_t^+ = \{z: z^T(y_t - x_{t + 1}) \geq 0\}$. Refer to Fig. 1. Let $\theta_t$ be the angle between $F_{1}$ and $F_{t}$.

Definition 11 The instance $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ is defined to be monotonic if $\theta_{2} \leq \theta_{3} \leq \dots \leq \theta_{T}$.
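
In $d = 2$, the angle $\theta_t$ can be computed from the normal directions $y_t - x_{t+1}$. The sketch below (names our own) measures the angle between the lines $F_1$ and $F_t$ folded into $[0, \pi/2]$, which is one possible convention, since the text does not fix one.

```python
import numpy as np

def hyperplane_angle(y_1, x_2, y_t, x_t1):
    """Angle between F_1 and F_t for d = 2 (sketch). Each F_t is the line
    through x_{t+1} with normal y_t - x_{t+1}; the angle between two lines
    equals the angle between their normals, folded into [0, pi/2]."""
    n1 = (y_1 - x_2) / np.linalg.norm(y_1 - x_2)
    nt = (y_t - x_t1) / np.linalg.norm(y_t - x_t1)
    return float(np.arccos(np.clip(abs(n1 @ nt), 0.0, 1.0)))

# Perpendicular projection directions give theta_t = pi/2:
theta = hyperplane_angle(np.array([1.0, 0.0]), np.zeros(2),
                         np.array([0.0, 1.0]), np.zeros(2))
# theta is approximately pi/2
```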

Theorem 12 For $d = 2$, when the instance is monotonic, $CCV_{[1:T]} = O(GD)$ for Algorithm 2.

Theorem 12 shows that the CCV of Algorithm 2 is independent of $T$ as long as the instance is monotonic when $d = 2$. It is worth noting that even under the monotonicity assumption, it is non-trivial to upper bound the CCV, since the successive angles made by the $F_{t}$'s with $F_{1}$ can increase arbitrarily slowly, making it difficult to control the total CCV. The proof combines basic convex-geometry results from Manselli and Pucci [1991] with the definition of Algorithm 2 and the monotonicity condition.

Finally, in the next subsection, we upper bound $M_T$, and consequently the CCV of Algorithm 2, when the input has no structure other than the $S_t$'s being nested.

# 5.2 General Guarantee on CCV

In this subsection, we give a general (instance-dependent) bound on $M_T$ for Algorithm 2 that holds for any sequence of nested convex bodies and depends on their geometry. To state the result we need the following preliminaries.

Following (5), $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$ with $x_{t} \in \partial S_{t-1}$, where $\partial S$ denotes the boundary of a convex set $S$. Without loss of generality, $x_{t} \notin S_{t}$, since otherwise the distance $\|x_{t} - b_{t}\| = 0$. Let $m_{t}$ be the midpoint of $x_{t}$ and $b_{t}$, i.e., $m_{t} = \frac{x_{t} + b_{t}}{2}$.

Definition 13 Let $\mathcal{C}_t$ be the convex hull of $m_t \cup S_t$. Let $w_t$ be a unit vector for which there exists $c_t > 0$ such that the cone

$$
C_{w_{t}}(c_{t}) = \left\{ z \in \mathbb{R}^{d} : -w_{t}^{T} \frac{(z - m_{t})}{\|z - m_{t}\|} \geq c_{t} \right\}
$$

contains $\mathcal{C}_t$. Since $S_{t}$ is convex, such $w_{t}$ and $c_{t} > 0$ exist. For example, $w_{t} = b_{t} - x_{t}$ is one such choice, for which $c_{t} > 0$ since $m_t \notin S_t$. See Fig. 2 for a pictorial representation.

Let $c_{w_t,t}^\star$ be the largest $c_t$ for which $C_{w_t}(c_t)$ contains $\mathcal{C}_t$, and let $c_t^\star = \max_{w_t} c_{w_t,t}^\star$ and $w_t^\star = \arg \max_{w_t} c_{w_t,t}^\star$. Moreover, let $c^\star = \min_{t} c_t^\star$, where by definition $c^\star < 1$.

Essentially, $2\cos^{-1}(c_t^\star)$ is the angular width of $\mathcal{C}_t$ with respect to $w_{t}^{\star}$, i.e., each element of $\mathcal{C}_t$ makes an angle of at most $\cos^{-1}(c_t^\star)$ with $w_{t}^{\star}$.

Remark 3 Note that $c_t^\star$ is only a function of the distance $\|x_t - b_t\|$ and the shape of the $S_t$'s, in particular, the maximum width of $S_t$ along the directions perpendicular to the vector $x_t - b_t$ for all $t$, which can be at most the diameter $D$. $c_{t}^{\star}$ decreases (increasing the "width" of the cone $C_{w_t^\star}(c_t^\star)$) as $\| x_{t} - b_{t}\|$ decreases, but small $\| x_{t} - b_{t}\|$ also implies a small violation at time $t$ from (5).

Remark 4 Is $c^{\star}$ instance dependent or algorithm dependent? For notational simplicity, we have defined $c^{\star}$ using the $x_{t}$'s (an Algorithm 2 specific quantity) and their projections $b_{t}$ onto $S_{t}$. However, since $x_{t}$ and $x_{t-1}$ have no useful relation between them, $x_{t}$ can be any arbitrary point on the boundary of $S_{t-1}$, and $c^{\star}$ is in effect defined with respect to an arbitrary $x_{t} \in S_{t-1}$, making it an instance-dependent quantity.

Lemma 14 $M_T$ for Algorithm 2 is at most $\frac{2V_d(d - 1)}{V_{d - 1}}\left(\frac{1}{c^\star}\right)^d D$, where $V_{d}$ is the $(d - 1)$-dimensional Lebesgue measure of the unit sphere in $d$ dimensions.

Proof Idea When $x_{t} \in \partial S_{t-1}$ is projected onto $S_{t}$ to get $b_{t} = \mathcal{P}_{S_{t}}(x_{t})$, the diameter of $S_{t}$ is at most the diameter of $S_{t-1}$ minus $\|x_{t} - b_{t}\|$, but only along the direction $b_{t} - x_{t}$. Since the shape of $S_{t}$ is arbitrary, the diameter of $S_{t}$ need not be smaller than the diameter of $S_{t-1}$ along any pre-specified direction, which was the main idea used to derive Lemma 8. Thus, to prove Lemma 14, we relate the distance $\|x_{t} - b_{t}\|$ to the decrease in the mean width of a convex body, defined as the expected width of the body along a uniformly random direction (the formal definition is provided in Definition 34).

Note that $V_d / V_{d-1} = O(1 / \sqrt{d})$. Combining Lemma 7 and Lemma 14, we thus get the following main result of the paper for Algorithm 2.

Theorem 15 Algorithm 2 has $\mathrm{Regret}_{[1:T]} = O(\sqrt{T})$ and $CCV_{[1:T]} = O\left(\sqrt{d}\left(\frac{1}{c^{\star}}\right)^{d}D\right)$.

Theorem 15 is an instance-dependent result for the CCV, compared to the prior universal guarantees of $\tilde{O}(\sqrt{T})$ on the CCV. In particular, it exploits the geometric structure of the nested convex sets $S_{t}$ and derives an upper bound on the CCV that depends on the 'shape' of the $S_{t}$'s only via $c^{\star}$. Moreover, $c^{\star}$ is only a dimension ($d$) dependent quantity (independent of $T$) as long as the minimum distance between consecutive constraint sets is not a function of $T$, since the diameter $D$ is constant, whereas all existing algorithms suffer a CCV of $\Omega(\sqrt{T})$ even in this case.

Remark 5 One pertinent question at this point is: what is $c^{\star}$, and why should the CCV for a problem instance necessarily depend on it? $c^{\star}$ corresponds to the minimum angular width of the problem instance, i.e., the angular width of the 'smallest' cone containing the newly revealed constraint sets. The angular width essentially depends on the width of the convex sets in the directions perpendicular to the direction of projection, and it controls the total CCV: since successive convex constraint sets are nested (lie inside each other), the smaller the angular width, the smaller the room an algorithm has to violate the constraints in future steps. The angular width also depends on the distance between $x_{t}$ and $S_{t}$, and is potentially large when $\mathrm{dist}(x_{t},S_{t})$ is small and the diameter along the direction perpendicular to $x_{t} - b_{t}$ is large.

$c^{\star}$ is a natural and fundamental object that inherently captures the geometric difficulty in bounding the CCV. The core contribution of this paper is to formalize this by introducing the novel idea of connecting the reduction of the mean width of the convex constraint sets to the total constraint violation, which entails non-trivial convex analysis. If $c^{\star}$ is in fact small for a problem instance (e.g., the total CCV is $\Omega(\sqrt{T})$), then that instance does not have enough geometric features to extract via projections. To cover such instances, we propose the Switch algorithm next to cap the CCV at $\tilde{O}(\sqrt{T})$.

# 6 Algorithm Switch

Theorem 15 provides an instance-dependent bound on the CCV as a function of $c^{\star}$. If $c^{\star}$ is small, the CCV can be larger than $O(\sqrt{T}\log T)$, the CCV guarantee of Algorithm 1 [Sinha and Vaze, 2024]. Thus, we next marry the two algorithms, Algorithm 1 and Algorithm 2, in Algorithm 3 to provide a best-of-both-worlds result as follows.

Theorem 16 Switch (Algorithm 3) has $\text{Regret}_{[1:T]} = O(\sqrt{T})$, while $CCV_{[1:T]} = \min \left\{O\left(\sqrt{d}\left(\frac{1}{c^\star}\right)^d D\right), O(\sqrt{T}\log T)\right\}$.

# Algorithm 3 Switch

1: Input: Sequence of convex cost functions $\{f_t\}_{t=1}^T$ and constraint functions $\{g_t\}_{t=1}^T$, $G$ = a common Lipschitz constant, $d$ = dimension of the admissible set $\mathcal{X}$, $D$ = Euclidean diameter of the admissible set $\mathcal{X}$, $\mathcal{P}_{\mathcal{X}}(\cdot)$ = Euclidean projection operator onto the set $\mathcal{X}$.
2: Initialization: Set $x_{1} \in \mathcal{X}$ arbitrarily, $\mathrm{CCV}(0) = 0$.
3: For $t = 1:T$
4: If $\mathrm{CCV}(t - 1) \leq \sqrt{T} \log T$
5: Follow Algorithm 2 and update $\mathrm{CCV}(t) = \mathrm{CCV}(t - 1) + \max \{g_t(x_t),0\}$.
6: Else
7: Follow Algorithm 1 with resetting $\mathrm{CCV}(t - 1) = 0$
8: EndIf
9: EndFor
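
The control flow of Switch can be sketched as follows. The per-round step functions and the constraint evaluator are hypothetical placeholders, and, following the text's description, the switch to Algorithm 1 is taken to be permanent once the CCV budget is exhausted.

```python
import math

def switch(T, algo2_step, algo1_step, g):
    """Sketch of Algorithm 3 (Switch). algo2_step/algo1_step return the action
    x_t of the respective sub-algorithm at round t, and g(t, x) evaluates the
    constraint g_t(x); all three are hypothetical placeholders."""
    threshold = math.sqrt(T) * math.log(T)
    ccv, switched = 0.0, False
    actions = []
    for t in range(1, T + 1):
        if not switched and ccv <= threshold:
            x = algo2_step(t)
            ccv += max(g(t, x), 0.0)   # CCV(t) = CCV(t-1) + max{g_t(x_t), 0}
        else:
            switched = True            # CCV reset; Algorithm 1 runs thereafter
            x = algo1_step(t)
        actions.append(x)
    return actions

# Toy run: Algorithm 2 violates by 10 each round, so it is cut off quickly.
actions = switch(16, lambda t: "alg2", lambda t: "alg1", lambda t, x: 10.0)
print(actions.count("alg2"), actions.count("alg1"))  # 2 14
```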

Algorithm Switch should be understood as a best-of-both-worlds algorithm. In one world, the convex sets $S_{t}$ are nice, and Algorithm 2 achieves a CCV independent of $T$, or $o(\sqrt{T})$. In the other, the CCV of Algorithm 2 is large on its own, and the overall CCV is controlled by discontinuing the use of Algorithm 2 once its CCV reaches $\sqrt{T}\log T$ and switching to Algorithm 1 thereafter, which has a universal guarantee of $O(\sqrt{T}\log T)$ on its CCV.

# 7 OCS Problem

In [Sinha and Vaze, 2024], a special case of COCO, called the OCS problem, was introduced, where $f_{t} \equiv 0$ for all $t$. Essentially, with OCS, constraint satisfaction is the only objective. In [Sinha and Vaze, 2024], Algorithm 1 was shown to have a CCV of $O(\sqrt{T}\log T)$. Next, we show that Algorithm 2 has a CCV of $O(1)$ for OCS, a remarkable improvement.

Theorem 17 For solving OCS, Algorithm 2 has $CCV_{[1:T]} = O\left(d^{d/2}D\right) = O(1)$.

As discussed in [Sinha and Vaze, 2024], there are important applications of OCS, and it is important to find tight bounds on its CCV. Theorem 17 achieves this by showing that a CCV of $O(1)$ can be achieved, where the constant depends only on the dimension of the action space and the diameter. This is a fundamental improvement over the CCV bound of $O(\sqrt{T}\log T)$ from [Sinha and Vaze, 2024]. Theorem 17 is derived by using the connection between the curve obtained by successive projections on nested convex sets and self-expanded curves (Definition 20), and then using a classical result on self-expanded curves from [Manselli and Pucci, 1991].

# 8 Experimental Results

In this section, we compare the performance of Algorithm 1 and Algorithm 2 experimentally. We start by simulating both algorithms on the input that was used to prove Lemma 6. Fig. 3 numerically verifies the claim of Lemma 6 that the CCV of Algorithm 1 is $\Omega (\sqrt{T}\log T)$, while the CCV of Algorithm 2 remains constant.

# 8.1 Synthetic Data

Next, we consider a more reasonable data setup to compare the performance of Algorithm 1 and Algorithm 2. With $d = 10$, we let $f_{t}(x) = \|x - a_{t}\|_{1}$, where $a_{t}$ is a $d$-dimensional vector that is coordinate-wise uniformly distributed on $[-1,1]$ and independent across $t$. Similarly, we consider $g_{t}(x) = \max(0, w_{t}^{T} x - 0.1)$, where $w_{t}$ is a $d$-dimensional vector that is also coordinate-wise uniformly distributed on $[-1,1]$ and independent across $t$. This choice ensures that $x = 0$ is feasible for all constraints, i.e., Assumption 3 is satisfied. In Figs. 4a and 4b, we plot the regret and CCV, respectively, for Algorithm 1 and Algorithm 2, and see that Algorithm 2 outperforms Algorithm 1 in both the regret and the CCV.
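
The synthetic setup above can be generated as follows (the seed and helper name are our own choices; the text does not specify them):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed assumed; not specified in the text
d = 10

def make_round():
    """One round of the synthetic setup: f_t(x) = ||x - a_t||_1 and
    g_t(x) = max(0, w_t^T x - 0.1), with a_t, w_t ~ Unif[-1, 1]^d, i.i.d."""
    a_t = rng.uniform(-1.0, 1.0, size=d)
    w_t = rng.uniform(-1.0, 1.0, size=d)
    f_t = lambda x: float(np.abs(x - a_t).sum())
    g_t = lambda x: float(max(0.0, w_t @ x - 0.1))
    return f_t, g_t

f_1, g_1 = make_round()
print(g_1(np.zeros(d)))  # 0.0 -- x = 0 is feasible in every round (Assumption 3)
```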

![](images/df06ba1a81581c1cb8d81221d007ee7c4bc33526e1f4421da32dc8dbaf12920f.jpg)
Figure 3: Regret and CCV comparison for the input described in Lemma 6.

![](images/efa678e9bb40a9517a7b4571e3137bd8019a6e8135a6e8a266ce211f0ef6ab38.jpg)
(a) Regret comparison of Algorithm 1 and Algorithm 2

![](images/d7ea4c6a99b43ac5f79dc18b1135dab199353c8f602ef596f6d1acc72e0284b2.jpg)
(b) CCV comparison of Algorithm 1 and Algorithm 2

# 9 Conclusions

A fundamental open question for COCO is whether it is possible to simultaneously achieve $\mathcal{R}_{[1:T]} = O(\sqrt{T})$ and $\mathrm{CCV}_{[1:T]} = o(\sqrt{T})$, or even $\mathrm{CCV}_{[1:T]} = O(1)$. In this paper, we have made substantial progress towards answering this question by proposing an algorithm that exploits the geometric properties of the nested convex sets $S_{t}$, which effectively control the CCV. The state-of-the-art algorithms [Sinha and Vaze, 2024, Ferreira and Soares, 2025] incur a CCV of $\tilde{\Omega}(\sqrt{T})$ even for very simple instances, as shown in Lemma 6, and conceptually different algorithms are needed to achieve a CCV of $o(\sqrt{T})$. We propose one such algorithm and show that when the nested convex constraint sets are well structured, a CCV of $O(1)$ is achievable without losing the $O(\sqrt{T})$ regret guarantee. We also derived a bound on the CCV for general problem instances as a function of the shape of the nested convex constraint sets, the distance between them, and the diameter.

In the absence of good lower bounds, the open question remains unresolved in general; however, this paper significantly improves the conceptual understanding of the COCO problem by demonstrating that good algorithms need to exploit the geometry of the nested convex constraint sets.

# References

Jianjun Yuan and Andrew Lamperski. Online convex optimization for cumulative constraints. Advances in Neural Information Processing Systems, 31, 2018.

Rodolphe Jenatton, Jim Huang, and Cédric Archambeau. Adaptive algorithms for online convex optimization with long-term constraints. In International Conference on Machine Learning, pages 402-411. PMLR, 2016.

Mehrdad Mahdavi, Rong Jin, and Tianbao Yang. Trading regret for efficiency: online convex optimization with long term constraints. The Journal of Machine Learning Research, 13(1):2503-2528, 2012.

Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Johansson. Regret and cumulative constraint violation analysis for online convex optimization with long term constraints. In International Conference on Machine Learning, pages 11998-12008. PMLR, 2021.

Hao Yu, Michael Neely, and Xiaohan Wei. Online convex optimization with stochastic constraints. Advances in Neural Information Processing Systems, 30, 2017.

Wen Sun, Debadeepta Dey, and Ashish Kapoor. Safety-aware algorithms for adversarial contextual bandit. In International Conference on Machine Learning, pages 3280-3288. PMLR, 2017.

Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Yiguang Hong, Tianyou Chai, and Karl H Johansson. Distributed online convex optimization with adversarial constraints: Reduced cumulative constraint violation bounds under Slater's condition. arXiv preprint arXiv:2306.00149, 2023.

Michael J Neely and Hao Yu. Online convex optimization with time-varying constraints. arXiv preprint arXiv:1702.04783, 2017.

Nikolaos Liakopoulos, Apostolos Destounis, Georgios Paschos, Thrasyvoulos Spyropoulos, and Panayotis Mertikopoulos. Cautious regret minimization: Online optimization with long-term budget constraints. In International Conference on Machine Learning, pages 3944-3952. PMLR, 2019.

Michael J Neely. Stochastic network optimization with application to communication and queueing systems. Synthesis Lectures on Communication Networks, 3(1):1-211, 2010.

Hengquan Guo, Xin Liu, Honghao Wei, and Lei Ying. Online convex optimization with hard constraints: Towards the best of two worlds and beyond. Advances in Neural Information Processing Systems, 35:36426-36439, 2022.

Abhishek Sinha and Rahul Vaze. Optimal algorithms for online convex optimization with adversarial constraints. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=TxffvJMnBy.

Ricardo N. Ferreira and Cláudia Soares. Optimal bounds for adversarial constrained online convex optimization, 2025. URL https://arxiv.org/abs/2503.13366.

Jordan Lekeufack and Michael I. Jordan. An optimistic algorithm for online convex optimization with adversarial constraints, 2025. URL https://arxiv.org/abs/2412.08060.

Tianyi Chen and Georgios B Giannakis. Bandit convex optimization for scalable and dynamic IoT management. IEEE Internet of Things Journal, 6(1):1276-1286, 2018.

Xuanyu Cao and K. J. Ray Liu. Online convex optimization with time-varying constraints and bandit feedback. IEEE Transactions on Automatic Control, 64(7):2665-2680, 2018.

Rahul Vaze. On dynamic regret and constraint violations in constrained online convex optimization. In 2022 20th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt), pages 9-16, 2022. doi: 10.23919/WiOpt56218.2022.9930613.

Qingsong Liu, Wenfei Wu, Longbo Huang, and Zhixuan Fang. Simultaneously achieving sublinear regret and constraint violations for online convex optimization with time-varying constraints. ACM SIGMETRICS Performance Evaluation Review, 49(3):4-5, 2022.

Nikhil Bansal, Martin Böhm, Marek Eliáš, Grigorios Koumoutsos, and Seeun William Umboh. Nested convex bodies are chaseable. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1253-1260. SIAM, 2018.

C. J. Argue, Sébastien Bubeck, Michael B Cohen, Anupam Gupta, and Yin Tat Lee. A nearly-linear bound for chasing nested convex bodies. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 117-122. SIAM, 2019.

Sébastien Bubeck, Bo'az Klartag, Yin Tat Lee, Yuanzhi Li, and Mark Sellke. Chasing nested convex bodies nearly optimally. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1496-1508. SIAM, 2020.

Elad Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, page 287, 2012.

Hao Yu and Michael J Neely. A low complexity algorithm with $O(\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. arXiv preprint arXiv:1604.02218, 2016.

Paolo Manselli and Carlo Pucci. Maximum length of steepest descent curves for quasi-convex functions. Geometriae Dedicata, 38(2):211-227, 1991.

Harold Gordon Eggleston. Convexity, 1966.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: We provide complete theorem statements and proofs of all claims.

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: Our result crucially makes use of the feasibility assumption (Assumption 3), which is universally used in the COCO literature. In the absence of good lower bounds, the problem considered in the paper remains open in full generality.

Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: We clearly state the assumptions under which our theoretical results hold.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

# 4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

# 5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
# 8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

# 9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: This paper deals with fundamental optimization theory and conforms with the NeurIPS Code of Ethics.

Guidelines:

- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

# 10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [NA]

Justification: This is a theoretical paper and the authors do not see any immediate direct societal impact of this paper.

Guidelines:

- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: This theoretical paper does not pose any such risks.

Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

# 13. New assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

# 14. Crowdsourcing and research with human subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.

# 16. Declaration of LLM usage

Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Guidelines:

- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.

# 10 Comparison with [Lekeufack and Jordan, 2025]

Remark 6 [Lekeufack and Jordan, 2025] consider the COCO problem when predictions about both the cost functions $f_{t}$'s and the constraint functions $g_{t}$'s are available. They show that if the predictions are perfect, $O(1)$ regret and CCV is achievable, while if the predictions are totally wrong, in the worst case the regret and CCV are at most as bad as the result of [Sinha and Vaze, 2024]. An intermediate range of results is also obtained depending on the quality of the predictions. Essentially, [Lekeufack and Jordan, 2025] use a prediction wrapper over the algorithm of [Sinha and Vaze, 2024] to derive their guarantee.

In this paper, however, we do not assume any predictions, and we solve the COCO problem with worst-case input, similar to all the prior work listed in Table 1. Moreover, the presented algorithm is conceptually different from [Sinha and Vaze, 2024], and for the first time shows that $O(1)$ or instance-dependent CCV is possible while retaining $O(\sqrt{T})$ regret, which is not the case with prior work even for $d = 1$.

Thus, the setting of [Lekeufack and Jordan, 2025] is completely different and not directly comparable with our results.

# 11 Proof of Lemma 7

Proof: From the convexity of the $f_{t}$'s, for $x^{\star}$ satisfying Assumption (3), we have

$$
f_{t}(x_{t}) - f_{t}\left(x^{\star}\right) \leq \nabla f_{t}(x_{t})^{T}\left(x_{t} - x^{\star}\right).
$$

From the choice of Algorithm 2 for $x_{t + 1}$, we have

$$
\begin{array}{l} \left|\left| x_{t + 1} - x^{\star} \right|\right|^{2} = \left|\left| \mathcal{P}_{S_{t}}(y_{t}) - x^{\star} \right|\right|^{2} \\ \stackrel{(a)}{\leq} ||y_{t} - x^{\star}||^{2} \\ = \left\| \mathcal{P}_{S_{t - 1}}\left(x_{t} - \eta_{t} \nabla f_{t}\left(x_{t}\right)\right) - x^{\star} \right\|^{2} \\ \stackrel{(b)}{\leq} \left|\left| \left(x_{t} - \eta_{t} \nabla f_{t}\left(x_{t}\right)\right) - x^{\star} \right|\right|^{2}, \\ \end{array}
$$

where inequalities $(a)$ and $(b)$ follow since $x^{\star}\in S_{t}$ for all $t$. Hence

$$
\left|\left| x_{t + 1} - x^{\star} \right|\right|^{2} \leq \left|\left| x_{t} - x^{\star} \right|\right|^{2} + \eta_{t}^{2}\left|\left| \nabla f_{t}(x_{t}) \right|\right|^{2} - 2\eta_{t} \nabla f_{t}(x_{t})^{T}(x_{t} - x^{\star}),
$$

and, since $||\nabla f_{t}(x_{t})|| \leq G$, rearranging gives

$$
2\nabla f_{t}(x_{t})^{T}(x_{t} - x^{\star}) \leq \frac{||x_{t} - x^{\star}||^{2} - ||x_{t + 1} - x^{\star}||^{2}}{\eta_{t}} + \eta_{t} G^{2}.
$$

Summing this over $t = 1$ to $T$, we get

$$
\begin{array}{l} 2\sum_{t = 1}^{T}\left(f_{t}\left(x_{t}\right) - f_{t}\left(x^{\star}\right)\right) \leq 2\sum_{t = 1}^{T} \nabla f_{t}(x_{t})^{T}\left(x_{t} - x^{\star}\right) \\ \leq \sum_{t = 1}^{T} \frac{||x_{t} - x^{\star}||^{2} - ||x_{t + 1} - x^{\star}||^{2}}{\eta_{t}} + \sum_{t = 1}^{T} \eta_{t} G^{2} \\ \leq D^{2}\frac{1}{\eta_{T}} + G^{2}\sum_{t = 1}^{T}\eta_{t} \\ \leq O(DG\sqrt{T}), \\ \end{array}
$$

where the final inequality follows by choosing $\eta_t = \frac{D}{G\sqrt{t}}$.
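
As a quick numerical sanity check of the last step (our own illustration, not part of the paper's argument; the values of $D$, $G$ and $T$ below are arbitrary choices), one can evaluate the bound $D^2/\eta_T + G^2\sum_{t=1}^T \eta_t$ with $\eta_t = D/(G\sqrt{t})$ and confirm it stays below $3DG\sqrt{T}$, using $\sum_{t=1}^T 1/\sqrt{t} \leq 2\sqrt{T}$:

```python
import math

def telescoped_bound(D, G, T):
    """Evaluate D^2 / eta_T + G^2 * sum_t eta_t with eta_t = D / (G * sqrt(t))."""
    etas = [D / (G * math.sqrt(t)) for t in range(1, T + 1)]
    return D * D / etas[-1] + G * G * sum(etas)

# Since sum_{t=1}^T 1/sqrt(t) <= 2*sqrt(T), the bound is at most 3*D*G*sqrt(T).
D, G, T = 2.0, 3.0, 10_000
assert telescoped_bound(D, G, T) <= 3 * D * G * math.sqrt(T)
```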

# 12 Proof of Lemma 8 and Lemma 9

Proof: [Proof of Lemma 8] Recall the definition that $x_{t} \in \partial S_{t-1}$ and $b_{t} = \mathcal{P}_{S_{t}}(x_{t}) \in S_{t}$ from (5). Let $||x_{t} - b_{t}|| = r$. Then, since all the $S_{t}$'s are spheres, at least along one of the $d$ orthogonal canonical basis vectors, $\text{diameter}(S_{t}) \leq \text{diameter}(S_{t-1}) - \frac{r}{\sqrt{d}}$. Since the diameter along each of the $d$ axes is at most $D$, we get the result. We remark that the proof is short and elementary, which should be seen as a strength.

Proof: [Proof of Lemma 9] The proof is identical to that of Lemma 8.

# 13 Preliminaries for Bounding the CCV in Theorem 12 and Theorem 17

Let $K_{1},\ldots ,K_{T}$ be nested (i.e., $K_{1}\supseteq K_{2}\supseteq K_{3}\supseteq \dots \supseteq K_{T}$) bounded convex subsets of $\mathbb{R}^d$.

Definition 18 Let $\sigma_1 \in K_1$, and $\sigma_{t+1} = \mathcal{P}_{K_{t+1}}(\sigma_t)$ for $t = 1, \ldots, T-1$. Then the curve

$$
\underline{\sigma} = \left\{\left(\sigma_{1}, \sigma_{2}\right), \left(\sigma_{2}, \sigma_{3}\right), \dots , \left(\sigma_{T - 1}, \sigma_{T}\right) \right\}
$$

is called the projection curve on $K_{1},\ldots ,K_{T}$.

We are interested in upper bounding the quantity

$$
\Sigma = \max_{\underline{\sigma}} \sum_{t = 1}^{T - 1} \left|\left| \sigma_{t} - \sigma_{t + 1} \right|\right|. \tag{6}
$$
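
To make Definition 18 concrete, the following minimal sketch (not from the paper; the nested sets are taken to be shrinking Euclidean balls in $d = 2$, and the function names are our own) builds a projection curve and measures its total length:

```python
import math

def project_ball(p, center, radius):
    """Euclidean projection of a 2-D point p onto the ball B(center, radius)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    n = math.hypot(dx, dy)
    if n <= radius:
        return p
    return (center[0] + radius * dx / n, center[1] + radius * dy / n)

def projection_curve_length(T=200, shrink=0.99):
    """Length of the projection curve of Definition 18 on nested 2-D balls."""
    # Each ball stays inside the previous one: |c' - c| + r' <= r (nested).
    balls, cx, r = [], 0.0, 1.0
    for _ in range(T):
        balls.append((cx, 0.0, r))
        cx, r = cx + r * (1.0 - shrink), shrink * r
    sigma = (0.0, 1.0)  # sigma_1: a boundary point of K_1 = B(0, 1)
    length = 0.0
    for bx, by, rad in balls:
        nxt = project_ball(sigma, (bx, by), rad)
        length += math.hypot(nxt[0] - sigma[0], nxt[1] - sigma[1])
        sigma = nxt
    return length

# Consistent with Lemma 19 for d = 2: length <= d^{d/2} * diameter(K_1) = 2 * 2 = 4.
assert 0 < projection_curve_length() <= 4.0
```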

Lemma 19 For a projection curve $\underline{\sigma}$, $\Sigma \leq d^{d/2} \, \text{diameter}(K_1)$.

To prove the result we need the following definition.

Definition 20 A curve $\gamma : I \to \mathbb{R}^d$ is called self-expanded if, for every $t$ where $\gamma'(t)$ exists, we have

$$
\langle \gamma^{\prime}(t), \gamma(t) - \gamma(u) \rangle \geq 0
$$

for all $u \in I$ with $u \leq t$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. In words, a curve $\gamma$ starting at a point $x_0$ is self-expanded if, for every $x \in \gamma$ at which the tangent line $\mathsf{T}$ exists, the arc (sub-curve) $(x_0, x)$ is contained in one of the two half-spaces bounded by the hyperplane through $x$ orthogonal to $\mathsf{T}$.

For self-expanded curves the following classical result is known.

Theorem 21 (Manselli and Pucci [1991]) For any self-expanded curve $\gamma$ belonging to a closed bounded convex set of $\mathbb{R}^d$ with diameter $D$, its total length is at most $O(d^{d/2}D)$.

Proof: [Proof of Lemma 19] From Definition 18, the projection curve is

$$
\underline{\sigma} = \left\{\left(\sigma_{1}, \sigma_{2}\right), \left(\sigma_{2}, \sigma_{3}\right), \dots , \left(\sigma_{T - 1}, \sigma_{T}\right) \right\}.
$$

Let the reverse curve be $\underline{r} = \{r_t\}_{t=0,\dots,T-2}$, where $r_t = (\sigma_{T-t}, \sigma_{T-t-1})$. Thus we are reading $\underline{\sigma}$ backwards and calling it $\underline{r}$. Note that since $\sigma_t$ is the projection of $\sigma_{t-1}$ on $K_t$, each piece-wise linear segment $(\sigma_t, \sigma_{t+1})$ is a straight line and hence differentiable except at its end points. Moreover, since each $\sigma_t$ is obtained by projecting $\sigma_{t-1}$ onto $K_t$ and $K_{t+1} \subseteq K_t$, the projection hyperplane $F_t$ that passes through $\sigma_t = \mathcal{P}_{K_t}(\sigma_{t-1})$ and is perpendicular to $\sigma_t - \sigma_{t-1}$ separates the two sub-curves $\{(\sigma_1, \sigma_2), (\sigma_2, \sigma_3), \ldots, (\sigma_{t-1}, \sigma_t)\}$ and $\{(\sigma_t, \sigma_{t+1}), (\sigma_{t+1}, \sigma_{t+2}), \ldots, (\sigma_{T-1}, \sigma_T)\}$.

Thus, for each segment $r_{\tau}$, at each point where it is differentiable, the sub-curve $r_1, \ldots, r_{\tau - 1}$ lies on one side of the hyperplane that passes through the point and is perpendicular to $r_{\tau}$. Thus, we conclude that the curve $\underline{r}$ is self-expanded.

As a result, Theorem 21 implies that the length of $\underline{r}$ is at most $O(d^{d/2}\mathrm{diameter}(K_1))$, and the result follows since the length of $\underline{r}$ is the same as that of $\underline{\sigma}$, which is $\Sigma$.

# 14 Proof of Theorem 12

Proof: Recall that $d = 2$, and the definition of $F_{t}$ from Definition 10. Let the center be $c = \mathcal{P}_{S_1}(x_1)$. Let $t_{\mathrm{orth}}$ be the earliest $t$ for which $\angle (F_t,F_1) = \pi$.

Initialize $\kappa = 1$, $s(1) = 1$, $\tau(1) = 1$.

BeginProcedure

Step 1: Definition of Phase $\kappa$. Consider

$$
\tau(\kappa) = \arg \max_{s(\kappa) < t \leq t_{\mathrm{orth}},\ \angle (F_{s(\kappa)}, F_{t}) \leq \pi / 4} t.
$$

If there is no such $\tau(\kappa)$: Phase $\kappa$ ends; define Phase $\kappa$ as Empty and set $s(\kappa + 1) = \tau(\kappa) + 1$.

Else if $\angle \left(F_{\tau(\kappa)}, F_{1}\right) = \pi$: Exit.

Else: set $s(\kappa + 1) = \tau(\kappa)$.

End If

Increment $\kappa = \kappa + 1$, and go to Step 1.

EndProcedure

Example 22 To better understand the definition of phases, consider Fig. 5, where the largest $t$ for which the angle between $F_{t}$ and $F_{1}$ is at most $\pi /4$ is 3. Thus, $\tau (1) = 3$, i.e., phase 1 explores till time $t = 3$ and then ends. The starting hyperplane to consider in phase 2 is $s(2) = 3$, and given that the angle between $F_{3}$ and the next hyperplane $F_{4}$ is more than $\pi /4$, phase 2 is empty and ends by exploring till $t = 4$. The starting hyperplane to consider in phase 3 is $s(3) = 4$, and the process goes on. The first time $t$ such that the angle between $F_{1}$ and $F_{t}$ is $\pi$ is $t = 6$, and thus $t_{\text{orth}} = 6$, and the process stops at time $t = 6$. This also implies that $S_{6} \subset F_{1}$. Since the $S_{t}$'s are nested, for all $t \geq 6$, $S_{t} \subset F_{1}$. Hence the total CCV after $t \geq t_{\text{orth}}$ is at most $GD$.

The main idea behind defining phases is to partition the whole space into empty and non-empty regions, where in each non-empty region the starting and ending hyperplanes have an angle of at most $\pi /4$, while in an empty phase the starting and ending hyperplanes have an angle of at least $\pi /4$. Thus, we get the following simple result.

Lemma 23 For $d = 2$, there can be at most 4 non-empty and 4 empty phases.

The proof is immediate from the definition of the phases, since any consecutively occurring pair of non-empty and empty phases exhausts an angle of at least $\pi /4$.

Remark 7 Since we are in $d = 2$ dimensions, for all $t \geq t_{\text{orth}}$, the movement is along the hyperplane $F_1$ and thus the resulting constraint violation after time $t \geq t_{\text{orth}}$ is at most $G$. Thus, in the phase definition above, we have only considered time till $t_{\text{orth}}$, and we only need to upper bound the CCV till time $t_{\text{orth}}$.

![](images/aaf858a9ffe914199cfd31f485fe28a718934481b016516fba4dac45a9235e5d.jpg)
Figure 5: Figure corresponding to Example 22.

We next define the following required quantities.

Definition 24 With respect to the quantities defined for Algorithm 2, for a non-empty phase $\kappa$, let

$$
r_{\max}(\kappa) = \max_{s(\kappa) < t \leq \tau(\kappa)} ||y_{t} - \mathsf{c}|| \quad \text{and} \quad t^{\star}(\kappa) = \arg \max_{s(\kappa) < t \leq \tau(\kappa)} ||y_{t} - \mathsf{c}||.
$$

$t^{\star}(\kappa)$ is the time index belonging to phase $\kappa$ for which $y_{t}$ is the farthest.

Definition 25 A non-empty phase $\kappa$ consists of time slots $\mathcal{T}(\kappa) = [\tau (\kappa -1),\tau (\kappa)]$ and the angle $\angle (F_{t_1},F_{t_2})\leq \pi /4$ for all $t_1,t_2\in \mathcal{T}(\kappa)$. Using Definition 24, we partition $\mathcal{T}(\kappa)$ as $\mathcal{T}(\kappa) = \mathcal{T}^{-}(\kappa)\cup \mathcal{T}^{+}(\kappa)$, where $\mathcal{T}^{-}(\kappa) = [\tau (\kappa -1) + 1,t^{\star}(\kappa) + 1]$ and $\mathcal{T}^{+}(\kappa) = [t^{\star}(\kappa) + 2,\tau (\kappa)]$.

Thus, $\mathcal{T}(\kappa)$ and $\mathcal{T}(\kappa + 1)$ have one common time slot.

Definition 26 [Definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}^{-}(\kappa)$] Let $z_{t^{\star}(\kappa) + 1} = x_{t^{\star}(\kappa) + 1}$. For $t \in \mathcal{T}^{-}(\kappa) \setminus \{t^{\star}(\kappa) + 1\}$, define $z_{t}(\kappa)$ inductively as follows: $z_{t}(\kappa)$ is the pre-image of $z_{t + 1}(\kappa)$ on $F_{t - 1}$ such that the projection of $z_{t}(\kappa)$ on $F_{t}$ is $z_{t + 1}(\kappa)$.

Definition 27 [Definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}^{+}(\kappa)$] For $t \in \mathcal{T}^{+}(\kappa)$, define $z_{t}(\kappa)$ inductively as follows: $z_{t}(\kappa)$ is the projection of $z_{t-1}(\kappa)$ on $F_{t-1}$.

See Fig. 6 for a visual illustration of $t^\star (\kappa)$ and $z_{t}(\kappa)$.

![](images/6cdaad4667399e559fa14ec4ba750510eb46786c035f994a932b76af603f57c1.jpg)
Figure 6: Illustration of the definition of $z_{t}(\kappa)$ for $t \in \mathcal{T}(\kappa)$. In this example, for phase 1, $t^{\star}(1) = 3$ since the distance of $y_{3}$ from $c$ is the farthest for phase 1, which consists of time slots $\mathcal{T}(1) = \{2,3\}$. Hence $z_{t^{\star}(1) + 1}(1) = x_{4}$. For $t \in \mathcal{T}(1) \setminus \{t^{\star}(1) + 1\}$, the $z_{t}(1)$ are such that $z_{t + 1}(1)$ is the projection of $z_{t}(1)$ onto $F_{t}$.

The main idea behind defining the $z_{t}(\kappa)$'s is as follows. For each non-empty phase, we construct a projection curve (Definition 18) using the points $z_{t}$ such that the length of the projection curve upper bounds the CCV of Algorithm 2 (shown in Lemma 33), and then use Lemma 19 to upper bound the length of the projection curve.

Definition 28 [Definition of $S_t'$ for a non-empty phase $\kappa$] $S_{t^{\star}(\kappa) + 1}' = S_{t^{\star}(\kappa) + 1}$. For $t \in \mathcal{T}^{-}(\kappa) \setminus \{t^{\star}(\kappa) + 1\}$, $S_t'$ is the convex hull of $z_{t + 1}(\kappa) \cup S_t \cup S_{t + 1}'(\kappa)$. For $t \in \mathcal{T}^{+}(\kappa)$, $S_t' = S_t$. See Fig. 7.

Lemma 29 For a non-empty phase $\kappa$, for any $t \in \mathcal{T}(\kappa)$, $S_{t+1}' \subseteq S_t'$, i.e., they are nested.

Definition 30 For a non-empty phase, $\chi (\kappa) = S_{\tau (\kappa -1)}^{\prime}\cap \mathcal{H}_{\tau (\kappa)}^{+}$, where $\mathcal{H}_{\tau (\kappa)}^{+}$ has been defined in Definition 10.

Definition 31 [New violations for $t \in \mathcal{T}(\kappa)$] For a non-empty phase $\kappa$, for $t \in \mathcal{T}(\kappa) \setminus \tau(\kappa - 1)$, let

$$
v_{t}(\kappa) = \left|\left| z_{t}(\kappa) - z_{t - 1}(\kappa) \right|\right|.
$$

![](images/4eab8e75afaa8d49139033ffa097b3bf7310f0425e0efe524108ec22d3b6a1ab.jpg)
Figure 7: Definition of the $S_{t}'$'s, where $U_{t}$ are the extra regions that are added to $S_{t}$ to get $S_{t}'$.

Lemma 32 For each non-empty phase $\kappa$, all the $z_{t}(\kappa)$'s for $t \in \mathcal{T}(\kappa)$ belong to $\mathcal{B}(\mathsf{c}, \sqrt{2}D)$, where $\mathcal{B}(c, r)$ is the ball with radius $r$ centered at $c$. In other words, $\chi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$.

Proof: Recall that for a non-empty phase $\kappa$, $\mathcal{T}(\kappa) = \mathcal{T}^{-}(\kappa) \cup \mathcal{T}^{+}(\kappa)$. We first argue about $t \in \mathcal{T}^{-}(\kappa)$. By definition, $z_{t^{\star}(\kappa) + 1} = x_{t^{\star}(\kappa) + 1}$ and $x_{t^{\star}(\kappa) + 1} \in S_{t^{\star}(\kappa)}$. Thus, $z_{t^{\star}(\kappa) + 1} \in \mathcal{B}(\mathsf{c}, \sqrt{2} D)$. Next we argue for $t \in \mathcal{T}^{-}(\kappa) \setminus \{t^{\star}(\kappa) + 1\}$. Recall that the diameter of $\mathcal{X}$ is $D$, and that $y_t \in S_{t-1}$ from Algorithm 2. Thus, for any non-empty phase $\kappa$, the distance from $\mathsf{c}$ to the farthest $y_t$ belonging to phase $\kappa$ is at most $D$, i.e., $r_{\max}(\kappa) \leq D$. Let the pre-image of $z_{t^{\star}(\kappa) + 1}(\kappa)$ onto $F_{s(\kappa)}$ (the base hyperplane with respect to which all hyperplanes have an angle of at most $\pi/4$ in phase $\kappa$) be $p(\kappa)$, such that the projection of $p(\kappa)$ onto $F_{s(\kappa)}$ is $z_{t^{\star}(\kappa) + 1}(\kappa)$. From the definition of any non-empty phase, the angle between $F_{s(\kappa)}$ and $F_t$ for $t \in \mathcal{T}(\kappa)$ is at most $\pi/4$. Thus, the distance of $p(\kappa)$ from $\mathsf{c}$ is at most $\sqrt{2} D$.

Consider the 'triangle' $\Pi(\kappa)$ that is the convex hull of $\mathsf{c}$, $z_{t^{\star}(\kappa) + 1}(\kappa)$ and $p(\kappa)$. Given that the angle between $F_{t^{\star}(\kappa)}$ and $F_{t^{\star}(\kappa) - 1}$ is at most $\pi /4$, the argument above implies that $z_{t}(\kappa)\in \Pi (\kappa)$ for $t = t^{\star}(\kappa)$. For $t = t^{\star}(\kappa) - 1$, $z_{t}(\kappa)\in F_{t - 1}$ is the projection of $z_{t - 1}(\kappa)$ onto $S_{t - 1}^{\prime}$. This implies that the distance of $z_{t}(\kappa)$ (for $t = t^{\star}(\kappa) - 1$) from $\mathsf{c}$ is at most

$$
\frac{D}{\cos\left(\alpha_{t, t^{\star}(\kappa)}\right) \cos\left(\alpha_{t^{\star}(\kappa), t^{\star}(\kappa) + 1}\right)},
$$

where $\alpha_{t_1,t_2}$ is the angle between $F_{t_1}$ and $F_{t_2}$. From the monotonicity of the angles $\theta_t$ (Definition 11) and the definition of a non-empty phase, we have that $\alpha_{t,t^{\star}(\kappa)} + \alpha_{t^{\star}(\kappa),t^{\star}(\kappa) + 1} \leq \pi /4$ with $\alpha_{t,t^{\star}(\kappa)} \geq 0$ and $\alpha_{t^{\star}(\kappa),t^{\star}(\kappa) + 1} \geq 0$. Next, we appeal to the inequality

$$
\cos (A + B) \leq \cos (A) \cos (B) \tag{7}
$$

for $A, B \geq 0$ with $A + B \leq \pi / 4$, to claim that $z_{t}(\kappa) \in \Pi(\kappa)$ for $t = t^{\star}(\kappa) - 1$.
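
The inequality (7) is elementary, since $\cos(A+B) = \cos(A)\cos(B) - \sin(A)\sin(B)$ and $\sin(A)\sin(B) \geq 0$ in this range; the sketch below (our own illustration, with an arbitrary grid resolution) verifies it numerically on a grid of admissible pairs $(A, B)$:

```python
import math

def cos_product_inequality_holds(n=200):
    """Grid-check inequality (7): cos(A+B) <= cos(A)cos(B) for A, B >= 0, A+B <= pi/4."""
    for i in range(n + 1):
        for j in range(n + 1 - i):  # i + j <= n ensures A + B <= pi/4
            A = (math.pi / 4) * i / n
            B = (math.pi / 4) * j / n
            # cos(A+B) = cos(A)cos(B) - sin(A)sin(B), and sin(A)sin(B) >= 0 here
            if math.cos(A + B) > math.cos(A) * math.cos(B) + 1e-12:
                return False
    return True

assert cos_product_inequality_holds()
```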

Iteratively using this argument while invoking inequality (7) gives that for any $t \in \mathcal{T}^{-}(\kappa)$, $z_{t}(\kappa)$ belongs to $\Pi(\kappa)$. Since $\Pi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$, we have the claim for all $t \in \mathcal{T}^{-}(\kappa)$.

By definition, the $z_{t}(\kappa)$ for $t \in \mathcal{T}^{+}(\kappa)$ belong to $S_{t-1} \subseteq S_{1}$. Thus, their distance from $\mathsf{c}$ is at most $D$.

Lemma 33 For each non-empty phase $\kappa$ and for $t \in \mathcal{T}(\kappa)$, the violation $v_{t}(\kappa) \geq \mathrm{dist}(x_{t}, S_{t})$, where $\mathrm{dist}(x_{t}, S_{t})$ is the original violation.

Proof: By construction of any non-empty phase $\kappa$, for $t \in \mathcal{T}(\kappa)$ both $x_{t}$ and $z_{t}(\kappa)$ belong to $F_{t-1}$. Moreover, by construction, the distance of $z_{t}(\kappa)$ from $\mathsf{c}$ is at least as large as the distance of $x_{t}$ from $\mathsf{c}$. Thus, using the monotonicity property of the angles $\theta_{t}$ (Definition 11) we get the result. See Fig. 6 for a visual illustration.

For each non-empty phase $\kappa$, by definition, the curve defined by the sequence $z_{t}(\kappa)$ for $t \in \mathcal{T}(\kappa)$ is a projection curve (Definition 18) on the sets $S_{t}^{\prime}(\kappa)$ (which are nested by Lemma 29). Moreover, for all $t \in \mathcal{T}(\kappa)$, the set $S_{t}^{\prime}(\kappa) \subset \chi(\kappa)$, which is a bounded convex set. Thus, for $d = 2$, from Lemma 19 the length of the curve $\underline{z}(\kappa) = \{(z_{t}(\kappa), z_{t+1}(\kappa))\}_{t \in \mathcal{T}(\kappa)}$ satisfies

$$
\sum_{t \in \mathcal{T}(\kappa)} v_{t}(\kappa) \leq 2 \, \text{diameter}(\chi(\kappa)). \tag{8}
$$

By definition, the number of non-empty phases till time $t_{\mathrm{orth}}$ is at most 4. Moreover, in each non-empty phase, $\chi(\kappa) \subseteq \mathcal{B}(\mathsf{c}, \sqrt{2}D)$ by Lemma 32.

Thus, from (8), we have that

$$
\sum_{\text{Phase } \kappa \text{ is non-empty}} \sum_{t \in \mathcal{T}(\kappa)} v_{t}(\kappa) \leq \sum_{\text{Phase } \kappa \text{ is non-empty}} 2 \, \mathrm{diameter}(\chi(\kappa)) \leq 8 \, \mathrm{diameter}(\mathcal{B}(\mathsf{c}, \sqrt{2}D)) \leq O(D). \tag{9}
$$

Using Lemma 33, we get

$$
\sum_{\text{Phase } \kappa \text{ is non-empty}} \sum_{t \in \mathcal{T}(\kappa)} \mathrm{dist}\left(x_{t}, S_{t}\right) \leq O(D). \tag{10}
$$

For any empty phase, the constraint violation is the length of the line segment $(x_{t},\mathcal{P}_{S_{t}}(x_{t}))$ (Algorithm 2) crossing it, which is a straight line of length at most $O(D)$. Moreover, the total number of empty phases (Lemma 23) is a constant. Thus, the length of the curve $(x_{t},\mathcal{P}_{S_{t}}(x_{t}))$ for Algorithm 2 corresponding to all empty phases is at most $O(D)$.

Recall from (4) that the CCV is at most $G$ times $\mathrm{dist}(x_t, S_t)$. Thus, from (10) we get that the total violation incurred by Algorithm 2 corresponding to non-empty phases is at most $O(GD)$, while that corresponding to empty phases is at most $O(GD)$. Finally, accounting for the very first violation $\mathrm{dist}(x_1, S_1) \leq D$ and the fact that the CCV after time $t \geq t_{\mathrm{orth}}$ (Remark 7) is at most $GD$, we get that the total constraint violation $\mathrm{CCV}_{[1:T]}$ for Algorithm 2 is at most $O(GD)$.

![](images/0334d9f4d57b1e9c81a1aaabbc00fbab0180623bbb05027302fa2439be4a0b1d.jpg)

810
+ # 15 Proof of Theorem 14
811
+
812
+ Proof: We need the following preliminaries.
813
+
814
+ Definition 34 Let $K$ be a non-empty convex bounded set in $\mathbb{R}^d$ . Let $u$ be a unit vector, and $\ell_u$ a line through the origin parallel to $u$ . Let $K_u$ be the orthogonal projection of $K$ onto $\ell_u$ , with length $|K_u|$ . The mean width of $K$ is defined as
+
+ $$
+ W(K) = \frac{1}{V_d} \int_{\mathbb{S}_1^d} |K_u| \, du, \tag{11}
+ $$
+
+ where $\mathbb{S}_1^d$ is the unit sphere in $d$ dimensions and $V_{d}$ its $(d - 1)$ -dimensional Lebesgue measure.
+
+ The following is immediate.
+
+ $$
+ 0 \leq W(K) \leq \operatorname{diameter}(K). \tag{12}
+ $$
+
+ Lemma 35 (Eggleston [1966]) For $d = 2$,
+
+ $$
+ W(K) = \frac{\operatorname{Perimeter}(K)}{\pi}.
+ $$
+
+ ![](images/1766d3f023b2ee771b59f13e4d2b47a58488915e4b69e42ef2b3fc69e7ae124.jpg)
+ Figure 8: Figure representing the cone $C_{w_t}(c_t)$ that contains the convex hull of $m_t$ and $S_t$ with respect to the unit vector $w_t$ . $u$ is a unit vector perpendicular to $H_u$ , a hyperplane that supports $\mathcal{C}_t$ at $m_t$ such that $\mathcal{C}_t \cap H_u = \{m_t\}$ and $u^T(x_t - m_t) \geq 0$ .
+
+ Lemma 35 implies that, in general, $W(K) \neq W(K_1) + W(K_2)$ even if $K_1 \cup K_2 = K$ and $K_1 \cap K_2 = \emptyset$ .
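Definition 34 and Lemma 35 admit a quick numerical sanity check (not part of the paper): for $d = 2$, averaging the projection length $|K_u|$ over uniformly random unit directions estimates $W(K)$, which should match $\mathrm{Perimeter}(K)/\pi$. A minimal sketch in Python; the polygon and sample count are arbitrary choices:

```python
import math
import random

def mean_width_2d(vertices, n_dirs=100_000, seed=0):
    # Monte Carlo estimate of the mean width W(K) of a convex polygon K:
    # W(K) = (1/V_2) * integral over the unit circle of |K_u| du, which is
    # just the average projection length over uniformly random directions.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        ux, uy = math.cos(theta), math.sin(theta)
        projs = [ux * px + uy * py for (px, py) in vertices]
        total += max(projs) - min(projs)  # |K_u|: length of the projection
    return total / n_dirs

def perimeter(vertices):
    # Perimeter of the polygon with the given vertices in order.
    n = len(vertices)
    return sum(math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n))

# Unit square: Lemma 35 predicts W(K) = Perimeter(K) / pi = 4 / pi ~ 1.273.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(mean_width_2d(square), perimeter(square) / math.pi)
```

The same averaging makes (12) transparent: every projection length $|K_u|$ is between $0$ and $\mathrm{diameter}(K)$, hence so is their mean.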
+
+ Recall from (5) that $x_{t} \in \partial S_{t-1}$ and $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$ , and $m_{t}$ is the mid-point of $x_{t}$ and $b_{t}$ , i.e. $m_{t} = \frac{x_{t} + b_{t}}{2}$ . Moreover, the convex sets $S_{t}$ 's are nested, i.e., $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ . To prove Theorem 14 we will bound the rate at which $W(S_{t})$ (Definition 34) decreases as a function of the length $||x_{t} - b_{t}||$ .
+
+ From Definition 13, recall that $\mathcal{C}_t$ is the convex hull of $\{m_t\} \cup S_t$ . We also need to define $\mathcal{C}_t^-$ as the convex hull of $\{x_{t}\} \cup S_{t}$ . Since $S_{t}\subseteq \mathcal{C}_{t}$ and $\mathcal{C}_t^{-}\subseteq S_{t - 1}$ (since $S_{t - 1}$ is convex and $x_{t}\in S_{t - 1}$ ), we have
+
+ $$
+ W\left(S_t\right) - W\left(S_{t-1}\right) \leq W\left(\mathcal{C}_t\right) - W\left(\mathcal{C}_t^-\right). \tag{13}
+ $$
+
+ Definition 36 $\Delta_t = W(\mathcal{C}_t) - W(\mathcal{C}_t^-)$ .
+
+ The main ingredient of the proof is the following lemma, which bounds $\Delta_t$ ; its proof is provided after completing the proof of Theorem 14.
+
+ Lemma 37
+
+ $$
+ \Delta_t \leq -V_{d-1} \frac{||x_t - b_t||}{2 V_d (d-1)} \left(c_t^{\star}\right)^d,
+ $$
+
+ where $c_t^{\star}$ has been defined in Definition 13.
+
+ Recalling that $c^{\star} = \min_{t} c_{t}^{\star}$ from Definition 13, and combining Lemma 37 with (12) and (13), we get that
+
+ $$
+ \sum_{t=1}^{T} ||x_t - b_t|| \leq \frac{2 V_d (d-1)}{V_{d-1}} \left(\frac{1}{c^{\star}}\right)^d \operatorname{diameter}\left(S_1\right),
+ $$
+
+ since $S_{1} \supseteq S_{2} \supseteq \dots \supseteq S_{T}$ . Recalling that $\mathrm{diameter}(S_1) \leq D$ , Theorem 14 follows.
+
+ Proof: [Proof of Lemma 37]
+
+ Let $H_{u}$ be the hyperplane perpendicular to vector $u$ . Let $\mathcal{U}_0$ be the set of unit vectors $u$ such that the hyperplane $H_{u}$ is a supporting hyperplane to $\mathcal{C}_t$ at point $m_t$ with $\mathcal{C}_t \cap H_u = \{m_t\}$ and $u^T(x_t - m_t) \geq 0$ . See Fig. 8 for reference.
+
+ Since $b_{t}$ is the projection of $x_{t}$ onto $S_{t}$ , and $m_{t}$ is the mid-point of $x_{t}$ and $b_{t}$ , for $u \in \mathcal{U}_0$ , the hyperplane $H_{u}^{\prime}$ containing $x_{t}$ and parallel to $H_{u}$ is a supporting hyperplane for $\mathcal{C}_t^-$ .
+
+ Thus, using the definition of $K_{u}$ from (11),
+
+ $$
+ \Delta_t \leq \frac{1}{V_d} \int_{\mathcal{U}_0} \left( \left| \mathcal{C}_{t,u} \right| - \left| \mathcal{C}_{t,u}^- \right| \right) du = -\frac{||x_t - b_t||}{2 V_d} \int_{\mathcal{U}_0} u^T \frac{\left(x_t - m_t\right)}{||x_t - m_t||} \, du, \tag{14}
+ $$
+
+ since $||x_{t} - m_{t}|| = ||x_{t} - b_{t}|| / 2$ .
+
+ Recall the definition of $C_{w_t^{\star}}(c_t^{\star})$ from Definition 13, which implies that $\mathcal{C}_t$ , the convex hull of $\{m_t\} \cup S_t$ , is contained in $C_{w_t^{\star}}(c_t^{\star})$ . Next, we consider $\mathcal{U}_1$ , the set of unit vectors $u$ such that the hyperplane $H_u$ is a supporting hyperplane to $C_{w_t^{\star}}(c_t^{\star})$ at point $m_t$ with $u^T(x_t - m_t) \geq 0$ . Since $\mathcal{C}_t \subseteq C_{w_t^{\star}}(c_t^{\star})$ by definition, it follows that $\mathcal{U}_1 \subseteq \mathcal{U}_0$ .
+
+ Thus, from (14),
+
+ $$
+ \Delta_t \leq -\frac{||x_t - b_t||}{2 V_d} \int_{\mathcal{U}_1} u^T \frac{\left(x_t - m_t\right)}{||x_t - m_t||} \, du. \tag{15}
+ $$
+
+ Recalling the definition of $w_{t}^{\star}$ (Definition 13), any vector $u \in \mathcal{U}_1$ can be written as
+
+ $$
+ u = \lambda u_{\perp} + \sqrt{1 - \lambda^2} \, w_t^{\star},
+ $$
+
+ where $u_{\perp}^{T} w_{t}^{\star} = 0$ , $|u_{\perp}| = 1$ , and since $u \in \mathcal{U}_1$ ,
+
+ $$
+ 0 \leq \lambda = \sqrt{1 - \left(u^T w_t^{\star}\right)^2} = u^T u_{\perp} \leq c_t^{\star}.
+ $$
+
+ Let $\mathcal{S}_{\perp} = \{u_{\perp} : |u_{\perp}| = 1, u_{\perp}^{T} w_{t}^{\star} = 0\}$ . Let $du_{\perp}$ be the $(d - 2)$ -dimensional Lebesgue measure on $\mathcal{S}_{\perp}$ .
+
+ It is easy to verify that $du = \lambda^{d-2}(1 - \lambda^2)^{-1/2} \, d\lambda \, du_{\perp}$ and hence from (15),
+
+ $$
+ \Delta_t \leq -\frac{||x_t - b_t||}{2 V_d} \int_0^{c_t^{\star}} \lambda^{d-2} \left(1 - \lambda^2\right)^{-1/2} d\lambda \int_{\mathcal{S}_{\perp}} \left( \lambda u_{\perp} + \sqrt{1 - \lambda^2} \, w_t^{\star} \right)^T \frac{\left(x_t - m_t\right)}{||x_t - m_t||} \, du_{\perp}. \tag{16}
+ $$
+
+ Note that $\int_{\mathcal{S}_{\perp}} u_{\perp} \, du_{\perp} = 0$ . Thus,
+
+ $$
+ \begin{array}{l} \Delta_t \leq -\frac{||x_t - b_t||}{2 V_d} \frac{(w_t^{\star})^T (x_t - m_t)}{||x_t - m_t||} \int_0^{c_t^{\star}} \lambda^{d-2} \left(1 - \lambda^2\right)^{-1/2} \sqrt{1 - \lambda^2} \, d\lambda \int_{\mathcal{S}_{\perp}} du_{\perp}, \\ \stackrel{(a)}{\leq} -V_{d-1} \frac{||x_t - b_t||}{2 V_d} \frac{(w_t^{\star})^T (x_t - m_t)}{||x_t - m_t||} \int_0^{c_t^{\star}} \lambda^{d-2} \, d\lambda, \\ \stackrel{(b)}{\leq} -V_{d-1} \frac{||x_t - b_t||}{2 V_d (d-1)} c_t^{\star} \left(c_t^{\star}\right)^{d-1}, \\ = -V_{d-1} \frac{||x_t - b_t||}{2 V_d (d-1)} \left(c_t^{\star}\right)^d, \tag{17} \\ \end{array}
+ $$
+
+ where $(a)$ follows since $\int_{\mathcal{S}_{\perp}} du_{\perp} = V_{d-1}$ by definition, and $(b)$ follows since $\frac{(w_t^{\star})^T (x_t - m_t)}{||x_t - m_t||} \geq c_t^{\star}$ from Definition 13.
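As an aside (not part of the proof), the change of variables above can be sanity-checked numerically for $d = 3$, where $V_3 = 4\pi$ and $V_2 = 2\pi$: integrating the constant $1$ over a hemisphere via $du = \lambda^{d-2}(1-\lambda^2)^{-1/2}\, d\lambda\, du_{\perp}$ must recover $V_d / 2$, and step $(b)$ uses the elementary integral $\int_0^{c} \lambda^{d-2}\, d\lambda = c^{d-1}/(d-1)$. A rough midpoint-rule check (the quadrature routine and sample count are my own choices):

```python
import math

def quad(f, a, b, n=200_000):
    # Midpoint-rule quadrature; adequate for these one-dimensional integrands,
    # including the integrable singularity of (1 - lam^2)^(-1/2) at lam = 1.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

d = 3
V_d = 4 * math.pi    # surface measure of the unit sphere S^2 in R^3
V_dm1 = 2 * math.pi  # length of the unit circle (the sphere in R^2)

# du = lam^{d-2} (1 - lam^2)^{-1/2} dlam du_perp: integrating 1 over one
# hemisphere (lam from 0 to 1, du_perp contributing V_{d-1}) gives V_d / 2.
hemisphere = V_dm1 * quad(lambda lam: lam**(d - 2) / math.sqrt(1 - lam**2), 0.0, 1.0)
print(hemisphere, V_d / 2)

# Elementary integral used in step (b): int_0^c lam^{d-2} dlam = c^{d-1}/(d-1).
c = 0.3
print(quad(lambda lam: lam**(d - 2), 0.0, c), c**(d - 1) / (d - 1))
```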
+
+ ![](images/84f87e779bcb48cabe8fe71549a24b36115461e64bdbb0e4db2208f6c1ccc9df.jpg)
+
+ # 16 Proof of Theorem 16
+
+ Proof: Since $\mathrm{CCV}(t)$ is a monotone non-decreasing function, let $t_{\mathrm{min}}$ be the last time until which Switch follows Algorithm 2. The regret guarantee is easy to prove. From Theorem 15, the regret until time $t_{\mathrm{min}}$ is at most $O(\sqrt{t_{\mathrm{min}}})$ . Moreover, from time $t_{\mathrm{min}}$ till $T$ , by Theorem 5, the regret of Algorithm 1 is at most $O(\sqrt{T - t_{\mathrm{min}}})$ . Thus, the overall regret for Switch is at most $O(\sqrt{T})$ .
+
+ For the CCV, with Switch, until time $t_{\mathrm{min}}$ , $\mathrm{CCV}(t_{\mathrm{min}}) \leq \sqrt{T} \log T$ . At time $t_{\mathrm{min}}$ , Switch starts to use Algorithm 1 (with the CCV reset to $\mathrm{CCV}(t_{\mathrm{min}}) = 0$ ), which has the following appealing property from (8) of Sinha and Vaze [2024]: for any $t \geq t_{\mathrm{min}}$ ,
+
+ $$
+ \Phi\left(\mathrm{CCV}(t)\right) + \operatorname{Regret}_t\left(x^{\star}\right) \leq \sqrt{\sum_{\tau = t_{\mathrm{min}}}^{t} \left(\Phi^{\prime}\left(\mathrm{CCV}(\tau)\right)\right)^2} + \sqrt{t - t_{\mathrm{min}}}, \tag{18}
+ $$
+
+ where $\beta = (2GD)^{-1}$ , $V = 1$ , $\lambda = \frac{1}{2\sqrt{T}}$ , and $\Phi(x) = \exp(\lambda x) - 1$ . We trivially have $\operatorname{Regret}_t(x^{\star}) \geq -\frac{Dt}{2D} \geq -\frac{t}{2}$ . Hence, from (18), we have that for $\lambda = \frac{1}{2\sqrt{T}}$ and any $t \geq t_{\mathrm{min}}$ ,
+
+ $$
+ \mathrm{CCV}_{[t_{\mathrm{min}}, T]} \leq 4GD \ln(2(1 + 2T)) \sqrt{T}.
+ $$
+
+ Since, as argued before, with Switch, $\mathrm{CCV}(t_{\mathrm{min}}) \leq \sqrt{T} \log T$ , we get that $\mathrm{CCV}_{[1:T]} \leq O(\sqrt{T} \log T)$ .
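For intuition, the switching logic analyzed above can be sketched in code. This is a hypothetical toy interface, not the paper's algorithms: `alg2_step` and `alg1_step` are placeholder per-round oracles returning (action, violation), and the threshold $\sqrt{T}\log T$ mirrors the CCV budget used in the proof.

```python
import math

def run_switch(T, alg2_step, alg1_step):
    # Sketch of Switch: follow Algorithm 2 while CCV(t) <= sqrt(T) * log(T);
    # once the threshold is crossed (at time t_min), reset the CCV bookkeeping
    # and follow Algorithm 1 for the remaining rounds.
    threshold = math.sqrt(T) * math.log(T)
    ccv = 0.0
    t_min = None  # first round at which Algorithm 1 takes over
    for t in range(1, T + 1):
        if t_min is None:
            _, v = alg2_step(t)
            ccv += v
            if ccv > threshold:
                t_min, ccv = t, 0.0  # switch and reset CCV(t_min) = 0
        else:
            _, v = alg1_step(t)
            ccv += v
    return t_min

# Toy instance: Algorithm 2 incurs unit violation per round, so the switch
# fires at the first round past sqrt(T) * log(T); Algorithm 1 incurs none.
T = 10_000
t_min = run_switch(T, lambda t: (None, 1.0), lambda t: (None, 0.0))
print(t_min)  # 922: first round past sqrt(10000) * log(10000) ~ 921.03
```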
+
+ # 17 Proof of Theorem 17
+
+ Proof: Clearly, with $f_{t} \equiv 0$ for all $t$ , under Algorithm 2, $y_{t} = x_{t}$ and the successive $x_{t}$ 's satisfy $x_{t+1} = \mathcal{P}_{S_t}(x_t)$ . Thus, the curve $\underline{x} = (x_1, x_2), (x_2, x_3), \ldots, (x_{T-1}, x_T)$ formed by Algorithm 2 for OCS is a projection curve (Definition 18) on $S_{1} \supseteq \dots \supseteq S_{T}$ , and the result follows from Lemma 19 and the fact that $\mathrm{diameter}(S_1) \leq D$ .
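The projection-curve bound can be illustrated on a toy instance (concentric shrinking balls standing in for the nested $S_t$; this is my own example, not the paper's construction): the total length of the curve $x_{t+1} = \mathcal{P}_{S_t}(x_t)$ telescopes and stays below the diameter scale of $S_1$.

```python
import math

def project_to_ball(x, center, r):
    # Euclidean projection of point x onto the closed ball B(center, r).
    dx = [xi - ci for xi, ci in zip(x, center)]
    n = math.hypot(*dx)
    if n <= r:
        return list(x)
    return [ci + r * di / n for ci, di in zip(center, dx)]

# Nested sets S_1 ⊇ S_2 ⊇ ... : concentric balls with shrinking radii.
center = [0.0, 0.0]
radii = [1.0 / (t + 1) for t in range(50)]  # S_t = B(0, 1/t), t = 1..50
x = [1.0, 0.0]                              # start on the boundary of S_1

# Projection curve: x_{t+1} = P_{S_t}(x_t); accumulate its total length.
total = 0.0
for r in radii:
    y = project_to_ball(x, center, r)
    total += math.dist(x, y)
    x = y
print(total)  # 0.98: telescopes to 1 - 1/50, below diameter(S_1) = 2
```

Here each projection moves radially inward, so the lengths telescope exactly; for general nested convex sets, Lemma 19 gives the same diameter-scale bound on the total length.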
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca9d4f2d687c42fb34e4d26541b9705cf5bc6aa660dbdeea3a2cf99a8fc24c10
3
+ size 612389
NeurIPS/2025/$O(_sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a408d2ac8e02a824373173e982d5c3ca5544de2533ff839947c19cc9d3605fc6
3
+ size 1592059
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30b697506c7a7e6b65857ebdbb1d5d59dd5b32304bcb133decf334ee8ea12bb4
3
+ size 234050
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:221e8b74fe1916827a63487ba61f393329ed049114fb0879061c85ff32b3a7d4
3
+ size 301627
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/01770134-3fc2-484e-b6ce-7462acde076d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0cea9277e6a0a94336e488f6776e6ad1e6daba430ec5e0d79f637329c4e941df
3
+ size 830885
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:793cf038d35c7485677a3c07b4ab90392656accf8b842c33dd1aa389b3355eb0
3
+ size 828860
NeurIPS/2025/$Q_sharp$_ Provably Optimal Distributional RL for LLM Post-Training/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8a330036bfbb5933e02b68e0244886bbaaf4f825d3c5cde0436fd067442bdeb
3
+ size 1653987
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3ba0191b6b0050207873b9ebfa605a54f5a0e503a50cec6d0210f9e9d6f9c77c
3
+ size 230855
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bed83b2a9a4711dec7c998a22faa6e397a625624aecb6a1deffaa71d13347bf7
3
+ size 288228
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/bfd9fcf2-7b44-4981-872e-2f1285906665_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f46f45036ace6bf2ad7aed3db04f470b5ce200cf69151cbb5e7124285b65ca7e
3
+ size 21083094
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f2e89a066fa2cc41f1379512533d672d39b7693cf6d0955b6db25f0fa6e4f47
3
+ size 1582228
NeurIPS/2025/$_Delta _mathrm{Energy}$_ Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38c8966e1e6f1a2e1a793efb0d75f3a967562002aef831bd9e6423729f7c552c
3
+ size 1213575
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddca4faf9a894c910908d1262293af88907ff1b45a8db71576c39fb843fe8a4c
3
+ size 233520
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7117f856c1d42855c96974493d8e8efa44f7d882736d2e12e1f1343e3fb510ad
3
+ size 286578
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/4a379936-68f8-4c4d-92ac-06c0226cef96_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3249a812495cc1faf6697a31194a1f3d44907d972d7dd72b7b4afc114c2f9726
3
+ size 41109155
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9cd5d1496c9e12e58e11f1e41d55329754e75a1aa8847c7fdf4c9c73d3bd63ab
3
+ size 3071152
NeurIPS/2025/$_Psi$-Sampler_ Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aff06ae22e3cc9f5461914738d75a84ee653bf64a2f84147f370b1fc444f4930
3
+ size 1232303