
Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits

Jiatai Huang1 Yan Dai1 Longbo Huang

Abstract

In this paper, we generalize the concept of heavy-tailed multi-armed bandits to adversarial environments, and develop robust best-of-both-worlds algorithms for heavy-tailed multi-armed bandits (MAB), where losses have $\alpha$-th $(1 < \alpha \leq 2)$ moments bounded by $\sigma^{\alpha}$, while the variances may not exist. Specifically, we design an algorithm HTINF: when the heavy-tail parameters $\alpha$ and $\sigma$ are known to the agent, HTINF simultaneously achieves the optimal regret for both stochastic and adversarial environments, without knowing the actual environment type a-priori. When $\alpha, \sigma$ are unknown, HTINF achieves a $\log T$-style instance-dependent regret in stochastic cases and an $o(T)$ no-regret guarantee in adversarial cases. We further develop an algorithm AdaTINF, achieving the $\mathcal{O}(\sigma K^{1 - 1/\alpha} T^{1/\alpha})$ minimax optimal regret even in adversarial settings, without prior knowledge of $\alpha$ and $\sigma$. This result matches the known regret lower-bound (Bubeck et al., 2013), which was proved assuming a stochastic environment with both $\alpha$ and $\sigma$ known. To our knowledge, the proposed HTINF algorithm is the first to enjoy a best-of-both-worlds regret guarantee, and AdaTINF is the first algorithm that can adapt to both $\alpha$ and $\sigma$ to achieve the optimal gap-independent regret bound in the classical heavy-tailed stochastic MAB setting and our novel adversarial formulation.

1. Introduction

In this paper, we focus on the multi-armed bandit problem with heavy-tailed losses. Specifically, in our setting, there is an agent facing $K$ feasible actions (called bandit arms) to

*Equal contribution ${}^{1}$ Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China. Correspondence to: Longbo Huang longbohuang@tsinghua.edu.cn.

Proceedings of the $39^{th}$ International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

sequentially make decisions on. For each time step $t \in [T]$, each arm $i \in [K]$ is associated with a loss distribution $\nu_{t,i}$ which is unknown to the agent. The only constraint on $\nu_{t,i}$ is that its $\alpha$-th moment $(\alpha \in (1,2])$ is bounded by some constant $\sigma^{\alpha}$, i.e., $\mathbb{E}_{\ell \sim \nu_{t,i}}[|\ell|^{\alpha}] \leq \sigma^{\alpha}$ for all $t \in [T]$ and $i \in [K]$. However, neither $\alpha$ nor $\sigma$ is known to the agent.

At each step $t$, the agent picks an arm $i_t$ and observes a loss $\ell_{t,i_t}$ drawn from the distribution $\nu_{t,i_t}$, independently of all previous steps. The goal of the agent is to minimize the pseudo-regret, defined as the expected difference between the loss it suffered and the loss of always pulling the best arm in hindsight (formally defined in Definition 3.2), where the expectation is taken with respect to the randomness both in the algorithm and in the environment.

Prior MAB literature mostly studies settings where the loss distributions are supported on a bounded interval $I$ (e.g., $I = [0,1]$) known to the agent before-hand, which is a special case of our setting where all $\nu_{t,i}$'s are Dirac measures centered within $I$ (Seldin & Slivkins, 2014; Zimmert & Seldin, 2019). By contrast, there is another common existing MAB setting called scale-free MAB (De Rooij et al., 2014; Orabona & PΓ‘l, 2018), where the range of losses is not known. In this case, the loss range itself can even depend on other scale parameters of the problem instance (e.g., $T$ and $K$) rather than being a constant. Our heavy-tailed setting can be seen as an intermediate setting between bounded-loss MAB and scale-free MAB, where loss feedback can be indefinitely large, but not in a completely arbitrary manner. This setting naturally extends classical MAB settings, including bounded-loss MAB and sub-Gaussian-loss MAB.

Following the convention of prior MAB literature, we further distinguish the environment into two typical types. Environments of the first type consist of time-homogeneous distributions, i.e., for each $i \in [K]$, $\nu_{t,i} = \nu_{1,i}$ holds for all $t \in [T]$. We call them stochastic environments. Bubeck et al. (2013) proved that, for heavy-tailed stochastic bandits, even when $\alpha$ and $\sigma$ are both known to the agent, an $\Omega(\sigma K^{1 - 1/\alpha} T^{1/\alpha})$ instance-independent regret and an $\Omega\big(\sigma^{\frac{\alpha}{\alpha - 1}} \sum_{i \neq i^{*}} \Delta_{i}^{-\frac{1}{\alpha - 1}} \log T\big)$ instance-dependent regret are

unavoidable, where $i^*$ denotes the optimal arm in hindsight, and $\Delta_i \triangleq \mathbb{E}_{\ell \sim \nu_{1,i}}[\ell] - \mathbb{E}_{\ell \sim \nu_{1,i^*}}[\ell]$ is the sub-optimality gap between $i$ and $i^*$. They also designed an algorithm that matches these lower-bounds up to logarithmic factors when both $\alpha$ and $\sigma$ are known.

In the second type of environments, loss distributions can be time inhomogeneous, and we call them adversarial environments. To our knowledge, no previous work studied similar adversarial heavy-tailed MAB problems. It can be seen that the instance-independent lower-bound $\Omega (\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ for stochastic heavy-tailed MAB proved by Bubeck et al. (2013) is also a lower-bound for this adversarial extension.

In this paper, we develop algorithms for heavy-tailed bandits in both stochastic and adversarial cases. In contrast to existing (stochastic) heavy-tailed MAB algorithms (Bubeck et al., 2013; Lee et al., 2020) that heavily rely on well-designed mean estimators for heavy-tailed distributions, our algorithms are mainly designed based on the Follow-the-Regularized-Leader (FTRL) framework, which has been applied in a number of adversarial MAB works (Zimmert & Seldin, 2019; Seldin & Lugosi, 2017). Our proposed algorithms enjoy optimal or near-optimal regret guarantees and require much less prior knowledge compared to prior works. When $\sigma, \alpha$ are known before-hand, our algorithm matches existing gap-dependent and gap-independent regret lower-bounds, while previous algorithms suffer extra log-factors (see Table 1 for a comparison). Finally, we propose an algorithm with $\mathcal{O}(\sigma K^{1 - 1/\alpha} T^{1/\alpha})$ regret even when $\sigma, \alpha$ are both unknown, which shows that the existing $\Omega(\sigma K^{1 - 1/\alpha} T^{1/\alpha})$ lower-bound is tight even when all prior knowledge on $\sigma, \alpha$ is absent.

1.1. Our Contributions

We first introduce a novel adversarial MAB setting where losses are heavy-tailed, which generalizes the existing heavy-tailed stochastic MAB setting and scalar-loss adversarial MAB setting. Three novel algorithms are proposed. HTINF enjoys an optimal best-of-both-worlds regret guarantee when $\alpha, \sigma$ are known. Without the knowledge of $\alpha, \sigma$ , OptTINF guarantees $o(T)$ adversarial regret (a.k.a. "no-regret guarantee") and $\mathcal{O}(\log T)$ gap-dependent bound for stochastically constrained environments. AdaTINF guarantees minimax optimal $\mathcal{O}(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ adversarial regret.

1.1.1. KNOWN $\alpha ,\sigma$ CASE

When $\alpha, \sigma$ are both known to the agent, we provide a novel algorithm called Heavy-Tail Tsallis-INF (HTINF, Algorithm 1), based on the Follow-the-Regularized-Leader (FTRL) framework. In HTINF, we introduce a novel skipping technique equipped with an action-dependent skipping threshold ($r_t$ in Algorithm 1) to handle the heavy-tailed losses, which can be of independent interest.

HTINF enjoys the so-called best-of-both-worlds property (Bubeck & Slivkins, 2012) to achieve $\mathcal{O}\left(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha}\right)$ regret in adversarial settings and $\mathcal{O}\left(\sigma^{-\frac{\alpha}{\alpha - 1}}\sum_{i\neq i^*}\Delta_i^{-\frac{1}{\alpha - 1}}\log T\right)$ regret in stochastically constrained adversarial settings (which contains stochastic cases; see Section 3 for definition) simultaneously, without knowing the actual environment type a-priori. The claimed regret bounds both match the corresponding lower-bounds by Bubeck et al. (2013), showing that these bounds are indeed tight even for our adversarial setting.

1.1.2. UNKNOWN $\alpha, \sigma$ CASE

When the agent does not have access to $\alpha$ and $\sigma$, running HTINF optimistically with $\alpha = 2$ and $\sigma = 1$ (named OptTINF; Algorithm 2) also gives non-trivial regret guarantees. Specifically, we show that it enjoys a near-optimal regret of $\mathcal{O}\left(\sum_{i \neq i^*} \left( \frac{\sigma^{2\alpha}}{\Delta_i^{3 - \alpha}} \right)^{\frac{1}{\alpha - 1}} \log T\right)$ in stochastically constrained adversarial environments and an $\mathcal{O}\left( \sigma^\alpha K^{\frac{\alpha - 1}{2}} T^{\frac{3 - \alpha}{2}} \right)$ regret in adversarial cases, which is still $o(T)$.

We further present another novel algorithm called Adaptive Tsallis-INF (AdaTINF, Algorithm 3) for heavy-tailed bandits. Without knowing the heavy-tail parameters $\alpha$ and $\sigma$ before-hand, AdaTINF is capable of guaranteeing an $\mathcal{O}\left(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha}\right)$ regret in the adversarial setting, matching the regret lower-bound from Bubeck et al. (2013).

To the best of our knowledge, all prior algorithms for MAB with heavy-tailed losses need to know $\alpha$ before-hand. The proposed OptTINF and AdaTINF are the first algorithms to adapt to both unknown heavy-tail parameters $\alpha$ and $\sigma$, while achieving near-optimal regret in stochastic and adversarial settings, respectively.

1.2. Related Work

Heavy-tailed losses: The heavy-tailed (stochastic) bandit model was first introduced by Bubeck et al. (2013), where instance-dependent and instance-independent lower-bounds were given. They designed an algorithm nearly matching these lower-bounds (with an extra $\log T$ factor in the gap-independent regret) when $\sigma, \alpha$ are both known to the agent. Vakili et al. (2013) derived a tighter upper-bound with $\alpha$, $\sigma$ and $\min_i \Delta_i$ all presented to the agent. Kagrecha et al. (2019) gave an algorithm adaptive to $\sigma$ in a pure exploration setting. Lee et al. (2020) got rid of the requirement of $\sigma$, yielding near-optimal regret bounds with prior knowledge of $\alpha$ only. Moreover, all of the above algorithms build on the UCB framework, which does not directly apply to adversarial environments. One can refer to Table 1 for a comparison.

Other variations with heavy-tailed losses are also studied in the literature, e.g., linear bandits (Medina & Yang, 2016;

Table 1. An overview of the proposed algorithms and related works.

| Algorithm | Loss Type | Prior Knowledge | Total Regret |
| --- | --- | --- | --- |
| Lower-bounds (Bubeck et al., 2013) | Stochastic${}^a$ | $\alpha, \sigma$ | $\Omega\big(\sigma^{\frac{\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{-\frac{1}{\alpha-1}} \log T\big)$; $\Omega\big(\sigma K^{1-1/\alpha} T^{1/\alpha}\big)$ |
| RobustUCB (Bubeck et al., 2013) | Stochastic | $\alpha, \sigma$ | $\mathcal{O}\big(\sum_{i \neq i^*} (\sigma^{\alpha}/\Delta_i)^{\frac{1}{\alpha-1}} \log T\big)$ (optimal); $\mathcal{O}\big(\sigma (K \log T)^{1-1/\alpha} T^{1/\alpha}\big)$ (sub-optimal by $\log T$ factors) |
| Lee et al. (2020) | Stochastic | $\alpha$; requires $\mu_i \in [0,1]$ | $\mathcal{O}\big(K^{1-1/\alpha} T^{1/\alpha} \log K\big)^b$ (sub-optimal by $\log K$ factors) |
| 1/2-Tsallis-INF (Zimmert & Seldin, 2019) | SCA-unique${}^c$; Adversarial | requires $\alpha = 2$ and $[0,1]$-bounded losses | $\mathcal{O}\big(\sum_{i \neq i^*} \frac{1}{\Delta_i} \log T\big)$; $\mathcal{O}(\sqrt{KT})$ (both optimal for the $\alpha = 2$, $\sigma = 1$ case) |
| HTINF (ours) | SCA-unique; Adversarial | $\alpha, \sigma$ | $\mathcal{O}\big(\sum_{i \neq i^*} (\sigma^{\alpha}/\Delta_i)^{\frac{1}{\alpha-1}} \log T\big)$ (optimal); $\mathcal{O}\big(\sigma K^{1-1/\alpha} T^{1/\alpha}\big)$ (optimal) |
| Optimistic HTINF (ours) | SCA-unique; Adversarial | None | $\mathcal{O}\big(\sum_{i \neq i^*} (\sigma^{2\alpha}/\Delta_i^{3-\alpha})^{\frac{1}{\alpha-1}} \log T\big)$; $\mathcal{O}\big(\sigma^{\alpha} K^{\frac{\alpha-1}{2}} T^{\frac{3-\alpha}{2}}\big)$ |
| AdaTINF (ours) | Adversarial | None${}^d$ | $\mathcal{O}\big(\sigma K^{1-1/\alpha} T^{1/\alpha}\big)$ (optimal) |

${}^{a}$As discussed in Section 3, the instance-independent lower-bound automatically applies to adversarial settings, and the main result of this paper shows that it is indeed tight even for adversarial settings.
${}^{b}$ Lee et al. (2020) regarded $\sigma$ as a constant when stating their regret bounds. By designing different estimators, they also gave various instance-dependent bounds, each with ${\left( \log T\right) }^{\frac{\alpha }{\alpha - 1}}$ (sub-optimal) dependency on $T$ . One can check Table 1 in their paper for more details.
${}^{c}$ Abbreviation for stochastically constrained adversarial settings with a unique optimal arm.
${}^{d}$Though the time horizon $T$ is assumed to be known in Algorithm 3, it is in fact non-essential for AdaTINF. The removal of $T$, via the usual doubling trick, will not cause extra factors. Check Appendix D for more discussion.

Xue et al., 2021), contextual bandits (Shao et al., 2018) and Lipschitz bandits (Lu et al., 2019). However, none of above algorithms removes the dependency on $\alpha$ .

Best-of-both-worlds: This concept of designing a single algorithm that yields near-optimal regret in both stochastic and adversarial environments was first proposed by Bubeck & Slivkins (2012). Bubeck & Slivkins (2012); Auer & Chiang (2016); Besson & Kaufmann (2018) designed algorithms that initially run a policy for stochastic settings, and may permanently switch to a policy for adversarial settings during execution. Seldin & Slivkins (2014); Seldin & Lugosi (2017); Wei & Luo (2018); Zimmert & Seldin (2019) designed algorithms using the Online Mirror Descent (OMD) or Follow-the-Regularized-Leader (FTRL) framework. Our

work falls into the second category.

Adaptive algorithms: There is a rich literature on deriving algorithms adaptive to the loss sequences, for either the full-information setting (Luo & Schapire, 2015; Orabona & Pal, 2016), stochastic bandits (Garivier & CappΓ©, 2011; Lattimore, 2015), or adversarial bandits (Wei & Luo, 2018; Bubeck et al., 2019). There are also many algorithms that are adaptive to the loss range, the so-called "scale-free" algorithms (De Rooij et al., 2014; Orabona & PΓ‘l, 2018; Hadiji & Stoltz, 2020). However, as mentioned above, to our knowledge, our work is the first to adapt to heavy-tail parameters.

2. Notations

We use $[N]$ to denote the integer set $\{1, 2, \dots, N\}$. Let $f$ be any strictly convex function defined on a convex set $\Omega \subseteq \mathbb{R}^K$. For $x, y \in \Omega$, if $\nabla f(x)$ exists, we denote the Bregman divergence induced by $f$ as

D_f(y, x) \triangleq f(y) - f(x) - \langle \nabla f(x), y - x \rangle.

We use $f^{*}(y) \triangleq \sup_{x \in \mathbb{R}^{K}} \{\langle y, x \rangle - f(x)\}$ to denote the Fenchel conjugate of $f$. Denote the $(K-1)$-dimensional probability simplex by $\triangle_{[K]} = \{x \in \mathbb{R}_+^K \mid x_1 + x_2 + \dots + x_K = 1\}$. We use $\mathbf{e}_i \in \triangle_{[K]}$ to denote the vector whose $i$-th coordinate is 1 and others are 0.

Let $\overline{f}$ denote the restriction of $f$ on $\triangle_{[K]}$ , i.e.,

\overline{f}(x) = \begin{cases} f(x), & x \in \triangle_{[K]} \\ \infty, & x \notin \triangle_{[K]} \end{cases}.

For a random event $\mathcal{E}$, we use $\mathbb{1}[\mathcal{E}]$ to denote the indicator of $\mathcal{E}$, which equals 1 if $\mathcal{E}$ happens and 0 otherwise.
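As an illustration of these definitions, the following Python sketch (function names and the choice $\alpha = 1.5$ are ours, for illustration only) evaluates the Bregman divergence induced by the $1/\alpha$-Tsallis entropy used later as a regularizer; strict convexity makes the divergence non-negative.

```python
import numpy as np

alpha = 1.5  # illustrative heavy-tail parameter in (1, 2]

def tsallis(x):
    # Psi(x) = -alpha * sum_i x_i^{1/alpha}; convex since 1/alpha is in (0, 1)
    return -alpha * np.sum(x ** (1.0 / alpha))

def grad_tsallis(x):
    # gradient of Psi: d/dx_i = -x_i^{1/alpha - 1}
    return -(x ** (1.0 / alpha - 1.0))

def bregman(f, grad_f, y, x):
    # D_f(y, x) = f(y) - f(x) - <grad f(x), y - x>
    return f(y) - f(x) - np.dot(grad_f(x), y - x)

x = np.array([0.5, 0.3, 0.2])   # points on the probability simplex
y = np.array([0.25, 0.25, 0.5])
d = bregman(tsallis, grad_tsallis, y, x)  # non-negative by convexity
```

By strict convexity, $D_f(y, x) > 0$ whenever $y \neq x$, and $D_f(x, x) = 0$.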

3. Problem Setting

We now introduce our formulation of the heavy-tailed MAB problem. Formally speaking, there are $K \geq 2$ available arms indexed from 1 to $K$, and $T \geq 1$ time slots for the agent to make decisions sequentially. $\{\nu_{t,i}\}_{t \in [T], i \in [K]}$ are $T \times K$ probability distributions over the reals, which are fixed before the game starts and unknown to the agent (i.e., chosen by an oblivious adversary). Instead of the usual assumption of bounded variance or even bounded range, we only assume that they are heavy-tailed, as follows.

Assumption 3.1 (Heavy-tailed Losses Assumption). The $\alpha$-th moments of all loss distributions $\{\nu_{t,i}\}$ are bounded by $\sigma^{\alpha}$ for some constants $1 < \alpha \leq 2$ and $\sigma > 0$, i.e.,

\mathbb{E}_{\ell \sim \nu_{t,i}}\left[|\ell|^{\alpha}\right] \leq \sigma^{\alpha}, \quad \forall t \in [T], i \in [K].
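For intuition, a Pareto-type loss satisfies Assumption 3.1 while having infinite variance. The snippet below is a sketch (the tail index 2 and the sample size are our illustrative choices) that empirically estimates the $\alpha$-th moment of such a distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5
# Pareto (Lomax) losses with tail index 2, shifted to start at 1:
# E[|ell|^alpha] is finite for alpha < 2, but the second moment
# (hence the variance) is infinite.
ell = 1.0 + rng.pareto(2.0, size=500_000)
emp_moment = float(np.mean(np.abs(ell) ** alpha))  # estimates E[|ell|^alpha]
sigma = emp_moment ** (1.0 / alpha)  # such a sigma witnesses Assumption 3.1
```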

In this paper, we will discuss how to design algorithms for two different cases: $\alpha$ and $\sigma$ are either known before-hand or oblivious (i.e., fixed before-hand but unknown to the agent). We denote by $\mu_{t,i} \triangleq \mathbb{E}_{x \sim \nu_{t,i}}[x]$ the individual mean loss of each arm and by $\mu_t \triangleq (\mu_{t,1}, \mu_{t,2}, \ldots, \mu_{t,K})$ the mean loss vector at time $t$, respectively.

At the beginning of each time slot $t$ , the agent needs to choose an action $i_t \in [K]$ . At the end of time slot $t$ , the agent will receive and suffer a loss $\ell_{t,i_t}$ , which is guaranteed to be an independent sample from the distribution $\nu_{t,i_t}$ . The agent is allowed to make the decision $i_t$ based on all history actions $i_1, \ldots, i_{t-1}$ , all history feedback $\ell_{1,i_1}, \ldots, \ell_{t-1,i_{t-1}}$ , and any amount of private randomness of the agent.

The objective of the agent is to minimize the total loss. Equivalently, the agent aims to minimize the following pseudo-regret defined by Bubeck & Slivkins (2012) (also referred to as the regret in this paper for simplicity):

Definition 3.2 (Pseudo-regret). We define

\mathcal{R}_T \triangleq \max_{i \in [K]} \mathbb{E}\left[\sum_{t=1}^{T} \ell_{t,i_t} - \sum_{t=1}^{T} \ell_{t,i}\right] = \max_{i \in [K]} \mathbb{E}\left[\sum_{t=1}^{T} \mu_{t,i_t} - \sum_{t=1}^{T} \mu_{t,i}\right] \tag{1}

to be the pseudo-regret of an MAB algorithm, where the expectation is taken with respect to randomness from both the algorithm and the environment.
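As a concrete illustration of Eq. (1), the helper below (our naming, not part of the paper) computes the pseudo-regret of a given action sequence against a known mean-loss matrix, with the environment expectation already taken.

```python
import numpy as np

def pseudo_regret(mu, actions):
    """mu: (T, K) matrix of mean losses mu_{t,i}; actions: arm i_t per round.
    Returns sum_t mu[t, i_t] - min_i sum_t mu[t, i], i.e., Eq. (1) with the
    environment randomness averaged out and the action sequence taken as given."""
    mu = np.asarray(mu, dtype=float)
    suffered = mu[np.arange(len(actions)), list(actions)].sum()
    best_fixed_arm = mu.sum(axis=0).min()  # best single arm in hindsight
    return float(suffered - best_fixed_arm)
```

For instance, with `mu = [[1, 0], [1, 0]]` and `actions = [0, 0]`, the best fixed arm is arm 1 and the pseudo-regret is 2.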

In the remainder of this paper, we will use $\mathcal{F}_t \triangleq \sigma(i_1, \dots, i_t, \ell_{1,i_1}, \dots, \ell_{t,i_t})$ to denote the natural filtration of an MAB algorithm execution.

3.1. Stochastically Constrained Environments

Definition 3.3 (Stochastic Environments). If, for each arm $i \in [K]$ , all $T$ loss distributions $\nu_{1,i}, \nu_{2,i}, \ldots, \nu_{T,i}$ are identical, we call such environment a stochastic environment.

A more general setting is called stochastically constrained adversarial setting (Wei & Luo, 2018), defined as follows.

Definition 3.4 (Stochastically Constrained Adversarial Environments). If there exists an optimal arm $i^{*} \in [K]$ and mean gaps $\Delta_{i} \geq 0$ such that for all $t \in [T]$, we have $\mu_{t,i} - \mu_{t,i^{*}} \geq \Delta_{i}$ for all $i \neq i^{*}$, we call such an environment a stochastically constrained adversarial environment.

It can be seen that stochastic problem instances are special cases of stochastically constrained adversarial instances. Hence, in this paper, we study this more general setting instead of stochastic cases. As in Zimmert & Seldin (2019), we make the following assumption.

Assumption 3.5 (Unique Optimal Arm Assumption). In stochastically constrained adversarial environments, $i^{*}$ is the unique best arm throughout the process, i.e.,

\Delta_i > 0, \quad \forall i \neq i^{*}.

Remark. The existence of a unique optimal arm is a common assumption in MAB and RL literature leveraging FTRL with Tsallis entropy regularizers (Zimmert & Seldin, 2019; Erez & Koren, 2021; Jin & Luo, 2020; Jin et al., 2021). Recently, Ito (2021) gave a new analysis of Tsallis-INF's logarithmic regret on stochastic MAB instances without this assumption. It is an interesting future work to figure out whether it is doable in our heavy-tailed losses setting.

3.2. Adversarial Environments

In contrast, an environment without any extra requirement is called an adversarial environment. We denote the best arm in hindsight by $i^{*}$, i.e., the $i \in [K]$ that maximizes the expectation in Eq. (1). We make the following assumption on the losses of arm $i^{*}$.

Assumption 3.6 (Truncated Non-negative Losses Assumption). There exists an optimal arm $i^{*}$ such that $\ell_{t,i^{*}}$ is truncated non-negative for all $t \in [T]$.

In the assumption, the truncated non-negative property is defined as follows.

Definition 3.7 (Truncated Non-negativity). A random variable $X$ is truncated non-negative, if for any $M \geq 0$ ,

\mathbb{E}\left[X \cdot \mathbb{1}[|X| > M]\right] \geq 0.

Remark. This truncated non-negativity requirement is strictly weaker than the common non-negative losses assumption in the MAB literature, especially in works fitting the FTRL framework (Auer et al., 2002; Zimmert & Seldin, 2019). Intuitively, truncated non-negativity forbids the random variable from holding too much mass on its negative part, but the variable can still have negative outcomes.
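To make Definition 3.7 concrete, the following sketch (the distribution and constants are our illustrative choices) empirically checks the truncated tail mean $\mathbb{E}[X \cdot \mathbb{1}[|X| > M]]$ for a shifted Pareto variable that takes negative values with positive probability yet remains truncated non-negative.

```python
import numpy as np

def truncated_tail_mean(samples, M):
    # empirical estimate of E[X * 1[|X| > M]]
    s = np.asarray(samples, dtype=float)
    return float(np.mean(s * (np.abs(s) > M)))

rng = np.random.default_rng(0)
# Shifted Pareto (Lomax): support (-0.5, inf), so some outcomes are
# negative, but the heavy positive tail dominates every truncation level.
x = rng.pareto(1.5, size=200_000) - 0.5
tail_means = [truncated_tail_mean(x, M) for M in (0.0, 1.0, 10.0)]
# each entry is (empirically) non-negative, as Definition 3.7 requires
```

For $M \geq 0.5$ the check is deterministic here: every sample with $|X| > M$ is positive, so the truncated tail mean cannot be negative.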

4. Static Algorithm: HTINF

In this section, we first present an algorithm achieving optimal regrets when $\alpha, \sigma$ are both known before-hand, and then extend it to the unknown $\alpha, \sigma$ case.

4.1. Known $\alpha, \sigma$ Case

For the case where both $\alpha$ and $\sigma$ are known a-priori, we present an FTRL algorithm with the $\frac{1}{\alpha}$-Tsallis entropy function $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$ (Tsallis, 1988; Abernethy et al., 2015; Zimmert & Seldin, 2019) as the regularizer. We pick $\eta_t = \sigma^{-1} t^{-1/\alpha}$ as the learning rate of the FTRL algorithm. Importance sampling is used to construct estimates $\hat{\ell}_t$ of the true loss feedback vector $\ell_t$.

In this algorithm, to handle the heavy-tailed losses, we design a novel skipping technique with an action-dependent threshold $r_t \propto \eta_t^{-1} x_{t,i_t}^{1/\alpha}$ at time slot $t$: the agent simply discards those time slots where the absolute value of the loss feedback exceeds $r_t$. Note that this skipping criterion, though dependent on $i_t$, is properly defined, for it is checked after deciding the arm $i_t$ and receiving the feedback. To decide $x_t$, the probability to pull each arm in a new time step, we pick the best mixed action $x$ against the sum of all non-skipped estimated losses $\hat{\ell}_s$, in a regularized manner. The pseudo-code of the algorithm is presented in Algorithm 1.

The performance of Algorithm 1 is presented in the following Theorem 4.1. The proof is sketched in Section 5. For a

Algorithm 1 Heavy-Tail Tsallis-INF (HTINF)

Input: Number of arms $K$ , heavy-tail parameters $\alpha$ and $\sigma$
Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$

1: for $t = 1, 2, \cdots$ do
2: Calculate the policy with learning rate $\eta_t^{-1} = \sigma t^{1/\alpha}$ and regularizer $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$:

x_t \leftarrow \underset{x \in \triangle_{[K]}}{\operatorname{argmin}} \left( \eta_t \sum_{s=1}^{t-1} \langle \hat{\ell}_s, x \rangle + \Psi(x) \right)

3: Sample a new action $i_t \sim x_t$.
4: Calculate the skipping threshold $r_t \gets \Theta_{\alpha} \eta_t^{-1} x_{t,i_t}^{1/\alpha}$, where $\Theta_{\alpha} = \min\left\{1 - 2^{-\frac{\alpha - 1}{2\alpha - 1}}, \left(2 - \frac{2}{\alpha}\right)^{\frac{1}{2 - \alpha}}\right\}$.
5: Play $i_t$ and observe the loss feedback $\ell_{t,i_t}$.
6: if $|\ell_{t,i_t}| > r_t$ then
7: $\hat{\ell}_t \gets \mathbf{0}$
8: else
9: Construct the weighted importance sampling loss estimator $\hat{\ell}_{t,i} \gets \frac{\ell_{t,i}}{x_{t,i}} \mathbb{1}[i = i_t], \forall i \in [K]$
10: end if
11: end for

detailed formal proof, see Appendix A.
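Algorithm 1 can be prototyped compactly. The sketch below is an illustrative implementation under the stated update rules, not the authors' code; all function names are ours. It solves the FTRL step via the KKT condition $x_i = (\eta_t L_i + \lambda)^{-\alpha/(\alpha-1)}$ with a bisection on the normalizer $\lambda$, then applies the action-dependent skipping threshold and the importance-weighted estimator.

```python
import numpy as np

def tsallis_ftrl_step(L_hat, eta, alpha, iters=200):
    """argmin_{x in simplex} eta * <L_hat, x> + Psi(x), where
    Psi(x) = -alpha * sum_i x_i^{1/alpha}. The KKT conditions give
    x_i = (eta * L_hat_i + lam)^{-alpha/(alpha-1)}; we bisect on lam."""
    z = eta * np.asarray(L_hat, dtype=float)
    p = alpha / (alpha - 1.0)
    lo = -z.min() + 1e-12                    # keep every base positive
    hi = lo + 1.0
    while np.sum((z + hi) ** (-p)) > 1.0:    # total mass decreases in lam
        hi = lo + 2.0 * (hi - lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum((z + mid) ** (-p)) > 1.0:
            lo = mid
        else:
            hi = mid
    x = (z + hi) ** (-p)
    return x / x.sum()                       # tiny renormalization for safety

def htinf(loss_fn, K, T, alpha, sigma, seed=0):
    """Sketch of HTINF (Algorithm 1). loss_fn(t, i) samples a loss ell_{t,i}."""
    rng = np.random.default_rng(seed)
    # Theta_alpha from Line 4; the second term is only used for alpha < 2
    # (at alpha = 2 the first term is the smaller one anyway)
    theta = 1.0 - 2.0 ** (-(alpha - 1.0) / (2.0 * alpha - 1.0))
    if alpha < 2.0:
        theta = min(theta, (2.0 - 2.0 / alpha) ** (1.0 / (2.0 - alpha)))
    L_hat = np.zeros(K)                      # cumulative non-skipped estimates
    actions = []
    for t in range(1, T + 1):
        eta = 1.0 / (sigma * t ** (1.0 / alpha))
        x = tsallis_ftrl_step(L_hat, eta, alpha)
        i = int(rng.choice(K, p=x))
        ell = loss_fn(t, i)
        r = theta * (1.0 / eta) * x[i] ** (1.0 / alpha)  # skipping threshold
        if abs(ell) <= r:                    # keep: importance-weighted estimate
            L_hat[i] += ell / x[i]           # skipped rounds contribute hat-ell = 0
        actions.append(i)
    return actions
```

With zero cumulative losses the bisection returns the uniform distribution, matching the FTRL initialization; thereafter arms with larger estimated cumulative loss receive smaller probability.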

Theorem 4.1 (Performance of HTINF). If Assumptions 3.1 and 3.6 hold, we have the following best-of-both-worlds style regret guarantees.

  1. When the environment is adversarial, Algorithm 1 ensures regret bound

\mathcal{R}_T \leq \mathcal{O}\left(\sigma K^{1 - 1/\alpha} T^{1/\alpha}\right).

  2. If the environment is stochastically constrained adversarial with a unique optimal arm $i^{*}$, i.e., Assumption 3.5 holds, then Algorithm 1 ensures

\mathcal{R}_T \leq \mathcal{O}\left(\sigma^{\frac{\alpha}{\alpha - 1}} \sum_{i \neq i^{*}} \Delta_{i}^{-\frac{1}{\alpha - 1}} \log T\right).

The $\mathcal{O}(\log T)$ instance-dependent bound in Theorem 4.1 is due to a property similar to the self-bounding property of the $1/2$-Tsallis entropy (Zimmert & Seldin, 2019). For $\alpha < 2$, such properties of the $1/\alpha$-Tsallis entropy do not automatically hold; they are made possible by our novel skipping mechanism with action-dependent threshold.
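To sketch the idea (a standard self-bounding argument; $c$ denotes an unspecified constant): under Assumption 3.5 the regret admits the two expressions

\mathcal{R}_T \geq \sum_{t=1}^{T} \sum_{i \neq i^*} \Delta_i\, \mathbb{E}[x_{t,i}], \qquad \mathcal{R}_T \leq c\, \sigma \sum_{t=1}^{T} \sum_{i \neq i^*} t^{1/\alpha - 1}\, \mathbb{E}\big[x_{t,i}^{1/\alpha}\big],

so that for any $\beta \in (0,1)$,

\mathcal{R}_T = \frac{\mathcal{R}_T - \beta \mathcal{R}_T}{1 - \beta} \leq \frac{1}{1 - \beta} \sum_{t=1}^{T} \sum_{i \neq i^*} \max_{z \in [0,1]} \left( c\, \sigma\, t^{1/\alpha - 1} z^{1/\alpha} - \beta \Delta_i z \right);

maximizing each term over $z$ yields a summand of order $\sigma^{\frac{\alpha}{\alpha - 1}} \Delta_i^{-\frac{1}{\alpha - 1}} t^{-1}$, and summing over $t$ gives the claimed $\mathcal{O}(\log T)$ gap-dependent bound.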

4.2. Extending to Unknown $\alpha, \sigma$ Case: OptTINF

The two hyper-parameters $\sigma, \alpha$ in Algorithm 1 are simply set to the true heavy-tail parameters of the loss distributions when they are known before-hand. When the distributions' heavy-tail parameters $\alpha, \sigma$ are both unknown to the agent, we can prove that directly running HTINF with algorithm hyper-parameters $\alpha = 2$ and $\sigma = 1$ (not necessarily equal to the true $\alpha, \sigma$ values) "optimistically", as in Algorithm 2, still achieves $\mathcal{O}(\log T)$ regret in stochastic cases and sub-linear regret in adversarial cases.

Algorithm 2 Optimistic HTINF (OptTINF)

Input: Number of arms $K$

Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$

1: Run HTINF (Algorithm 1) with hyper-parameters $\alpha = 2$ and $\sigma = 1$ .

The performance of Algorithm 2 is described below. As the analysis is quite similar to that of Algorithm 1, we postpone the formal proof to Appendix B.

Theorem 4.2 (Performance of OptTINF). If Assumptions 3.1 and 3.6 hold, the following two statements are valid.

  1. In adversarial cases, Algorithm 2 achieves

RT≀O(σαKΞ±βˆ’12T3βˆ’Ξ±2+KT). \mathcal {R} _ {T} \leq \mathcal {O} \left(\sigma^ {\alpha} K ^ {\frac {\alpha - 1}{2}} T ^ {\frac {3 - \alpha}{2}} + \sqrt {K T}\right).

  2. In stochastically constrained adversarial environments with a unique optimal arm $i^{*}$ (Assumption 3.5), it ensures

\mathcal{R}_T \leq \mathcal{O}\left(\sigma^{\frac{2\alpha}{\alpha - 1}} \sum_{i \neq i^{*}} \Delta_{i}^{-\frac{3 - \alpha}{\alpha - 1}} \log T\right).

For both cases, $\sigma$ and $\alpha$ in the regret bounds refer to the true heavy-tail parameters of the loss distributions.

Theorem 4.2 shows that, when facing an instance with unknown $1 < \alpha < 2$, Algorithm 2 still guarantees an $\mathcal{O}(T^{\frac{3 - \alpha}{2}})$ "no-regret" performance and an $\mathcal{O}(\log T)$ instance-dependent regret upper-bound for stochastic instances.

5. Regret Analysis of HTINF

In this section, we sketch the analysis of Algorithm 1. By definition, we need to bound

\mathcal{R}_T(y) \triangleq \sum_{t=1}^{T} \mathbb{E}[\langle x_t - y, \mu_t \rangle] \quad (y \in \triangle_{[K]}) \tag{2}

for the one-hot vector $y \triangleq \mathbf{e}_{i^*}$. For any $t \in [T], i \in [K]$, let $\mu_{t,i}^{\prime} \triangleq \mathbb{E}[\ell_{t,i} \mathbb{1}[|\ell_{t,i}| \leq r_t] \mid \mathcal{F}_{t-1}, i_t = i]$. For a given $y$,

we decompose $\mathcal{R}_T(y)$ into two parts:

\begin{aligned} \mathcal{R}_T(y) &= \mathbb{E}\left[\sum_{t=1}^{T} \langle x_t - y, \mu_t - \mu_t^{\prime} \rangle\right] + \mathbb{E}\left[\sum_{t=1}^{T} \langle x_t - y, \mu_t^{\prime} \rangle\right] \\ &= \underbrace{\mathbb{E}\left[\sum_{t=1}^{T} \langle x_t - y, \mu_t - \mu_t^{\prime} \rangle\right]}_{\text{skipping gap}} + \underbrace{\mathbb{E}\left[\sum_{t=1}^{T} \langle x_t - y, \hat{\ell}_t \rangle\right]}_{\text{FTRL error}} \end{aligned} \tag{3}

where the last step is due to $\mathbb{E}[\hat{\ell}_t \mid \mathcal{F}_{t-1}] = \mu_t^{\prime}$. We call the first part the skipping gap, and the second the FTRL error.

In the following sections, we will show that both parts can be controlled and transformed into expressions similar to the bounds with self-bounding properties in Zimmert & Seldin (2019), guaranteeing best-of-both-worlds style regret upper-bounds. Therefore, the design of HTINF and our new analysis generalize the self-bounding property of Zimmert & Seldin (2019) from the $1/2$-Tsallis entropy regularizer to general $1/\alpha$-Tsallis entropy regularizers with $1/\alpha \in [1/2, 1)$.

5.1. To Control the Skipping Gap

To control the skipping gap part, notice that for all $t \in [T], i \in [K]$ , we can bound

\begin{aligned} \mu_{t,i} - \mu_{t,i}^{\prime} &= \mathbb{E}\left[\ell_{t,i} \mathbb{1}[|\ell_{t,i}| > r_t] \mid \mathcal{F}_{t-1}, i_t = i\right] \leq \mathbb{E}\left[|\ell_{t,i}| \mathbb{1}[|\ell_{t,i}| > r_t] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ &\leq \mathbb{E}\left[|\ell_{t,i}|^{\alpha} r_t^{1-\alpha} \mid \mathcal{F}_{t-1}, i_t = i\right] \leq \sigma^{\alpha} r_t^{1-\alpha} = \Theta_{\alpha}^{1-\alpha} \sigma t^{1/\alpha - 1} x_{t,i}^{1/\alpha - 1} \end{aligned}

where $\Theta_{\alpha}$ is a factor in $r_t$ depending only on $\alpha$, as defined in Line 4 of Algorithm 1. Moreover, by Assumption 3.6, $\mu_{t,i^*} - \mu_{t,i^*}^{\prime} \geq 0$ a.s. Summing over $i$ and $t$ gives

\begin{aligned} \sum_{t=1}^{T} \langle x_t - \mathbf{e}_{i^*}, \mu_t - \mu_t^{\prime} \rangle &\leq \Theta_{\alpha}^{1-\alpha} \sigma \sum_{t=1}^{T} \sum_{i \neq i^*} t^{1/\alpha - 1} x_{t,i}^{1/\alpha} \\ &\leq 5\sigma \sum_{t=1}^{T} \sum_{i \neq i^*} t^{1/\alpha - 1} x_{t,i}^{1/\alpha} \qquad (4) \\ &\leq 10\sigma (T+1)^{1/\alpha} K^{1 - 1/\alpha}. \qquad (5) \end{aligned}

5.2. To Control the FTRL Error

For the FTRL error part, we follow the standard analysis of FTRL algorithms. Note that our skipping mechanism is equivalent to plugging $\hat{\ell}_t = \mathbf{0}$ into the FTRL framework for all skipped time steps $t$. Therefore, since $\mathbb{E}[\hat{\ell}_t \mid \mathcal{F}_{t-1}] = \mu_t^{\prime}$ by definition, we can leverage most standard techniques for the regret analysis of an FTRL algorithm and obtain the following lemma.

Lemma 5.1 (FTRL Regret Decomposition).

\sum_{t=1}^{T} \langle x_t - y, \hat{\ell}_t \rangle \leq \underbrace{\sum_{t=1}^{T} (\eta_t^{-1} - \eta_{t-1}^{-1})(\Psi(y) - \Psi(x_t))}_{\text{Part (A)}} + \underbrace{\sum_{t=1}^{T} \eta_t^{-1} D_{\Psi}(x_t, z_t)}_{\text{Part (B)}}

where

z_t \triangleq \nabla \Psi^{*}\left(\nabla \Psi(x_t) - \eta_t \mathbb{1}[|\ell_{t,i_t}| \leq r_t](\hat{\ell}_t - \ell_{t,i_t} \mathbf{1})\right).

In Lemma 5.1, $z_t$ is an intermediate action probability-like vector (which does not necessarily sum up to 1) in the FTRL algorithm. Here we leverage a trick of shifting the loss vectors (Wei & Luo, 2018): $\hat{\ell}_t^{\prime} \triangleq \hat{\ell}_t - \ell_{t,i_t} \mathbf{1}$. Intuitively, one can see that feeding $\hat{\ell}_t^{\prime}$ into an FTRL framework will produce exactly the same action sequence as $\hat{\ell}_t$.

We then divide this upper-bound in Lemma 5.1 into two parts, parts (A) and (B), and analyze them separately.

5.2.1. BOUND FOR PART (A)

As $y$ is a one-hot vector, we have $\Psi(y) = -\alpha$ for $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$. Hence, each summand in part (A) becomes

\begin{array}{l} \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) \left(- \alpha + \alpha \sum_ {i = 1} ^ {K} x _ {i} ^ {1 / \alpha}\right) \\ \leq 2 \sigma \frac {1}{\alpha} t ^ {1 / \alpha - 1} \cdot \alpha \sum_ {i \neq i ^ {*}} x _ {t, i} ^ {1 / \alpha} \\ \end{array}

due to the concavity of $t^{1 / \alpha}$ (Lemma E.6) and the fact that $x_{t,i} \leq 1$ . This further implies

\begin{array}{l} (\mathrm {A}) \leq \sum_ {t = 1} ^ {T} 2 \sigma t ^ {1 / \alpha - 1} \sum_ {i \neq i ^ {*}} x _ {t, i} ^ {1 / \alpha} \tag {6} \\ \leq 4 \sigma (T + 1) ^ {1 / \alpha} K ^ {1 - 1 / \alpha}. \tag {7} \\ \end{array}

5.2.2. BOUND FOR PART (B)

We can bound the expectation of each summand in part (B) as the following lemma states.

Lemma 5.2. Algorithm 1 ensures

\begin{array}{l} \mathbb {E} \left[ \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) \mid \mathcal {F} _ {t - 1} \right] \leq 8 \sigma t ^ {1 / \alpha - 1} \sum_ {i \neq i ^ {*}} x _ {t, i} ^ {1 / \alpha} \tag {8} \\ \leq 8 \sigma t ^ {1 / \alpha - 1} K ^ {1 - 1 / \alpha}. \tag {9} \\ \end{array}

5.3. Combining All Parts

In order to derive the claimed regret upper-bounds in Theorem 4.1, it suffices to plug in the bounds for the terms in Eq. (3) and Lemma 5.1.

Adversarial Case (Statement 1 in Theorem 4.1): To obtain an instance-independent bound for the expected total pseudo-regret $\mathcal{R}_T$ , we can plug inequalities (5), (7) and (9) into Eq. (3) to obtain

\mathcal {R} _ {T} \leq 30 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha}.

Stochastically Constrained Adversarial Case (Statement 2 in Theorem 4.1): To obtain an instance-dependent bound for $\mathcal{R}_T$, we leverage the bounds (6) and (8), which depend on the arm-pulling probabilities $\{x_t\}$, for the FTRL part of $\mathcal{R}_T$. Plugging them, together with (4), into (3), we see that

\mathcal {R} _ {T} \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \underbrace {15 \sigma \left(\frac {1}{t}\right) ^ {1 - 1 / \alpha} x _ {t , i} ^ {1 / \alpha}} _ {\triangleq s _ {t, i}} \right]. \tag {10}

We further apply the inequality of arithmetic and geometric means (AM-GM inequality) to $s_{t,i}$ , as

\begin{array}{l} s _ {t, i} = \left(\frac {\alpha \Delta_ {i}}{2} x _ {t, i}\right) ^ {\frac {1}{\alpha}} \left[ \left(\frac {\alpha \Delta_ {i}}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \frac {1}{t} \right] ^ {\frac {\alpha - 1}{\alpha}} \\ \leq \frac {\Delta_ {i}}{2} x _ {t, i} + \frac {\alpha - 1}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \frac {1}{t}. \\ \end{array}

By noticing the fact that $\sum_{t\in [T]}\sum_{i\neq i^*}\Delta_i\mathbb{E}[x_{t,i}]\leq \mathcal{R}_T$ (Lemma E.7), Eq. (10) solves to

\begin{array}{l} \mathcal {R} _ {T} \leq \frac {2 \alpha - 2}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \cdot \sum_ {i \neq i ^ {*}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \ln (T + 1) \\ = \exp \left(\mathcal {O} \left(\frac {1}{\alpha - 1}\right)\right) \sigma^ {\frac {\alpha}{\alpha - 1}} \sum_ {i \neq i ^ {*}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \ln (T + 1). \\ \end{array}
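In more detail, the last step is a self-bounding argument (made explicit in Appendix A): plugging the AM-GM bound on $s_{t,i}$ back into Eq. (10), the first term is absorbed into $\frac{1}{2}\mathcal{R}_T$ via Lemma E.7, while the harmonic sum over $t$ yields the logarithm, giving

```latex
\mathcal{R}_T \leq \frac{1}{2}\mathcal{R}_T
  + \sum_{i \neq i^*} \frac{\alpha - 1}{\alpha}
    \left(\frac{\alpha}{2}\right)^{-\frac{1}{\alpha - 1}}
    \left(\frac{30\sigma}{\alpha}\right)^{\frac{\alpha}{\alpha - 1}}
    \Delta_i^{-\frac{1}{\alpha - 1}} \ln(T + 1),
```

which rearranges to the stated bound after subtracting $\frac{1}{2}\mathcal{R}_T$ from both sides and doubling.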

6. Adaptive Algorithm: AdaTINF

In this section, our main goal is to achieve minimax optimal regret bounds for adversarial settings, without any knowledge of $\alpha, \sigma$. Instead of estimating $\alpha$ and $\sigma$ explicitly, which can be challenging, our key idea is to leverage a trade-off between Part (A) and Part (B) of the FTRL error (defined in Lemma 5.1) and to balance the two parts dynamically.

To achieve this balance, we use a doubling trick to tune the learning rates and skipping thresholds, a technique that has been adopted in the literature to design adaptive algorithms (see, e.g., Wei & Luo (2018)). The formal procedure of AdaTINF is given in Algorithm 3, with the crucial differences from Algorithm 1 highlighted in blue.

It can be seen as HTINF equipped with a multiplier applied to both the learning rates and the skipping thresholds, maintained at run time, as

\eta_ {t} ^ {- 1} = \lambda_ {t} \sqrt {t}, \quad r _ {t} = \lambda_ {t} \Theta_ {2} \sqrt {t} \sqrt {x _ {t , i _ {t}}}, \quad \forall 1 \leq t \leq T,

where $\lambda_{t}$ is the doubling magnitude for the $t$ -th time slot.

Algorithm 3 Adaptive Tsallis-INF (AdaTINF)
Input: Number of arms $K$ , time horizon $T$
Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$
1: Initialize $J \gets 0$ , $S_0 \gets 0$
2: for $t = 1, 2, \dots$ do
3: $\lambda_t \gets 2^J$
4: Calculate policy with learning rate $\eta_t^{-1} = \lambda_t \sqrt{t}$ and regularizer $\Psi(x) = -2 \sum_{i=1}^{K} x_i^{1/2}$:
x_t \gets \operatorname{argmin}_{x \in \triangle_{[K]}} \left( \eta_t \sum_{s=1}^{t-1} \langle \hat{\ell}_s, x \rangle + \Psi(x) \right)
5: Decide action $i_t \sim x_t$, calculate $r_t \gets \lambda_t (1 - 2^{-1/3}) \sqrt{t} \sqrt{x_{t,i_t}}$.
6: Play according to $i_t$ and observe loss feedback $\ell_{t,i_t}$.
7: if $|\ell_{t,i_t}| > r_t$ then
8: $\hat{\ell}_t \gets 0$
9: $c_t \gets \ell_{t,i_t}$
10: else
11: Construct weighted importance sampling loss estimator $\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{x_{t,i}} \mathbb{1}[i = i_t], \forall i \in [K]$.
12: $c_t \gets 2\eta_t x_{t,i_t}^{-1/2} \ell_{t,i_t}^2$
13: end if
14: $S_J \gets S_J + c_t$
15: if $2^J \sqrt{K(T+1)} < S_J$ then
16: $J \gets \max\{J+1, \lceil \log_2(c_t / \sqrt{K(T+1)}) \rceil + 1\}$
17: $S_J \gets c_t$
18: end if
19: end for

We briefly explain our design. Suppose, initially, all $\lambda_{t}$'s are set to the same number $\lambda > 1$ instead of 1. Then, part (A) will become approximately $\lambda$ times larger than under HTINF, while the expected value of part (B) will be scaled by a factor $\lambda^{1 - \alpha} < 1$. In other words, increasing $\lambda$ enlarges part (A) but shrinks part (B). Therefore, if we can estimate parts (A) and (B), we can keep them of roughly the same magnitude by doubling $\lambda$ whenever (A) becomes smaller than (B).
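The balancing rule of Lines 14-18 can be sketched as a small state update. The helper below is our own illustration, and it assumes the bracket in Line 16 denotes a ceiling:

```python
import math

def adatinf_doubling_step(J, S, c_t, K, T):
    """One step of AdaTINF's doubling-balance rule (Lines 14-18 of
    Algorithm 3): accumulate the cost c_t into the epoch counter S and,
    once S exceeds the Part-(A) budget 2**J * sqrt(K*(T+1)), open a new
    epoch, jumping far enough that even a single huge c_t fits it."""
    S += c_t
    if 2 ** J * math.sqrt(K * (T + 1)) < S:
        J = max(J + 1, math.ceil(math.log2(c_t / math.sqrt(K * (T + 1)))) + 1)
        S = c_t
    return J, S
```

For example, a single very large $c_t$ immediately pushes $J$ high enough that the new budget $2^J\sqrt{K(T+1)}$ covers it, rather than doubling one epoch at a time.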

As Eqs. (4) and (5) are similar to Eqs. (8) and (9), the skipping gap can be treated like part (B), so we also take it into account in the doubling-balance mechanism. Since the future-dependent bound in Eq. (6) is hard to estimate, we use the looser Eq. (7) to represent part (A). This prevents Algorithm 3 from enjoying an $\mathcal{O}(\log T)$-style gap-dependent regret; however, it still guarantees a minimax optimal regret in the general case, as described in Theorem 6.1.

Theorem 6.1 (Performance of AdaTINF). If Assumptions 3.1 and 3.6 hold, Algorithm 3 ensures a regret of

\mathcal {R} _ {T} \leq \mathcal {O} \left(\sigma K ^ {1 - 1 / \alpha} T ^ {1 / \alpha} + \sqrt {K T}\right),

which is minimax optimal.

The proof is sketched in Section 7, while the formal version is deferred to Appendix C.

Remark. Though $T$ is assumed to be known in Algorithm 3, this assumption can be removed via another doubling trick without affecting the order of the total regret; see Appendix D for details.

7. Analysis of AdaTINF

Since the crucial learning rate multiplier $\lambda_{t}$ is maintained by an adaptive doubling trick, in the analysis we group time slots with equal $\lambda_{t}$'s into epochs. For $j \geq 0$, $\mathcal{T}_j \triangleq \{t \in [T] \mid \lambda_t = 2^j\}$ is the set of time slots belonging to epoch $j$. Further denote the first step in epoch $j$ by $\gamma_j \triangleq \min\{t \in \mathcal{T}_j\}$ and the last one by $\tau_j \triangleq \max\{t \in \mathcal{T}_j\}$. Without loss of generality, assume no doubling happened at slot $T$; then the final value of $J$ in Algorithm 3 is exactly the index of the last non-empty epoch.

We will first show

\mathcal {R} _ {T} \leq \mathcal {O} \left(\mathbb {E} \left[ 2 ^ {J} \right] \sqrt {K T}\right).

As defined in the pseudo-code, let

c _ {t} \triangleq 2 \eta_ {t} x _ {t, i _ {t}} ^ {- 1 / 2} \ell_ {t, i _ {t}} ^ {2} \mathbb {1} [ | \ell_ {t, i _ {t}} | \leq r _ {t} ] + \ell_ {t, i _ {t}} \mathbb {1} [ | \ell_ {t, i _ {t}} | > r _ {t} ].

According to the condition to enter a new epoch (Line 15 in Algorithm 3), for all $0 \leq j < J$ , if $\mathcal{T}_j$ is non-empty, $\tau_j$ will cause $S_j > 2^j \sqrt{K(T + 1)}$ . Hence, we have the following conditions:

\mathbb {1} \left[ \gamma_ {j} > 1 \right] c _ {\gamma_ {j} - 1} + \sum_ {t \in \mathcal {T} _ {j} \backslash \{\tau_ {j} \}} c _ {t} \leq 2 ^ {j} \sqrt {K (T + 1)}, \tag {11}

\sum_ {t \in \mathcal {T} _ {j}} c _ {t} > 2 ^ {j - 1} \sqrt {K (T + 1)}. \tag {12}

For $j = J$ , as no doubling has happened after that, we have

\mathbb {1} \left[ \gamma_ {J} > 1 \right] c _ {\gamma_ {J} - 1} + \sum_ {t \in \mathcal {T} _ {J}} c _ {t} \leq 2 ^ {J} \sqrt {K (T + 1)}. \tag {13}

Similar to Eq. (3) used in Section 5, we begin with the following decomposition of $\mathcal{R}_T(y)$ for $y = \mathbf{e}_{i^*}$:

\mathcal {R} _ {T} (y) = \underbrace {\mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y , \mu_ {t} - \mu_ {t} ^ {\prime} \rangle \right]} _ {\mathcal {R} _ {T} ^ {s}} + \underbrace {\mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y , \hat {\ell} _ {t} \rangle \right]} _ {\mathcal {R} _ {T} ^ {f}} \tag {14}

where $\mu_{t,i}^{\prime}\triangleq \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t]\mid \mathcal{F}_{t - 1},i_t = i]$ . We still call $\mathcal{R}_T^s$ the skipping gap and $\mathcal{R}_T^f$ the FTRL error.

According to Lemma 5.1, we have

\begin{array}{l} \mathcal {R} _ {T} ^ {f} \leq \mathbb {E} \left[ \underbrace {\eta_ {T} \max _ {x \in \Delta_ {[ K ]}} \Psi (x)} _ {\text {Part (A)}} \right] + \mathbb {E} \left[ \underbrace {\sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t} , z _ {t}\right)} _ {\text {Part (B)}} \right] \\ \leq \mathbb {E} [ 2 ^ {J} ] \sqrt {K (T + 1)} + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) \right] \tag {15} \\ \end{array}

Similar to Algorithm 1, we can show $D_{\Psi}(x_t,z_t)\leq 2\eta_{t}x_{t,i_{t}}^{-1 / 2}\ell_{t,i_{t}}^{2}\mathbb{1}[|\ell_{t,i_{t}}|\leq r_{t}]$ for all $t\in [T]$. Moreover, by Assumption 3.6, $\mathcal{R}_T^s\leq \mathbb{E}[\sum_{t = 1}^T\ell_{t,i_t}\mathbb{1}[|\ell_{t,i_t}| > r_t]]$. Therefore, with the help of Eqs. (11) and (13), we have

\begin{array}{l} \mathcal {R} _ {T} ^ {s} + \mathbb {E} [ \text {Part (B)} ] \\ \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} c _ {t} \right] \\ \leq \mathbb {E} \left[ 2 ^ {J + 1} \sqrt {K (T + 1)} \right]. \tag {16} \\ \end{array}

Combining Eq. (14), (15) and (16) gives

\mathcal {R} _ {T} \leq \mathbb {E} \left[ 2 ^ {J} \right] \cdot 3 \sqrt {K (T + 1)}.

Therefore, it remains to bound $\mathbb{E}[2^J]$. When $J = 0$, there is nothing to do. Otherwise, consider the second-to-last non-empty epoch, $\mathcal{T}_{J'}$. The condition to enter a new epoch also guarantees that $2^J \sqrt{K(T + 1)} \leq 2^{J' + 1} \sqrt{K(T + 1)} + 4c_{\tau_{J'}}$. Applying Eq. (12) to $J' < J$, we obtain

\mathbb {1} [ J \geq 1 ] (2 ^ {J ^ {\prime}}) ^ {\alpha} \sqrt {K (T + 1)} \leq \left(2 ^ {J ^ {\prime}}\right) ^ {\alpha - 1} 2 \sum_ {t \in \mathcal {T} _ {J ^ {\prime}}} c _ {t}, \tag {17}

After appropriately relaxing the RHS of Eq. (17) and taking expectations on both sides, it solves to the following upper bound for $\mathbb{E}[\mathbb{1}[J\geq 1]2^{J^{\prime}}]$:

Lemma 7.1. Algorithm 3 guarantees that

\mathbb {E} [ \mathbb {1} [ J \geq 1 ] 2 ^ {J ^ {\prime}} ] \leq 28 \sigma K ^ {1 / 2 - 1 / \alpha} (T + 1) ^ {1 / \alpha - 1 / 2}.

Moreover, we can obtain a bound for $\mathbb{E}[c_{\tau_{J'}}]$ stated as follows:

Lemma 7.2. Algorithm 3 guarantees that

\mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] c _ {\tau_ {J ^ {\prime}}} \right] \leq 0.1 \, \mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] 2 ^ {J ^ {\prime}} \sqrt {T} \right] + 4 \, \mathbb {E} \left[ \max _ {t \in [ T ]} \left| \ell_ {t, i _ {t}} \right| \right].

Using the fact that $\mathbb{E}[\max_{t\in [T]}\lvert\ell_{t,i_t}\rvert]\leq \sigma T^{1 / \alpha}$ (Lemma E.3), we conclude that Algorithm 3 has the regret guarantee of

\begin{array}{l} \mathcal {R} _ {T} \leq 3 \mathbb {E} \left[ 2 ^ {J} \sqrt {K (T + 1)} \right] \\ \leq 3 \sqrt {K (T + 1)} + 204 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha} + 12 \sigma T ^ {1 / \alpha}. \\ \end{array}

8. Conclusion

We propose HTINF, a novel algorithm achieving the optimal instance-dependent regret bound for the stochastic heavy-tailed MAB problem, and the optimal instance-independent regret bound for a more general adversarial setting, without extra logarithmic factors. We also propose AdaTINF, which achieves the same optimal instance-independent regret even when prior knowledge of the heavy-tail parameters $\alpha, \sigma$ is absent. Our work shows that the FTRL (or OMD) technique can be a powerful tool for designing heavy-tailed MAB algorithms, leading to novel theoretical results that have not been achieved by UCB-style algorithms.

An interesting direction for future work is to determine whether one can design a best-of-both-worlds algorithm without knowing the actual heavy-tail distribution parameters $\alpha$ and $\sigma$.

Acknowledgment

This work is supported by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2020AAA0108400 and 2020AAA0108403.

References

Abernethy, J. D., Lee, C., and Tewari, A. Fighting bandits with a new kind of smoothness. Advances in Neural Information Processing Systems, 28, 2015.
Auer, P. and Chiang, C.-K. An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits. In Conference on Learning Theory, pp. 116-120. PMLR, 2016.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pp. 322-331. IEEE, 1995.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1):48-77, 2002.
Besson, L. and Kaufmann, E. What doubling tricks can and can't do for multi-armed bandits. arXiv preprint arXiv:1803.06971, 2018.
Bubeck, S. and Slivkins, A. The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, pp. 42-1. JMLR Workshop and Conference Proceedings, 2012.
Bubeck, S., Cesa-Bianchi, N., and Lugosi, G. Bandits with heavy tail. IEEE Transactions on Information Theory, 59 (11):7711-7717, 2013.
Bubeck, S., Li, Y., Luo, H., and Wei, C.-Y. Improved path-length regret bounds for bandits. In Conference On Learning Theory, pp. 508-528. PMLR, 2019.
De Rooij, S., Van Erven, T., Grünwald, P. D., and Koolen, W. M. Follow the leader if you can, hedge if you must. The Journal of Machine Learning Research, 15(1):1281-1316, 2014.
Erez, L. and Koren, T. Best-of-all-worlds bounds for online learning with feedback graphs. arXiv preprint arXiv:2107.09572, 2021.
Garivier, A. and Cappé, O. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of the 24th Annual Conference on Learning Theory, pp. 359-376. JMLR Workshop and Conference Proceedings, 2011.
Hadiji, H. and Stoltz, G. Adaptation to the range in $k$ -armed bandits. arXiv preprint arXiv:2006.03378, 2020.
Ito, S. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Conference on Learning Theory, pp. 2552-2583. PMLR, 2021.

Jin, T. and Luo, H. Simultaneously learning stochastic and adversarial episodic mdps with known transition. Advances in neural information processing systems, 33: 16557-16566, 2020.
Jin, T., Huang, L., and Luo, H. The best of both worlds: stochastic and adversarial episodic mdps with unknown transition. Advances in Neural Information Processing Systems, 34, 2021.
Kagrecha, A., Nair, J., and Jagannathan, K. Distribution oblivious, risk-aware algorithms for multi-armed bandits with unbounded rewards. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 11272–11281, 2019.
Lattimore, T. Optimally confident ucb: Improved regret for finite-armed bandits. arXiv preprint arXiv:1507.07880, 2015.
Lee, K., Yang, H., Lim, S., and Oh, S. Optimal algorithms for stochastic multi-armed bandits with heavy tailed rewards. Advances in Neural Information Processing Systems, 33:8452-8462, 2020.
Lu, S., Wang, G., Hu, Y., and Zhang, L. Optimal algorithms for lipschitz bandits with heavy-tailed rewards. In International Conference on Machine Learning, pp. 4154-4163. PMLR, 2019.
Luo, H. and Schapire, R. E. Achieving all with no parameters: Adanormalhedge. In Conference on Learning Theory, pp. 1286-1304. PMLR, 2015.
Medina, A. M. and Yang, S. No-regret algorithms for heavily-tailed linear bandits. In International Conference on Machine Learning, pp. 1642-1650. PMLR, 2016.
Orabona, F. and Pal, D. Coin betting and parameter-free online learning. Advances in Neural Information Processing Systems, 29:577-585, 2016.
Orabona, F. and Pál, D. Scale-free online learning. Theoretical Computer Science, 716:50-69, 2018.
Seldin, Y. and Lugosi, G. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits. In Conference on Learning Theory, pp. 1743-1759. PMLR, 2017.
Seldin, Y. and Slivkins, A. One practical algorithm for both stochastic and adversarial bandits. In International Conference on Machine Learning, pp. 1287-1295. PMLR, 2014.
Shao, H., Yu, X., King, I., and Lyu, M. R. Almost optimal algorithms for linear stochastic bandits with heavy-tailed payoffs. Advances in Neural Information Processing Systems, 31, 2018.

Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1):479-487, 1988.
Vakili, S., Liu, K., and Zhao, Q. Deterministic sequencing of exploration and exploitation for multi-armed bandit problems. IEEE Journal of Selected Topics in Signal Processing, 7(5):759-767, 2013.
Wei, C.-Y. and Luo, H. More adaptive algorithms for adversarial bandits. In Conference On Learning Theory, pp. 1263-1291. PMLR, 2018.
Xue, B., Wang, G., Wang, Y., and Zhang, L. Nearly optimal regret for stochastic linear bandits with heavy-tailed payoffs. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pp. 2936-2942, 2021.
Zimmert, J. and Seldin, Y. An optimal algorithm for stochastic and adversarial bandits. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 467-475. PMLR, 2019.

Supplementary Materials: Proofs and Discussions

A Formal Analysis of HTINF (Algorithm 1)

A.1 Main Theorem
A.2 Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)
A.3 Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)

B Formal Analysis of OptTINF (Algorithm 2)

B.1 Main Theorem
B.2 Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)
B.3 Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)

C Formal Analysis of AdaTINF (Algorithm 3)

C.1 Main Theorem
C.2 Proof when Reducing $\mathcal{R}_T$ to $\mathbb{E}[2^J]$
C.3 Proof when Bounding $\mathbb{E}[2^J]$

D Removing Dependency on Time Horizon $T$ in Algorithm 3

E Auxiliary Lemmas

E.1 Probability Lemmas
E.2 Arithmetic Lemmas
E.3 Lemmas on the FTRL Framework for MAB Algorithm Design

A. Formal Analysis of HTINF (Algorithm 1)

A.1. Main Theorem

In this section, we present a formal proof of Theorem 4.1. For the sake of accuracy, we state the regret guarantees without using any big-Oh notations, as follows (which directly implies Theorem 4.1).

Theorem A.1 (Regret Guarantee of Algorithm 1). Suppose Assumptions 3.1 and 3.6 hold, i.e., the environment is heavy-tailed with parameters $\alpha$ and $\sigma$, and there is an optimal arm whose losses are all truncated non-negative. Then Algorithm 1 guarantees:

  1. The regret is no more than

\mathcal {R} _ {T} \leq 30 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha},

regardless of whether the environment is stochastic or adversarial.

  2. Furthermore, if the environment is stochastically constrained with a unique best arm $i^{*}$, i.e., Assumption 3.5 holds, then it additionally enjoys a regret bound of

\mathcal {R} _ {T} \leq \frac {2 \alpha - 2}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \sum_ {i \neq i ^ {*}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \ln (T + 1).

Proof. Define $\mu_{t,i}^{\prime}\triangleq \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t] \mid \mathcal{F}_{t - 1},i_t = i]$. For the given $y = \mathbf{e}_{i^{*}}\in \triangle_{[K]}$, consider the regret of the algorithm with respect to policy $y$, defined and decomposed as

\mathcal {R} _ {T} (y) \triangleq \sum_ {t = 1} ^ {T} \mathbb {E} [ \langle x _ {t} - y, \mu_ {t} \rangle ] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \mu_ {t} - \mu_ {t} ^ {\prime} \rangle \right] + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \mu_ {t} ^ {\prime} \rangle \right] \triangleq \mathcal {R} _ {T} ^ {s} (y) + \mathcal {R} _ {T} ^ {f} (y),

which we call the skipped part and the FTRL part, respectively. For simplicity, we omit the argument $(y)$ in $\mathcal{R}_T^s$ and $\mathcal{R}_T^f$.

As defined in Algorithm 1, $\hat{\ell}_t$ is set to 0 when $|\ell_{t,i_t}| > r_t$. Hence, by the property of the weighted importance sampling estimator (Lemma E.8; note that it is applied to the truncated loss with mean $\mu_{t,i}^{\prime}$), we have $\mathbb{E}[\hat{\ell}_{t,i} \mid \mathcal{F}_{t-1}] = \mu_{t,i}^{\prime}$ and therefore

\mathcal {R} _ {T} ^ {f} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \hat {\ell} _ {t} \rangle \right].

We bound the first term, $\mathcal{R}_T^s$, using the following two lemmas, whose proofs are deferred to the next subsection.

Lemma A.2. For any $1 \leq t \leq T$ and $i \in [K]$ , we have

\mu_ {t, i} - \mu_ {t, i} ^ {\prime} \leq \Theta_ {\alpha} ^ {1 - \alpha} \sigma t ^ {1 / \alpha - 1} x _ {t, i} ^ {1 / \alpha - 1},

where $\Theta_{\alpha}$ is a constant used in Algorithm 1 that only depends on $\alpha$ .

Lemma A.3. If $i^{*}$ is an optimal arm whose losses are all truncated non-negative, then for $y = \mathbf{e}_{i^{*}}$, we have

\mathcal {R} _ {T} ^ {s} (y) \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} x _ {t, i} \left(\mu_ {t, i} - \mu_ {t, i} ^ {\prime}\right) \right].

Therefore, for $y = \mathbf{e}_{i^*}$ we have

\mathcal {R} _ {T} ^ {s} \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \Theta_ {\alpha} ^ {1 - \alpha} \sigma t ^ {1 / \alpha - 1} x _ {t, i} ^ {1 / \alpha} \right]

\stackrel {(a)} {\leq} \mathbb {E} \left[ 5 \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \sigma t ^ {1 / \alpha - 1} x _ {t, i} ^ {1 / \alpha} \right] \tag {18}

\stackrel {(b)} {\leq} 5 \alpha \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha}, \tag {19}

where step (a) is due to $\Theta_{\alpha}^{1 - \alpha}\leq \Theta_2^{-1}\leq 5$ and (b) applies Lemma E.4 and Lemma E.5.

Now consider the second term, $\mathcal{R}_T^f$. Consider the vector $\hat{\ell}_t^\prime \triangleq \mathbb{1}[|\ell_{t,i_t}| \leq r_t](\hat{\ell}_t - \ell_{t,i_t}\mathbf{1})$. Note that $\langle \hat{\ell}_t^\prime, x\rangle = \langle \hat{\ell}_t, x\rangle - \mathbb{1}[|\ell_{t,i_t}| \leq r_t]\ell_{t,i_t}$ for any vector $x \in \triangle_{[K]}$, so an FTRL algorithm fed with the loss vectors $\hat{\ell}_t^\prime$ will produce exactly the same action sequence as another instance fed with $\hat{\ell}_t$ (as constant terms never affect the choice of the argmin operator over the simplex). Therefore, we can apply Lemma E.9 with loss vectors $\hat{\ell}_t^\prime$, yielding

\sum_ {t = 1} ^ {T} \left\langle x _ {t} - y, \hat {\ell} _ {t} \right\rangle = \sum_ {t = 1} ^ {T} \left\langle x _ {t} - y, \hat {\ell} _ {t} ^ {\prime} \right\rangle \leq \underbrace {\sum_ {t = 1} ^ {T} \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) (\Psi (y) - \Psi (x _ {t}))} _ {\text {Part (A)}} + \underbrace {\sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t} , z _ {t}\right)} _ {\text {Part (B)}} \tag {20}

where $z_{t} \triangleq \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\hat{\ell}_{t}^{\prime}) = \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\mathbb{1}[|\ell_{t,i_{t}}| \leq r_{t}](\hat{\ell}_{t} - \ell_{t,i_{t}}\mathbf{1}))$.

Now consider the first term $\sum_{t=1}^{T} (\eta_t^{-1} - \eta_{t-1}^{-1})(\Psi(y) - \Psi(x_t))$ , which is denoted by (A) for simplicity. We have

Lemma A.4. For part (A), Algorithm 1 ensures the following inequality for any one-hot vector $y \in \triangle_{[K]}$ :

\mathbb {E} [ (A) ] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \mathbb {E} \left[ \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) (\Psi (y) - \Psi (x _ {t})) \mid \mathcal {F} _ {t - 1} \right] \right] \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} 2 \sigma t ^ {1 / \alpha - 1} \sum_ {i \neq i ^ {*}} x _ {t, i} ^ {1 / \alpha} \right], \tag {21}

which further implies

\mathbb {E} [ (A) ] \leq \sum_ {t = 1} ^ {T} 2 \sigma t ^ {1 / \alpha - 1} K ^ {1 - 1 / \alpha}. \tag {22}

For the second term, denoted by (B), we have

Lemma A.5 (Restatement of Lemma 5.2). For Part (B), Algorithm 1 ensures

\mathbb {E} [ (B) ] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \mathbb {E} \left[ \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) \mid \mathcal {F} _ {t - 1} \right] \right] \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} 8 \sigma t ^ {1 / \alpha - 1} \sum_ {i \neq i ^ {*}} x _ {t, i} ^ {1 / \alpha} \right], \tag {23}

which further implies

\mathbb {E} [ (B) ] \leq \sum_ {t = 1} ^ {T} 8 \sigma t ^ {1 / \alpha - 1} K ^ {1 - 1 / \alpha}. \tag {24}

Hence, for general cases, due to Equations (22) and (24) we have

\mathcal {R} _ {T} ^ {f} = \mathbb {E} [ (\mathrm {A}) ] + \mathbb {E} [ (\mathrm {B}) ] \leq 10 \sigma K ^ {1 - 1 / \alpha} \sum_ {t = 1} ^ {T} t ^ {1 / \alpha - 1} \leq 10 \alpha \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha},

where the last inequality comes from Lemma E.5. Therefore, taking (19) into consideration, we have:

\mathcal {R} _ {T} = \mathcal {R} _ {T} ^ {s} + \mathcal {R} _ {T} ^ {f} \leq 15 \alpha \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha} \leq 30 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha}.

Now, for the stochastically constrained adversarial case with a unique best arm $i^*$ throughout the process, due to Equations (18), (21) and (23), we have

\mathcal {R} _ {T} = \mathcal {R} _ {T} ^ {s} + \mathbb {E} [ (\mathrm {A}) ] + \mathbb {E} [ (\mathrm {B}) ] \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \underbrace {15 \sigma t ^ {1 / \alpha - 1} x _ {t , i} ^ {1 / \alpha}} _ {\triangleq s _ {t, i}} \right].

We can then write

\begin{array}{l} s _ {t, i} = \left(\frac {\alpha}{2} \Delta_ {i} x _ {t, i}\right) ^ {1 / \alpha} \left[ \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \frac {1}{t} \right] ^ {\frac {\alpha - 1}{\alpha}} \\ \leq \frac {\Delta_ {i}}{2} x _ {t, i} + \frac {\alpha - 1}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \frac {1}{t} \\ \end{array}

where the last step uses the inequality of arithmetic and geometric means $a^{1 / \alpha}b^{1 - 1 / \alpha} \leq \frac{1}{\alpha} a + \left(1 - \frac{1}{\alpha}\right)b$ . Therefore

\mathcal {R} _ {T} \leq \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \frac {\Delta_ {i}}{2} x _ {t, i} \right] + \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \frac {\alpha - 1}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \frac {1}{t}

\leq \frac {1}{2} \mathcal {R} _ {T} + \sum_ {i \neq i ^ {*}} \frac {\alpha - 1}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \ln (T + 1) \tag {25}

where the last step uses Lemma E.7. Equation (25) then solves to

\mathcal {R} _ {T} \leq \sum_ {i \neq i ^ {*}} \frac {2 \alpha - 2}{\alpha} \left(\frac {\alpha}{2}\right) ^ {- \frac {1}{\alpha - 1}} \left(\frac {30 \sigma}{\alpha}\right) ^ {\frac {\alpha}{\alpha - 1}} \Delta_ {i} ^ {- \frac {1}{\alpha - 1}} \ln (T + 1),

as claimed.
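The weighted AM-GM step used in the proof above can also be sanity-checked numerically. The following is a quick standalone check of ours (not part of the paper's analysis):

```python
# Numeric sanity check of the weighted AM-GM inequality
# a**(1/alpha) * b**(1 - 1/alpha) <= a/alpha + (1 - 1/alpha)*b  for a, b >= 0, alpha > 1.
for alpha in (1.1, 1.5, 2.0):
    for a in (0.0, 0.3, 1.0, 7.0):
        for b in (0.0, 0.5, 2.0, 9.0):
            lhs = a ** (1 / alpha) * b ** (1 - 1 / alpha)
            rhs = a / alpha + (1 - 1 / alpha) * b
            assert lhs <= rhs + 1e-12, (alpha, a, b)
```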

Proof of Theorem 4.1. It is a direct consequence of the theorem above.

A.2. Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)

Proof of Lemma A.2. Starting from the definition of $\mu_{t,i}^{\prime}$ and $\mu_{t,i}$ , we can write

\begin{array}{l} \mu_{t,i} - \mu_{t,i}' = \mathbb{E}\left[\ell_{t,i_t} \mid \mathcal{F}_{t-1}, i_t = i\right] - \mathbb{E}\left[\ell_{t,i_t} \cdot \mathbb{1}\left[|\ell_{t,i_t}| \leq r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ = \mathbb{E}\left[\ell_{t,i_t} \cdot \mathbb{1}\left[|\ell_{t,i_t}| > r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ \leq \mathbb{E}\left[|\ell_{t,i_t}| \cdot \mathbb{1}\left[|\ell_{t,i_t}| > r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ \leq \mathbb{E}\left[|\ell_{t,i_t}|^\alpha r_t^{1-\alpha} \cdot \mathbb{1}\left[|\ell_{t,i_t}| > r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ \leq \mathbb{E}\left[|\ell_{t,i_t}|^\alpha r_t^{1-\alpha} \mid \mathcal{F}_{t-1}, i_t = i\right] \\ \stackrel{(a)}{=} \mathbb{E}\left[|\ell_{t,i}|^\alpha \Theta_\alpha^{1-\alpha} \sigma^{1-\alpha} t^{\frac{1-\alpha}{\alpha}} x_{t,i}^{\frac{1-\alpha}{\alpha}} \mid \mathcal{F}_{t-1}\right] \\ \leq \sigma \Theta_\alpha^{1-\alpha} t^{\frac{1-\alpha}{\alpha}} x_{t,i}^{\frac{1-\alpha}{\alpha}} \end{array}

where in step (a) we plug in $r_t = \Theta_\alpha \eta_t^{-1} x_{t,i_t}^{1 / \alpha}$ .
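The chain of inequalities above hinges on the elementary fact that $x \leq x^{\alpha} r^{1-\alpha}$ whenever $x > r > 0$ and $\alpha > 1$ (raise $x/r > 1$ to the power $\alpha - 1 \geq 0$). A quick numeric check of this step, ours rather than the paper's:

```python
# Check: for x > r > 0 and alpha > 1, x <= x**alpha * r**(1 - alpha),
# i.e. the truncated tail |l| * 1[|l| > r] is dominated by |l|**alpha * r**(1 - alpha).
for alpha in (1.2, 1.5, 2.0):
    for r in (0.5, 3.0):
        for x in (r * 1.0001, r + 1.0, 100.0):
            assert x <= x ** alpha * r ** (1 - alpha) + 1e-9, (alpha, r, x)
```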

Proof of Lemma A.3. Recall that $\mu_{t,i} - \mu_{t,i}' = \mathbb{E}\left[\ell_{t,i} \cdot \mathbb{1}[|\ell_{t,i}| > r_t] \mid \mathcal{F}_{t-1}, i_t = i\right]$, hence according to our assumption that $\ell_{t,i^*}$ is truncated non-negative (Assumption 3.6), we have $\mu_{t,i^*} - \mu_{t,i^*}' \geq 0$ a.s. Thus, when $y = \mathbf{e}_{i^*}$,

\left(x_{t,i^*} - y_{i^*}\right) \cdot \left(\mu_{t,i^*} - \mu_{t,i^*}'\right) = \left(x_{t,i^*} - 1\right) \cdot \left(\mu_{t,i^*} - \mu_{t,i^*}'\right) \leq 0.

Therefore

\begin{array}{l} \mathcal{R}_T^s(y) = \mathbb{E}\left[\sum_{t=1}^T \langle x_t - y, \mu_t - \mu_t' \rangle\right] \\ \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} \left(x_{t,i} - y_i\right) \cdot \left(\mu_{t,i} - \mu_{t,i}'\right)\right] \\ = \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} x_{t,i} \left(\mu_{t,i} - \mu_{t,i}'\right)\right], \end{array}

as claimed.

A.3. Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)

For our purpose, we need a technical lemma stating that the components of $z_{t}$ are at most a constant times larger than $x_{t}$ 's components.

Lemma A.6. For any $t \in [T]$ and $i \in [K]$ , Algorithm 1 guarantees that

z_{t,i} \leq 2^{\frac{\alpha}{2\alpha-1}} x_{t,i}

where $z_{t}\triangleq \nabla \Psi^{*}(\nabla \Psi (x_{t}) - \eta_{t}\mathbb{1}[|\ell_{t,i_{t}}|\leq r_{t}](\hat{\ell}_{t} - \ell_{t,i_{t}}\mathbf{1}))$

Proof. If $|\ell_{t,i_t}| > r_t$, then $x_{t} = z_{t}$. Otherwise, denote $\nabla \Psi(x_t)$ by $x_t^*$ and $\nabla \Psi(z_t)$ by $z_t^*$; then we have $-x_{t,i}^{*} = x_{t,i}^{-\frac{\alpha-1}{\alpha}}$ and

\begin{array}{l} z_{t,i}^* = x_{t,i}^* - \eta_t \hat{\ell}_{t,i} + \eta_t \ell_{t,i_t} \\ = \left\{\begin{array}{ll} x_{t,i}^* - \eta_t \frac{\ell_{t,i}}{x_{t,i}} + \eta_t \ell_{t,i} & i = i_t \\ x_{t,i}^* + \eta_t \ell_{t,i_t} & i \neq i_t. \end{array}\right. \end{array}

If $i = i_t$ , we have

-z_{t,i}^* \geq -x_{t,i}^* - \eta_t \frac{|\ell_{t,i}|}{x_{t,i}} = x_{t,i}^{-\frac{\alpha-1}{\alpha}} - \eta_t \frac{|\ell_{t,i}|}{x_{t,i}} \geq x_{t,i}^{-\frac{\alpha-1}{\alpha}} - \Theta_\alpha x_{t,i}^{\frac{1-\alpha}{\alpha}},

where the last step is due to $|\ell_{t,i_t}|\leq r_t$ and our choice of $r_t$ in Algorithm 1. Thus

z_{t,i} = \left(-z_{t,i}^*\right)^{-\frac{\alpha}{\alpha-1}} \leq x_{t,i} \left(1 - \Theta_\alpha\right)^{-\frac{\alpha}{\alpha-1}} \leq 2^{\frac{\alpha}{2\alpha-1}} x_{t,i}

where the last step is because $\Theta_{\alpha} \leq 1 - 2^{-\frac{\alpha - 1}{2\alpha - 1}}$ .

If $i \neq i_t$ , we have $-z_{t,i}^* \geq -x_{t,i}^* - \Theta_\alpha x_{t,i_t}^{1/\alpha} \geq x_{t,i}^{-\frac{\alpha - 1}{\alpha}} - \Theta_\alpha$ , thus

z_{t,i} = \left(-z_{t,i}^*\right)^{-\frac{\alpha}{\alpha-1}} \leq x_{t,i} \left(1 - \Theta_\alpha x_{t,i}^{\frac{\alpha-1}{\alpha}}\right)^{-\frac{\alpha}{\alpha-1}} \leq x_{t,i} \left(1 - \Theta_\alpha\right)^{-\frac{\alpha}{\alpha-1}} \leq 2^{\frac{\alpha}{2\alpha-1}} x_{t,i}.

Combining the two cases gives our conclusion.
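The constant in the last steps can be verified numerically: at the boundary value $\Theta_\alpha = 1 - 2^{-\frac{\alpha-1}{2\alpha-1}}$ (the largest value the lemma allows), $(1-\Theta_\alpha)^{-\frac{\alpha}{\alpha-1}}$ equals $2^{\frac{\alpha}{2\alpha-1}}$ exactly. A standalone check of ours:

```python
# At Theta = 1 - 2**(-(alpha-1)/(2*alpha-1)), we have exactly
# (1 - Theta)**(-alpha/(alpha-1)) == 2**(alpha/(2*alpha-1)).
for alpha in (1.1, 1.5, 2.0):
    theta = 1 - 2 ** (-(alpha - 1) / (2 * alpha - 1))
    lhs = (1 - theta) ** (-alpha / (alpha - 1))
    rhs = 2 ** (alpha / (2 * alpha - 1))
    assert abs(lhs - rhs) < 1e-9, alpha
```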

Proof of Lemma A.4. By definition, for any $t \in [T]$ , one-hot $y \in \triangle_{[K]}$ and $x_t \in \triangle_{[K]}$ , we have

\eta_t^{-1} - \eta_{t-1}^{-1} = \sigma\left(t^{1/\alpha} - (t-1)^{1/\alpha}\right) \stackrel{(a)}{\leq} \sigma \frac{1}{\alpha} (t-1)^{1/\alpha - 1} \stackrel{(b)}{\leq} 2\sigma \frac{1}{\alpha} t^{1/\alpha - 1},

where (a) comes from Lemma E.6 and (b) comes from the fact that $t \geq 1$ and $\frac{1}{\alpha} - 1 \geq -\frac{1}{2}$ . Moreover, by definition of $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$ , we have

\Psi(y) - \Psi(x_t) = \alpha \sum_{i=1}^K x_{t,i}^{1/\alpha} - \alpha \sum_{i=1}^K y_i^{1/\alpha} = \alpha \sum_{i=1}^K x_{t,i}^{1/\alpha} - \alpha \leq \alpha \sum_{i \neq i^*} x_{t,i}^{1/\alpha}

from the assumption that $y$ is a one-hot vector. Therefore, we have

\mathbb{E}[(\mathrm{A})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\left(\eta_t^{-1} - \eta_{t-1}^{-1}\right)\left(\Psi(y) - \Psi(x_t)\right) \mid \mathcal{F}_{t-1}\right]\right] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} 2\sigma t^{1/\alpha - 1} x_{t,i}^{1/\alpha}\right],

which further implies (by Lemma E.4)

\mathbb{E}[(\mathrm{A})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\left(\eta_t^{-1} - \eta_{t-1}^{-1}\right)\left(\Psi(y) - \Psi(x_t)\right) \mid \mathcal{F}_{t-1}\right]\right] \leq \sum_{t=1}^T 2\sigma t^{1/\alpha - 1} K^{1-1/\alpha}.
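The two elementary steps (a) and (b) at the start of this proof can be checked numerically for $t \geq 2$ (at $t = 1$ they hold trivially). This is a standalone check of ours, not the paper's Lemma E.6:

```python
# For t >= 2 and 1 < a <= 2:
# (a) t**(1/a) - (t-1)**(1/a) <= (1/a) * (t-1)**(1/a - 1)   (concavity of x**(1/a))
# (b) (1/a) * (t-1)**(1/a - 1) <= 2 * (1/a) * t**(1/a - 1)  (since 1/a - 1 >= -1/2)
for a in (1.1, 1.5, 2.0):
    for t in range(2, 500):
        assert t ** (1 / a) - (t - 1) ** (1 / a) <= (1 / a) * (t - 1) ** (1 / a - 1) + 1e-12
        assert (1 / a) * (t - 1) ** (1 / a - 1) <= 2 * (1 / a) * t ** (1 / a - 1) + 1e-12
```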

Proof of Lemma A.5 (and also Lemma 5.2). Consider a summand before taking expectation, i.e., $\eta_t^{-1}D_\Psi(x_t,z_t)$. Letting $f(x) = -\alpha x^{1/\alpha}$, we then have

\eta_t^{-1} D_\Psi(x_t, z_t) \stackrel{(a)}{=} \eta_t^{-1} D_{\Psi^*}\left(\nabla\Psi(z_t), \nabla\Psi(x_t)\right)

\begin{array}{l} = \Psi^*(\nabla\Psi(z_t)) - \Psi^*(\nabla\Psi(x_t)) - \langle x_t, \nabla\Psi(z_t) - \nabla\Psi(x_t) \rangle \\ \stackrel{(b)}{\leq} \eta_t^{-1} \sum_{i=1}^K \frac{1}{2} \max\left\{f''(x_{t,i})^{-1}, f''(z_{t,i})^{-1}\right\} \cdot \eta_t^2 \left(\hat{\ell}_{t,i} - \ell_{t,i_t}\right)^2 \\ \leq \eta_t^{-1} \sum_{i=1}^K \frac{\alpha}{2(\alpha-1)} \max\{x_{t,i}, z_{t,i}\}^{2-1/\alpha} \eta_t^2 \left(\hat{\ell}_{t,i} - \ell_{t,i_t}\right)^2 \\ \leq \eta_t^{-1} \sum_{i=1}^K \frac{\alpha}{2(\alpha-1)} \left(2^{\frac{\alpha}{2\alpha-1}}\right)^{2-1/\alpha} x_{t,i}^{2-1/\alpha} \eta_t^2 \left(\hat{\ell}_{t,i} - \ell_{t,i_t}\right)^2 \\ = \frac{\alpha}{\alpha-1} \eta_t \sum_{i=1}^K x_{t,i}^{2-1/\alpha} \left(\hat{\ell}_{t,i} - \ell_{t,i_t}\right)^2 \\ = \frac{\alpha}{\alpha-1} \eta_t \ell_{t,i_t}^2 \sum_{i=1}^K x_{t,i}^{2-1/\alpha} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \\ \leq \frac{\alpha}{\alpha-1} \eta_t r_t^{2-\alpha} |\ell_{t,i_t}|^\alpha \sum_{i=1}^K x_{t,i}^{2-1/\alpha} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \\ \stackrel{(c)}{\leq} \frac{\alpha}{\alpha-1} t^{1/\alpha - 1} \sigma^{1-\alpha} \Theta_\alpha^{2-\alpha} |\ell_{t,i_t}|^\alpha x_{t,i_t}^{2/\alpha - 1} \sum_{i=1}^K x_{t,i}^{2-1/\alpha} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \\ \stackrel{(d)}{\leq} 2 t^{1/\alpha - 1} \sigma^{1-\alpha} |\ell_{t,i_t}|^\alpha x_{t,i_t}^{2/\alpha - 1} \sum_{i=1}^K x_{t,i}^{2-1/\alpha} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \end{array}

where step (a) is due to the duality property of Bregman divergences, step (b) regards $D_{\Psi^{*}}(\cdot ,\cdot)$ as a second-order Lagrange remainder, and step (c) plugs in $\eta_t^{-1} = \sigma t^{1 / \alpha}$ and $r_t = \Theta_\alpha \eta_t^{-1}x_{t,i_t}^{1 / \alpha}$, so that $\eta_{t}r_{t}^{2 - \alpha} = t^{1 / \alpha -1}\sigma^{1 - \alpha}\Theta_{\alpha}^{2 - \alpha}x_{t,i_{t}}^{2 / \alpha -1}$. Step (d) uses $\Theta_{\alpha}\leq (2 - \frac{2}{\alpha})^{\frac{1}{2 - \alpha}}$ and thus $\Theta_{\alpha}^{2 - \alpha}\leq 2\cdot \frac{\alpha - 1}{\alpha}$.
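Step (d)'s constant can be verified numerically: at the boundary value $\Theta_\alpha = (2 - \frac{2}{\alpha})^{\frac{1}{2-\alpha}}$ (for $\alpha < 2$), the bound $\Theta_\alpha^{2-\alpha} \leq 2\cdot\frac{\alpha-1}{\alpha}$ holds with equality. A standalone check of ours:

```python
# At Theta = (2 - 2/alpha)**(1/(2 - alpha)) (alpha < 2), we have exactly
# Theta**(2 - alpha) == 2*(alpha - 1)/alpha; for alpha = 2 both sides equal 1.
for alpha in (1.2, 1.5, 1.9):
    theta = (2 - 2 / alpha) ** (1 / (2 - alpha))
    assert abs(theta ** (2 - alpha) - 2 * (alpha - 1) / alpha) < 1e-9, alpha
```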

After taking expectations, we get

\begin{array}{l} \mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right] \leq 2 t^{1/\alpha - 1} \sigma \sum_{i=1}^K x_{t,i}^{2/\alpha} \Bigg[\underbrace{\sum_{j=1}^K x_{t,j}^{2-1/\alpha}}_{\leq 1 \leq x_{t,i}^{-1/\alpha}} - 2 x_{t,i}^{1-1/\alpha} + x_{t,i}^{-1/\alpha}\Bigg] \\ \leq 2\sigma t^{1/\alpha - 1} \cdot 2 \left[-\sum_{i=1}^K x_{t,i}^{1+1/\alpha} + \sum_{i=1}^K x_{t,i}^{1/\alpha}\right] \\ = 4\sigma t^{1/\alpha - 1} \sum_{i=1}^K x_{t,i}^{1/\alpha} (1 - x_{t,i}) \\ \leq 8\sigma t^{1/\alpha - 1} \sum_{i \neq i^*} x_{t,i}^{1/\alpha}, \end{array}

where the last step is due to the fact that $1 - x_{t,i^*} = \sum_{i\neq i^*}x_{t,i}\leq \sum_{i\neq i^*}x_{t,i}^{1 / \alpha}$ and $1 - x_{t,i}\leq 1$ for any $i\neq i^*$. After applying Lemma E.4, we get

\mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right] \leq 8\sigma t^{1/\alpha - 1} K^{1-1/\alpha}.

Hence, we have

\mathbb{E}\left[\sum_{t=1}^T \eta_t^{-1} D_\Psi(x_t, z_t)\right] = \sum_{t=1}^T \mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} 8\sigma t^{1/\alpha - 1} x_{t,i}^{1/\alpha}\right]

\leq \sum_{t=1}^T 8\sigma t^{1/\alpha - 1} K^{1-1/\alpha}.

B. Formal Analysis of OptTINF (Algorithm 2)

B.1. Main Theorem

In this section, we present a formal proof of Theorem 4.2. We still state a regret guarantee without any big-Oh notation first.

Theorem B.1 (Regret Guarantee of Algorithm 2). If Assumptions 3.1 and 3.6 hold, Algorithm 2 enjoys:

  1. For adversarial environments, the regret is bounded by

\mathcal{R}_T \leq 26 \sigma^\alpha K^{\frac{\alpha-1}{2}} (T+1)^{\frac{3-\alpha}{2}} + 4\sqrt{K(T+1)}.

  2. Moreover, if the environment is stochastically constrained with a unique best arm $i^*$ (Assumption 3.5), then Algorithm 2 enjoys

\begin{array}{l} \mathcal{R}_T \leq 2 \times 4^{\frac{3-\alpha}{\alpha-1}} 5^{\frac{2}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1) \\ + \frac{32\sigma}{\alpha-1} \sum_{i \neq i^*} \Delta_i^{-1} \ln(T+1) \\ + 2 \times 8^{\frac{2}{\alpha-1}} 4^{\frac{3-\alpha}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1). \end{array}

Proof. In Algorithm 2, when the parameters are set as $\alpha = 2$ and $\sigma = 1$ , we have $\eta_t^{-1} = \sqrt{t}$ and $r_t = \Theta_2\sqrt{t}\sqrt{x_{t,i_t}}$ where $\Theta_2 = 1 - 2^{-1/3}$ is an absolute constant. From now on, to avoid confusion, we use $\alpha, \sigma$ only to denote the real (hidden) parameters of the environment, instead of the parameters of the algorithm.
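For concreteness, here is a minimal sketch (a hypothetical helper of ours, not the paper's pseudocode) of the learning-rate and clipping-threshold schedule that Algorithm 2 inherits from HTINF with $\alpha = 2$, $\sigma = 1$:

```python
# Minimal sketch of OptTINF's schedule with alpha = 2, sigma = 1 (as in the text):
# eta_t^{-1} = sqrt(t) and r_t = Theta_2 * sqrt(t) * sqrt(x_{t, i_t}).
THETA_2 = 1 - 2 ** (-1 / 3)  # the absolute constant from the text, ~0.206

def schedule(t, x_ti):
    """Return (eta_t^{-1}, r_t), where x_ti is the probability of the pulled arm."""
    eta_t_inv = t ** 0.5
    r_t = THETA_2 * eta_t_inv * x_ti ** 0.5
    return eta_t_inv, r_t

eta_inv, r = schedule(t=100, x_ti=0.25)
assert eta_inv == 10.0 and abs(r - THETA_2 * 5.0) < 1e-12
```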

Following the proof of Theorem 4.1 in Appendix A, we still decompose $\mathcal{R}_T(y)$ for $y = \mathbf{e}_{i^*}$ into $\mathcal{R}_T^s$ and $\mathcal{R}_T^f$, as follows.

\mathcal{R}_T(y) \triangleq \sum_{t=1}^T \mathbb{E}[\langle x_t - y, \mu_t \rangle] = \mathbb{E}\left[\sum_{t=1}^T \langle x_t - y, \mu_t - \mu_t' \rangle\right] + \mathbb{E}\left[\sum_{t=1}^T \langle x_t - y, \mu_t' \rangle\right] \triangleq \mathcal{R}_T^s + \mathcal{R}_T^f.

Following the analysis of Algorithm 1, we have the following lemma.

Lemma B.2. For the given $y = \mathbf{e}_{i^*} \in \triangle_{[K]}$, Algorithm 2 ensures

RTs≀E[5ΟƒΞ±βˆ‘t=1Tβˆ‘iβ‰ iβˆ—t1/2βˆ’Ξ±/2xt,i(3βˆ’Ξ±)/2], \mathcal {R} _ {\mathcal {T}} ^ {s} \leq \mathbb {E} \left[ 5 \sigma^ {\alpha} \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} t ^ {1 / 2 - \alpha / 2} x _ {t, i} ^ {(3 - \alpha) / 2} \right],

which further implies

RTs≀5ΟƒΞ±βˆ‘t=1Tt1βˆ’Ξ±2KΞ±βˆ’12. \mathcal {R} _ {\mathcal {T}} ^ {s} \leq 5 \sigma^ {\alpha} \sum_ {t = 1} ^ {T} t ^ {\frac {1 - \alpha}{2}} K ^ {\frac {\alpha - 1}{2}}.

We continue our analysis by bounding the FTRL part, $\mathcal{R}_T^f$ . As in Appendix A, we also decompose it into two parts from Lemma E.9:

βˆ‘t=1T⟨xtβˆ’y,β„“^tβŸ©β‰€βˆ‘t=1T(Ξ·tβˆ’1βˆ’Ξ·tβˆ’1βˆ’1)(Ξ¨(y)βˆ’Ξ¨(xt))⏟P a r t (A)+βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)⏟P a r t (B), \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \hat {\ell} _ {t} \rangle \leq \underbrace {\sum_ {t = 1} ^ {T} \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) (\Psi (y) - \Psi (x _ {t}))} _ {\text {P a r t (A)}} + \underbrace {\sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t} , z _ {t})} _ {\text {P a r t (B)}},

where $z_{t} \triangleq \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\mathbb{1}[|\ell_{t,i_{t}}| \leq r_{t}](\hat{\ell}_{t} - \ell_{t,i_{t}}\mathbf{1}))$ . For Part (A), from Lemma A.4, we have (recall that $\Psi$ is now $\frac{1}{2}$ -Tsallis entropy)

Lemma B.3. For part (A), for any one-hot vector $y \in \triangle_{[K]}$ , Algorithm 2 ensures

\mathbb{E}[(\mathrm{A})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\left(\eta_t^{-1} - \eta_{t-1}^{-1}\right)\left(\Psi(y) - \Psi(x_t)\right) \mid \mathcal{F}_{t-1}\right]\right] \leq \mathbb{E}\left[\sum_{t=1}^T 2 t^{-1/2} \sum_{i \neq i^*} x_{t,i}^{1/2}\right],

which further implies

\mathbb{E}[(\mathrm{A})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\left(\eta_t^{-1} - \eta_{t-1}^{-1}\right)\left(\Psi(y) - \Psi(x_t)\right) \mid \mathcal{F}_{t-1}\right]\right] \leq \sum_{t=1}^T 2 t^{-1/2} K^{1/2}.

For part (B), we have

Lemma B.4. For part (B), Algorithm 2 ensures

\mathbb{E}[(\mathrm{B})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right]\right] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} 8 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} x_{t,i}^{(3-\alpha)/2}\right],

which further implies

\mathbb{E}[(\mathrm{B})] = \mathbb{E}\left[\sum_{t=1}^T \mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right]\right] \leq \sum_{t=1}^T 8 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} K^{\frac{\alpha-1}{2}}.

Therefore, for adversarial case (i.e., the first statement), we have

\begin{array}{l} \mathcal{R}_T = \mathcal{R}_T^s + \mathcal{R}_T^f \leq \mathcal{R}_T^s + \mathbb{E}[(\mathrm{A})] + \mathbb{E}[(\mathrm{B})] \\ \leq 13 \sigma^\alpha \sum_{t=1}^T t^{\frac{1-\alpha}{2}} K^{\frac{\alpha-1}{2}} + 2 \sum_{t=1}^T t^{-1/2} K^{1/2} \\ \leq 26 \sigma^\alpha K^{\frac{\alpha-1}{2}} (T+1)^{\frac{3-\alpha}{2}} + 4\sqrt{K(T+1)}, \end{array}

where the last step uses Lemma E.5.
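The last step relies on a Lemma E.5-style summation bound of the form $\sum_{t=1}^T t^{-\beta} \leq 2(T+1)^{1-\beta}$ for $\beta \in [0, \frac{1}{2}]$ (here $\beta = \frac{\alpha-1}{2}$). A standalone numeric check of ours, not the paper's Lemma E.5:

```python
# Check: sum_{t=1}^T t**(-beta) <= 2*(T+1)**(1 - beta) for beta in [0, 1/2],
# which is the summation step with beta = (alpha - 1)/2 and 1 < alpha <= 2.
for beta in (0.0, 0.25, 0.5):
    for T in (1, 10, 100, 1000):
        s = sum(t ** (-beta) for t in range(1, T + 1))
        assert s <= 2 * (T + 1) ** (1 - beta), (beta, T)
```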

Moreover, for the stochastically constrained case with a unique best arm $i^{*} \in [K]$, with the help of the AM-GM inequality, we bound each of $\mathcal{R}_T^s$, $\mathbb{E}[(\mathrm{A})]$ and $\mathbb{E}[(\mathrm{B})]$ by

\begin{array}{l} \mathcal{R}_T^s \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} \left(5^{\frac{2}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \left(\frac{\Delta_i}{4}\right)^{-\frac{3-\alpha}{\alpha-1}} \frac{1}{t}\right)^{\frac{\alpha-1}{2}} \left(\frac{\Delta_i}{4} x_{t,i}\right)^{\frac{3-\alpha}{2}}\right] \\ \leq \frac{\alpha-1}{2} 4^{\frac{3-\alpha}{\alpha-1}} 5^{\frac{2}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1) + \frac{3-\alpha}{2} \frac{\mathcal{R}_T}{4}, \end{array}

\begin{array}{l} \mathbb{E}[(\mathrm{A})] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} \left(4\sigma \left(\frac{\Delta_i}{4}\right)^{-1} \frac{1}{t}\right)^{1/2} \left(\frac{\Delta_i}{4} x_{t,i}\right)^{1/2}\right] \\ \leq \frac{1}{2} \cdot 16\sigma \sum_{i \neq i^*} \Delta_i^{-1} \ln(T+1) + \frac{1}{2} \frac{\mathcal{R}_T}{4}, \end{array}

\mathbb{E}[(\mathrm{B})] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} \left(8^{\frac{2}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \left(\frac{\Delta_i}{4}\right)^{-\frac{3-\alpha}{\alpha-1}} \frac{1}{t}\right)^{\frac{\alpha-1}{2}} \left(\frac{\Delta_i}{4} x_{t,i}\right)^{\frac{3-\alpha}{2}}\right]

\leq \frac{\alpha-1}{2} 8^{\frac{2}{\alpha-1}} 4^{\frac{3-\alpha}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1) + \frac{3-\alpha}{2} \frac{\mathcal{R}_T}{4}.

Therefore, we have

\begin{array}{l} \left(1 - \frac{(2-\alpha) + 1 + (3-\alpha)}{2} \frac{1}{4}\right) \mathcal{R}_T = \frac{\alpha-1}{4} \mathcal{R}_T \leq \frac{\alpha-1}{2} 4^{\frac{3-\alpha}{\alpha-1}} 5^{\frac{2}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1) \\ + \frac{1}{2} \cdot 16\sigma \sum_{i \neq i^*} \Delta_i^{-1} \ln(T+1) \\ + \frac{\alpha-1}{2} 8^{\frac{2}{\alpha-1}} 4^{\frac{3-\alpha}{\alpha-1}} \sigma^{\frac{2\alpha}{\alpha-1}} \sum_{i \neq i^*} \Delta_i^{\frac{\alpha-3}{\alpha-1}} \ln(T+1), \end{array}

which gives our result.

Proof of Theorem 4.2. It is a direct consequence of the theorem above.

B.2. Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)

Proof of Lemma B.2. For any $t \in [T]$ and $i \in [K]$, we can bound the difference between the loss mean, $\mu_{t,i}$, and the truncated loss mean, $\mu_{t,i}^{\prime}$, as

\begin{array}{l} \mu_{t,i} - \mu_{t,i}' = \mathbb{E}\left[\ell_{t,i} \mathbb{1}\left[|\ell_{t,i}| > r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \leq \mathbb{E}\left[|\ell_{t,i}|^\alpha r_t^{1-\alpha} \cdot \mathbb{1}\left[|\ell_{t,i}| > r_t\right] \mid \mathcal{F}_{t-1}, i_t = i\right] \\ \leq \mathbb{E}\left[|\ell_{t,i}|^\alpha r_t^{1-\alpha} \mid \mathcal{F}_{t-1}, i_t = i\right] \leq \sigma^\alpha \Theta_2^{1-\alpha} t^{\frac{1-\alpha}{2}} x_{t,i}^{\frac{1-\alpha}{2}}. \end{array}

Hence, we have

\begin{array}{l} \mathcal{R}_T^s = \sum_{t=1}^T \mathbb{E}\left[\langle x_t - y, \mu_t - \mu_t' \rangle\right] \leq \mathbb{E}\left[\sigma^\alpha \Theta_2^{1-\alpha} \sum_{t=1}^T \sum_{i \neq i^*} t^{1/2 - \alpha/2} x_{t,i}^{1/2 - \alpha/2} \cdot x_{t,i}\right] \\ \leq \mathbb{E}\left[5\sigma^\alpha \sum_{t=1}^T \sum_{i \neq i^*} t^{1/2 - \alpha/2} x_{t,i}^{3/2 - \alpha/2}\right], \end{array}

where the last step uses $\Theta_2^{1 - \alpha} \leq \Theta_2^{-1} \leq 5$ . It further gives, by Lemma E.4, that

RTs≀5ΟƒΞ±βˆ‘t=1Tt1/2βˆ’Ξ±/2KΞ±/2βˆ’1/2. \mathcal {R} _ {T} ^ {s} \leq 5 \sigma^ {\alpha} \sum_ {t = 1} ^ {T} t ^ {1 / 2 - \alpha / 2} K ^ {\alpha / 2 - 1 / 2}.

B.3. Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)

Proof of Lemma B.3. This is just a restatement of Lemma A.4.

Proof of Lemma B.4. We follow the proof of Lemma A.5 with some slight modifications (unlike in the previous lemma, we cannot simply replace all $\alpha$'s with 2, as the second moment of $\ell_{t,i_t}$ may not exist). The first few steps are exactly the same, giving

\begin{array}{l} \eta_t^{-1} D_\Psi(x_t, z_t) \leq \frac{2}{2-1} \eta_t r_t^{2-\alpha} |\ell_{t,i_t}|^\alpha \sum_{i=1}^K x_{t,i}^{2-1/2} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \\ \leq 2 \left(t^{1/2}\right)^{-1} \Theta_2^{2-\alpha} \left(t^{1/2}\right)^{2-\alpha} x_{t,i_t}^{\frac{2-\alpha}{2}} |\ell_{t,i_t}|^\alpha \sum_{i=1}^K x_{t,i}^{2-1/2} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2 \end{array}

= 2 \Theta_2^{2-\alpha} |\ell_{t,i_t}|^\alpha t^{\frac{1-\alpha}{2}} x_{t,i_t}^{\frac{2-\alpha}{2}} \sum_{i=1}^K x_{t,i}^{2-1/2} \left(1 - \frac{\mathbb{1}[i_t = i]}{x_{t,i_t}}\right)^2.

After taking expectations, we have

\begin{array}{l} \mathbb{E}\left[\eta_t^{-1} D_\Psi(x_t, z_t) \mid \mathcal{F}_{t-1}\right] \leq 2 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} \sum_{i=1}^K x_{t,i}^{2-\alpha/2} \Bigg[\underbrace{\sum_{j=1}^K x_{t,j}^{3/2}}_{\leq 1 \leq x_{t,i}^{-1/2}} - 2 x_{t,i}^{1/2} + x_{t,i}^{-1/2}\Bigg] \\ \leq 4 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} \left[\sum_{i=1}^K x_{t,i}^{3/2 - \alpha/2} - \sum_{i=1}^K x_{t,i}^{5/2 - \alpha/2}\right] \\ = 4 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} \left[\sum_{i=1}^K x_{t,i}^{3/2 - \alpha/2} (1 - x_{t,i})\right] \\ \leq 8 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} \sum_{i \neq i^*} x_{t,i}^{3/2 - \alpha/2}. \end{array}

Therefore, we have

\mathbb{E}\left[\sum_{t=1}^T \eta_t^{-1} D_\Psi(x_t, z_t)\right] \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i \neq i^*} 8 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} x_{t,i}^{(3-\alpha)/2}\right],

which further gives

\mathbb{E}\left[\sum_{t=1}^T \eta_t^{-1} D_\Psi(x_t, z_t)\right] \leq \sum_{t=1}^T 8 \Theta_2^{2-\alpha} \sigma^\alpha t^{\frac{1-\alpha}{2}} K^{\frac{\alpha-1}{2}}

by Lemma E.4.

C. Formal Analysis of AdaTINF (Algorithm 3)

C.1. Main Theorem

We again begin with a regret guarantee without any big-Oh notations.

Theorem C.1 (Regret Guarantee of Algorithm 3). If Assumptions 3.1 and 3.6 hold, Algorithm 3 ensures

\mathcal{R}_T \leq 3\sqrt{K(T+1)} + 204 \sigma K^{1-1/\alpha} (T+1)^{1/\alpha} + 12 \sigma T^{1/\alpha}.

Proof. As defined in the text, we group time slots with equal $\lambda_{t}$ 's into epochs, as

\mathcal{T}_j \triangleq \left\{t \in [T] \mid \lambda_t = 2^j\right\}, \quad \forall j \geq 0.

For any non-empty $\mathcal{T}_j$, denote the first and last time slots of $\mathcal{T}_j$ by

\gamma_j \triangleq \min\{t \in \mathcal{T}_j\}, \quad \tau_j \triangleq \max\{t \in \mathcal{T}_j\}.

Then, without loss of generality, assume that no doubling has happened for time slot $T$ . Otherwise, one can always add a virtual time slot $t = T + 1$ with $\ell_{t,i} = 0$ for all $i$ . Therefore, we have $\mathcal{T}_J \neq \emptyset$ where $J$ is the final value of variable $J$ defined in the code.

We adopt the notation of $c_{t}$ as defined in Algorithm 3:

c_t = 2\eta_t x_{t,i_t}^{-1/2} \ell_{t,i_t}^2 \mathbb{1}\left[|\ell_{t,i_t}| \leq r_t\right] + \ell_{t,i_t} \mathbb{1}\left[|\ell_{t,i_t}| > r_t\right].

Moreover, from the doubling criterion of Algorithm 3, for each non-empty epoch, we have the following lemma.

Lemma C.2. For any $0 \leq j < J$ such that $\mathcal{T}_j \neq \emptyset$ , we have

\mathbb{1}\left[\gamma_j > 1\right] c_{\gamma_j - 1} + \sum_{t \in \mathcal{T}_j \backslash \{\tau_j\}} c_t \leq 2^j \sqrt{K(T+1)}, \tag{26}

\sum_{t \in \mathcal{T}_j} c_t > 2^{j-1} \sqrt{K(T+1)}. \tag{27}

Moreover, for $j = J$ (recall that $\mathcal{T}_J \neq \emptyset$ ), we have

\mathbb{1}\left[\gamma_j > 1\right] c_{\gamma_j - 1} + \sum_{t \in \mathcal{T}_j} c_t \leq 2^J \sqrt{K(T+1)}. \tag{28}
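To illustrate the bookkeeping behind Lemma C.2, here is a minimal, hypothetical simulation of ours (not the paper's pseudocode), under the assumption that $\lambda_t$ doubles once the running sum of $c_t$'s within the current epoch exceeds $\lambda_t\sqrt{K(T+1)}/2$:

```python
import math

# Hypothetical sketch of the doubling bookkeeping: lambda doubles whenever the
# running sum of c_t within the current epoch crosses lambda * sqrt(K*(T+1)) / 2.
K, T = 4, 200
threshold = math.sqrt(K * (T + 1))
lam, acc, epochs = 1.0, 0.0, []
for t in range(1, T + 1):
    c_t = 1.0  # constant stand-in for the algorithm's c_t
    acc += c_t
    if acc > lam * threshold / 2:  # assumed doubling criterion
        epochs.append((lam, acc))
        lam, acc = 2 * lam, 0.0

# Each completed epoch's total exceeds 2^(j-1) * sqrt(K*(T+1)), mirroring Eq. (27).
assert all(total > l * threshold / 2 for l, total in epochs)
assert [l for l, _ in epochs] == [1.0, 2.0, 4.0]
```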

Similar to previous analysis, we define $\mu_{t,i}^{\prime} = \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t] \mid \mathcal{F}_{t - 1},i_t = i]$ and decompose the regret $\mathcal{R}_T(y)$ as follows

\mathcal{R}_T(y) = \mathbb{E}\left[\sum_{t=1}^T \langle x_t - y, \mu_t - \mu_t' \rangle\right] + \mathbb{E}\left[\sum_{t=1}^T \langle x_t - y, \hat{\ell}_t \rangle\right] \triangleq \mathcal{R}_T^s + \mathcal{R}_T^f.

According to Lemma A.3, we have

\begin{array}{l} \mathcal{R}_T^s \leq \mathbb{E}\left[\sum_{t=1}^T \sum_{i=1}^K x_{t,i} \left(\mu_{t,i} - \mu_{t,i}'\right)\right] \\ = \mathbb{E}\left[\sum_{t=1}^T \ell_{t,i_t} \cdot \mathbb{1}\left[|\ell_{t,i_t}| > r_t\right]\right]. \tag{29} \end{array}

Furthermore, due to the properties of the weighted importance sampling estimator (as in Appendix A, $\mathbb{E}[\hat{\ell}_{t,i}\mid \mathcal{F}_{t - 1}] = \mu_{t,i}^{\prime}$), we have

RTf=E[βˆ‘t=1T⟨xtβˆ’y,β„“^t⟩]. \mathcal {R} _ {T} ^ {f} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \hat {\ell} _ {t} \rangle \right].

We can then apply Lemma E.9 to $\mathcal{R}_T^f$ , giving

βˆ‘t=1T⟨xtβˆ’y,β„“^tβŸ©β‰€Ξ·Tmax⁑xβˆˆβ–³[K]Ξ¨(x)+βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt) \sum_ {t = 1} ^ {T} \left\langle x _ {t} - y, \hat {\ell} _ {t} \right\rangle \leq \eta_ {T} \max _ {x \in \triangle_ {[ K ]}} \Psi (x) + \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right)

where $z_{t} \triangleq \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\hat{\ell}_{t})$. The first term is at most $2^{J}\sqrt{KT}$. For the second term, we have the following property, similar to Lemma A.5:

Lemma C.3. Algorithm 3 guarantees that for any $t \in [T]$ ,

Ξ·tβˆ’1DΞ¨(xt,zt)≀2Ξ·txt,it3/2β„“^t,it2 \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t}) \leq 2 \eta_ {t} x _ {t, i _ {t}} ^ {3 / 2} \hat {\ell} _ {t, i _ {t}} ^ {2}

where $z_{t}\triangleq \nabla \Psi^{*}(\nabla \Psi (x_{t}) - \eta_{t}\hat{\ell}_{t})$

Thus we have

RTf≀E[2J]KT+E[βˆ‘t=1T2Ξ·txt,it3/2β„“^t,it2]=E[2J]KT+E[βˆ‘t=1T2Ξ·txt,itβˆ’1/2β„“t,it2β‹…1[βˆ£β„“t,itβˆ£β‰€rt]].(30) \begin{array}{l} \mathcal {R} _ {T} ^ {f} \leq \mathbb {E} [ 2 ^ {J} ] \sqrt {K T} + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} 2 \eta_ {t} x _ {t, i _ {t}} ^ {3 / 2} \hat {\ell} _ {t, i _ {t}} ^ {2} \right] \\ = \mathbb {E} \left[ 2 ^ {J} \right] \sqrt {K T} + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} 2 \eta_ {t} x _ {t, i _ {t}} ^ {- 1 / 2} \ell_ {t, i _ {t}} ^ {2} \cdot \mathbb {1} \left[ \left| \ell_ {t, i _ {t}} \right| \leq r _ {t} \right] \right]. \tag {30} \\ \end{array}

Combining Eq. (29) and (30), we can see

RT≀E[2J]KT+E[βˆ‘t=1T(β„“t,itβ‹…1[βˆ£β„“t,it∣>rt]+2Ξ·txt,itβˆ’1/2β„“t,it2β‹…1[βˆ£β„“t,itβˆ£β‰€rt])]=E[2J]KT+E[βˆ‘t=1Tct]. \begin{array}{l} \mathcal {R} _ {T} \leq \mathbb {E} \left[ 2 ^ {J} \right] \sqrt {K T} + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\ell_ {t, i _ {t}} \cdot \mathbb {1} \left[ \left| \ell_ {t, i _ {t}} \right| > r _ {t} \right] + 2 \eta_ {t} x _ {t, i _ {t}} ^ {- 1 / 2} \ell_ {t, i _ {t}} ^ {2} \cdot \mathbb {1} \left[ \left| \ell_ {t, i _ {t}} \right| \leq r _ {t} \right]\right) \right] \\ = \mathbb {E} \left[ 2 ^ {J} \right] \sqrt {K T} + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} c _ {t} \right]. \\ \end{array}

Summing up Equation (26) for all non-empty epochs $j < J$ and Equation (28), we get

βˆ‘t=1Tct=βˆ‘j=0Jβˆ‘t∈Tjctβ‰€βˆ‘j=0J2jK(T+1)≀2J+1K(T+1), \sum_ {t = 1} ^ {T} c _ {t} = \sum_ {j = 0} ^ {J} \sum_ {t \in \mathcal {T} _ {j}} c _ {t} \leq \sum_ {j = 0} ^ {J} 2 ^ {j} \sqrt {K (T + 1)} \leq 2 ^ {J + 1} \sqrt {K (T + 1)},

and we can conclude

RT≀E[2J]β‹…3K(T+1). \mathcal {R} _ {T} \leq \mathbb {E} \left[ 2 ^ {J} \right] \cdot 3 \sqrt {K (T + 1)}.

It remains to bound $\mathbb{E}[2^J]$. When $J \geq 1$, there are at least two non-empty epochs. Let $J'$ be the index of the second-to-last non-empty epoch. The doubling condition of Algorithm 3 further reduces the task of bounding $2^J$ to bounding $2^{J'}$ and $c_{\tau_{J'}}$, as the following lemma states.

Lemma C.4. Algorithm 3 guarantees that, when $J \geq 1$ , we have

2JK(T+1)≀2Jβ€²+1K(T+1)+4cΟ„Jβ€².(31) 2 ^ {J} \sqrt {K (T + 1)} \leq 2 ^ {J ^ {\prime} + 1} \sqrt {K (T + 1)} + 4 c _ {\tau_ {J ^ {\prime}}}. \tag {31}

We can derive the following expectation bound for both $2^{J'}$ and $c_{\tau_{J'}}$ :

Lemma C.5 (Restatement of Lemma 7.1). Algorithm 3 guarantees that

E[1[Jβ‰₯1]2Jβ€²]≀28ΟƒK1/2βˆ’1/Ξ±(T+1)1/Ξ±βˆ’1/2.(32) \mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] 2 ^ {J ^ {\prime}} \right] \leq 2 8 \sigma K ^ {1 / 2 - 1 / \alpha} (T + 1) ^ {1 / \alpha - 1 / 2}. \tag {32}

Lemma C.6 (Restatement of Lemma 7.2). Algorithm 3 guarantees that

E[1[Jβ‰₯1]cΟ„Jβ€²]≀0.1E[1[Jβ‰₯1]2Jβ€²T]+E[max⁑t∈[T]βˆ£β„“t,it∣].(33) \mathbb {E} [ \mathbb {1} [ J \geq 1 ] c _ {\tau_ {J ^ {\prime}}} ] \leq 0. 1 \mathbb {E} [ \mathbb {1} [ J \geq 1 ] 2 ^ {J ^ {\prime}} \sqrt {T} ] + \mathbb {E} \left[ \max _ {t \in [ T ]} \left| \ell_ {t, i _ {t}} \right| \right]. \tag {33}

Applying Lemma E.3 and Equation (32) to Equation (33), we get

E[1[Jβ‰₯1]cΟ„Jβ€²]≀3ΟƒK1/2βˆ’1/Ξ±(T+1)1/Ξ±+ΟƒT1/Ξ±. \mathbb {E} \left[ \mathbb {1} \left[ J \geq 1 \right] c _ {\tau_ {J ^ {\prime}}} \right] \leq 3 \sigma K ^ {1 / 2 - 1 / \alpha} (T + 1) ^ {1 / \alpha} + \sigma T ^ {1 / \alpha}.

Plugging this into Equation (31), we get

E[1[Jβ‰₯1]2JK(T+1)]≀68ΟƒK1βˆ’1/Ξ±(T+1)1/Ξ±+4ΟƒT1/Ξ±, \mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] 2 ^ {J} \sqrt {K (T + 1)} \right] \leq 6 8 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha} + 4 \sigma T ^ {1 / \alpha},

and thus

E[2JK(T+1)]≀68ΟƒK1βˆ’1/Ξ±(T+1)1/Ξ±+4ΟƒT1/Ξ±+K(T+1), \mathbb {E} \left[ 2 ^ {J} \sqrt {K (T + 1)} \right] \leq 6 8 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha} + 4 \sigma T ^ {1 / \alpha} + \sqrt {K (T + 1)},

RT≀3E[2JK(T+1)]≀204ΟƒK1βˆ’1/Ξ±(T+1)1/Ξ±+12ΟƒT1/Ξ±+3K(T+1). \mathcal {R} _ {T} \leq 3 \mathbb {E} \left[ 2 ^ {J} \sqrt {K (T + 1)} \right] \leq 2 0 4 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha} + 1 2 \sigma T ^ {1 / \alpha} + 3 \sqrt {K (T + 1)}.

Proof of Theorem 6.1. It is a direct consequence of the theorem above.

C.2. Proofs for Reducing $\mathcal{R}_T$ to $\mathbb{E}[2^J]$

Proof of Lemma C.2. It suffices to notice that, in Algorithm 3, during a particular epoch $j$, when the doubling condition at Line 15 evaluates to true, the current value of the variable $S_{j}$ is $\mathbb{1}[\gamma_j > 1]c_{\gamma_j - 1} + \sum_{t\in \mathcal{T}_j}c_t$, thus

1[Ξ³j>1]cΞ³jβˆ’1+βˆ‘t∈Tjct>2jK(T+1). 1 \left[ \gamma_ {j} > 1 \right] c _ {\gamma_ {j} - 1} + \sum_ {t \in \mathcal {T} _ {j}} c _ {t} > 2 ^ {j} \sqrt {K (T + 1)}.

When $\gamma_{j} = 1$ (or equivalently, $j = 0$), Equation (27) automatically holds. Otherwise, Line 16 guarantees that $j \geq \lceil \log_2(c_{\tau_{\gamma_j - 1}} / \sqrt{K(T + 1)}) \rceil + 1$, hence $2^{j - 1}\sqrt{K(T + 1)} \geq c_{\tau_{\gamma_j - 1}}$. We thus have

2jβˆ’1K(T+1)+βˆ‘t∈Tjct>2jK(T+1), 2 ^ {j - 1} \sqrt {K (T + 1)} + \sum_ {t \in \mathcal {T} _ {j}} c _ {t} > 2 ^ {j} \sqrt {K (T + 1)},

which again gives Equation (27).

When the doubling condition at Line 15 evaluates to false for the last time, the value of $S_{j}$ is $\mathbb{1}[\gamma_{j} > 1]c_{\gamma_{j} - 1} + \sum_{t\in \mathcal{T}_{j}\setminus \{\tau_{j}\}}c_{t}$. At this time we have $S_{j}\leq 2^{j}\sqrt{K(T + 1)}$, hence Equations (26) and (28) hold.
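The bookkeeping in this proof — accumulating the per-round costs $c_t$ into $S_j$ and jumping the epoch index when the threshold $2^j\sqrt{K(T+1)}$ is exceeded — can be replayed in a few lines. The following sketch is our own illustration of the doubling rule (function and variable names are ours, not from Algorithm 3's pseudocode):

```python
import math

def replay_doubling(costs, K, T):
    """Replay the epoch-doubling bookkeeping discussed above (a sketch):
    accumulate per-round costs c_t into S; once S exceeds 2^j * sqrt(K(T+1)),
    jump the epoch index via j <- max(j+1, ceil(log2(c / sqrt(K(T+1)))) + 1)
    and carry the overflowing cost into the new epoch's sum (cf. Lemma C.2)."""
    thr = math.sqrt(K * (T + 1))
    j, S = 0, 0.0
    epochs = [[]]  # rounds grouped by epoch, for inspection
    for t, c in enumerate(costs, start=1):
        S += c
        epochs[-1].append(t)
        if S > (2 ** j) * thr:  # doubling condition (Line 15)
            j = max(j + 1, math.ceil(math.log2(max(c, 1e-12) / thr)) + 1)
            S = c               # the overflowing cost c_{tau_j} is carried over
            epochs.append([])
    return j, epochs
```

For instance, with $K=2$ and $T=10$ (so $\sqrt{K(T+1)}\approx 4.69$), the cost sequence $3,3,3,10$ triggers one ordinary doubling at the second round and then a jump directly to epoch index $3$ caused by the single large cost.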

Proof of Lemma C.3. It is exactly the same calculation as in Lemma A.5; the only difference is that $\hat{\ell}_t$ does not come with a $-\ell_{t,i_t}$ drift.

C.3. Proofs for Bounding $\mathbb{E}[2^{J}]$

Proof of Lemma C.4. According to Line 16 of Algorithm 3, $J, J'$ and $c_{\tau_{J'}}$ satisfy

J=max⁑{Jβ€²+1,⌈log⁑2(cΟ„Jβ€²/K(T+1))βŒ‰+1}, J = \max \left\{J ^ {\prime} + 1, \lceil \log_ {2} (c _ {\tau_ {J ^ {\prime}}} / \sqrt {K (T + 1)}) \rceil + 1 \right\},

thus

2J≀max⁑{2β‹…2Jβ€²,4β‹…cΟ„Jβ€²/K(T+1)} 2 ^ {J} \leq \max \left\{2 \cdot 2 ^ {J ^ {\prime}}, 4 \cdot c _ {\tau_ {J ^ {\prime}}} / \sqrt {K (T + 1)} \right\}

and

2JK(T+1)≀max⁑{2β‹…2Jβ€²K(T+1),4β‹…cΟ„Jβ€²}≀2β‹…2Jβ€²K(T+1)+4β‹…cΟ„Jβ€². \begin{array}{l} 2 ^ {J} \sqrt {K (T + 1)} \leq \max \left\{2 \cdot 2 ^ {J ^ {\prime}} \sqrt {K (T + 1)}, 4 \cdot c _ {\tau_ {J ^ {\prime}}} \right\} \\ \leq 2 \cdot 2 ^ {J ^ {\prime}} \sqrt {K (T + 1)} + 4 \cdot c _ {\tau_ {J ^ {\prime}}}. \\ \end{array}

Proof of Lemma C.5. Applying Eq. (27) to $j = J' < J$ , we get

1[Jβ‰₯1](2Jβ€²)Ξ±K(T+1)≀(2Jβ€²)Ξ±βˆ’12βˆ‘t∈TJβ€²ct(34) \mathbb {1} [ J \geq 1 ] (2 ^ {J ^ {\prime}}) ^ {\alpha} \sqrt {K (T + 1)} \leq \left(2 ^ {J ^ {\prime}}\right) ^ {\alpha - 1} 2 \sum_ {t \in \mathcal {T} _ {J ^ {\prime}}} c _ {t} \tag {34}

We further upper-bound the RHS of (34) by enlarging the summation range to $[T]$. Specifically, let $\tilde{\eta}_t = 2^{-J'} t^{-1/2}$ and $\tilde{r}_t = 2^{J'} \Theta_2 \sqrt{t x_{t,i_t}}$, and define the summands by

c~tβ‰œ2Ξ·~txt,itβˆ’1/2β„“t,it21[βˆ£β„“t,itβˆ£β‰€r~t]+β„“t,it1[βˆ£β„“t,it∣>r~t]≀2Ξ·~txt,itβˆ’1/2βˆ£β„“t,it∣αr~t2βˆ’Ξ±+βˆ£β„“t,it∣αr~t1βˆ’Ξ±β‰€(2Ξ·~tr~t2βˆ’Ξ±xt,itβˆ’1/2+r~t1βˆ’Ξ±)β‹…βˆ£β„“t,it∣α=(2Θ22βˆ’Ξ±+Θ21βˆ’Ξ±)β‹…2(1βˆ’Ξ±)Jβ€²t1βˆ’Ξ±2xt,it1βˆ’Ξ±2βˆ£β„“t,itβˆ£Ξ±β‰€(2+Θ2βˆ’1)β‹…2(1βˆ’Ξ±)Jβ€²t1βˆ’Ξ±2xt,it1βˆ’Ξ±2βˆ£β„“t,itβˆ£Ξ±β‰€7β‹…2(1βˆ’Ξ±)Jβ€²t1βˆ’Ξ±2xt,it1βˆ’Ξ±2βˆ£β„“t,it∣α.(35) \begin{array}{l} \tilde {c} _ {t} \triangleq 2 \tilde {\eta} _ {t} x _ {t, i _ {t}} ^ {- 1 / 2} \ell_ {t, i _ {t}} ^ {2} \mathbb {1} \left[ \left| \ell_ {t, i _ {t}} \right| \leq \tilde {r} _ {t} \right] + \ell_ {t, i _ {t}} \mathbb {1} \left[ \left| \ell_ {t, i _ {t}} \right| > \tilde {r} _ {t} \right] \tag {35} \\ \leq 2 \tilde {\eta} _ {t} x _ {t, i _ {t}} ^ {- 1 / 2} \left| \ell_ {t, i _ {t}} \right| ^ {\alpha} \tilde {r} _ {t} ^ {2 - \alpha} + \left| \ell_ {t, i _ {t}} \right| ^ {\alpha} \tilde {r} _ {t} ^ {1 - \alpha} \\ \leq \left(2 \tilde {\eta} _ {t} \tilde {r} _ {t} ^ {2 - \alpha} x _ {t, i _ {t}} ^ {- 1 / 2} + \tilde {r} _ {t} ^ {1 - \alpha}\right) \cdot | \ell_ {t, i _ {t}} | ^ {\alpha} \\ = \left(2 \Theta_ {2} ^ {2 - \alpha} + \Theta_ {2} ^ {1 - \alpha}\right) \cdot 2 ^ {\left(1 - \alpha\right) J ^ {\prime}} t ^ {\frac {1 - \alpha}{2}} x _ {t, i _ {t}} ^ {\frac {1 - \alpha}{2}} \left| \ell_ {t, i _ {t}} \right| ^ {\alpha} \\ \leq \left(2 + \Theta_ {2} ^ {- 1}\right) \cdot 2 ^ {\left(1 - \alpha\right) J ^ {\prime}} t ^ {\frac {1 - \alpha}{2}} x _ {t, i _ {t}} ^ {\frac {1 - \alpha}{2}} \left| \ell_ {t, i _ {t}} \right| ^ {\alpha} \\ \leq 7 \cdot 2 ^ {(1 - \alpha) J ^ {\prime}} t ^ {\frac {1 - \alpha}{2}} x _ {t, i _ {t}} ^ {\frac {1 - \alpha}{2}} | \ell_ {t, i _ {t}} | ^ {\alpha}. \\ \end{array}

We see that the definition in Eq. (35) coincides with $c_{t}$ for $t \in \mathcal{T}_{J^{\prime}}$ . Thus, the RHS of (34) is no more than

14βˆ‘t=1Tt1βˆ’Ξ±2xt,it1βˆ’Ξ±2βˆ£β„“t,it∣α 1 4 \sum_ {t = 1} ^ {T} t ^ {\frac {1 - \alpha}{2}} x _ {t, i _ {t}} ^ {\frac {1 - \alpha}{2}} | \ell_ {t, i _ {t}} | ^ {\alpha}

Taking expectation on both sides of (34), we get

E[1[Jβ‰₯1](2Jβ€²)Ξ±]K(T+1)≀28σαKΞ±βˆ’12(T+1)3βˆ’Ξ±2, \mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] (2 ^ {J ^ {\prime}}) ^ {\alpha} \right] \sqrt {K (T + 1)} \leq 2 8 \sigma^ {\alpha} K ^ {\frac {\alpha - 1}{2}} (T + 1) ^ {\frac {3 - \alpha}{2}},

which gives $\mathbb{E}[\mathbb{1}[J\geq 1](2^{J'})^{\alpha}]\leq 28\sigma^{\alpha}K^{\alpha /2 - 1}(T + 1)^{1 - \alpha /2}$. By Jensen's inequality,

E[1[Jβ‰₯1]2Jβ€²]≀(E[1[Jβ‰₯1]Ξ±(2Jβ€²)Ξ±])1/α≀28ΟƒK1/2βˆ’1/Ξ±(T+1)1/Ξ±βˆ’1/2. \begin{array}{l} \mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] 2 ^ {J ^ {\prime}} \right] \leq \left(\mathbb {E} \left[ \mathbb {1} [ J \geq 1 ] ^ {\alpha} \left(2 ^ {J ^ {\prime}}\right) ^ {\alpha} \right]\right) ^ {1 / \alpha} \\ \leq 2 8 \sigma K ^ {1 / 2 - 1 / \alpha} (T + 1) ^ {1 / \alpha - 1 / 2}. \\ \end{array}

Proof of Lemma C.6. We compute directly:

cΟ„Jβ€²=2Ξ·Ο„Jβ€²xΟ„Jβ€²,iΟ„Jβ€²βˆ’1/2β„“Ο„Jβ€²,iΟ„Jβ€²21[βˆ£β„“Ο„Jβ€²,iΟ„Jβ€²βˆ£β‰€rΟ„Jβ€²]+β„“Ο„Jβ€²,iΟ„Jβ€²1[βˆ£β„“Ο„Jβ€²,iΟ„Jβ€²βˆ£>rΟ„Jβ€²]≀2Ξ·Ο„Jβ€²xΟ„Jβ€²,iΟ„Jβ€²βˆ’1/2rΟ„Jβ€²2+max⁑t∈[T]βˆ£β„“t,it∣=2Jβ€²β‹…2Θ22Ο„Jβ€²xΟ„Jβ€²,iΟ„Jβ€²1/2+max⁑t∈[T]βˆ£β„“t,itβˆ£β‰€0.1β‹…2Jβ€²T+max⁑t∈[T]βˆ£β„“t,it∣. \begin{array}{l} c _ {\tau_ {J ^ {\prime}}} = 2 \eta_ {\tau_ {J ^ {\prime}}} x _ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} ^ {- 1 / 2} \ell_ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} ^ {2} \mathbb {1} [ | \ell_ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} | \leq r _ {\tau_ {J ^ {\prime}}} ] + \ell_ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} \mathbb {1} [ | \ell_ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} | > r _ {\tau_ {J ^ {\prime}}} ] \\ \leq 2 \eta_ {\tau_ {J ^ {\prime}}} x _ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} ^ {- 1 / 2} r _ {\tau_ {J ^ {\prime}}} ^ {2} + \max _ {t \in [ T ]} | \ell_ {t, i _ {t}} | \\ = 2 ^ {J ^ {\prime}} \cdot 2 \Theta_ {2} ^ {2} \sqrt {\tau_ {J ^ {\prime}}} x _ {\tau_ {J ^ {\prime}}, i _ {\tau_ {J ^ {\prime}}}} ^ {1 / 2} + \max _ {t \in [ T ]} | \ell_ {t, i _ {t}} | \\ \leq 0. 1 \cdot 2 ^ {J ^ {\prime}} \sqrt {T} + \max _ {t \in [ T ]} | \ell_ {t, i _ {t}} |. \\ \end{array}

D. Removing Dependency on Time Horizon $T$ in Algorithm 3

To remove the dependency on $T$, we leverage the following doubling trick, which is commonly used when the horizon $T$ is unknown (Auer et al., 1995; Besson & Kaufmann, 2018). This gives our More Adaptive AdaTINF algorithm, which we call Ada²TINF.

Algorithm 4 More Adaptive AdaTINF (AdaΒ²TINF)
Input: Number of arms $K$

Output: Sequence of actions  $i_1, i_2, \dots, i_T \in [K]$

1: Initialize $T_0\gets 1,S\gets 0$
2: for $t = 1,2,\dots$ do
3: if $t > S$ then
4: $T_{0}\gets 2T_{0},S\gets S + T_{0} - 1$
5: Initialize a new AdaTINF instance (Algorithm 3) with parameters $K$ and $T_0 - 1$ .
6: end if
7: Run the current AdaTINF instance for one time slot, play the action it selects, and feed it the observed loss $\ell_{t,i_t}$
8: end for
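The restart schedule produced by Lines 3–6 can be sketched as follows. This is our own illustration: AdaTINF itself is not reproduced, and the sketch only lists the (start round, horizon) pairs of the instances that would be created; the restart test fires exactly once the previous instance's horizon is exhausted.

```python
def doubling_schedule(T):
    """List the (start round, horizon) pairs of the AdaTINF instances that
    Algorithm 4 would create up to round T: whenever the current instance is
    exhausted, double T0 and start a fresh instance with horizon T0 - 1, so
    the super-epochs have lengths 2^1 - 1, 2^2 - 1, 2^3 - 1, ..."""
    T0, S = 1, 0
    spans = []
    for t in range(1, T + 1):
        if t > S:                  # current instance exhausted: restart
            T0 = 2 * T0
            S = S + T0 - 1         # last round served by the new instance
            spans.append((t, T0 - 1))
    return spans
```

For example, `doubling_schedule(11)` yields `[(1, 1), (2, 3), (5, 7)]`: super-epochs of lengths $1, 3, 7$ exactly covering rounds $1$ through $11$.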

Theorem D.1 (Regret Guarantee of Algorithm 4). Under the same assumptions as Theorem 6.1 (i.e., Assumptions 3.1 and 3.6 hold), $\text{Ada}^2\text{TINF}$ (Algorithm 4) ensures

RT≀600ΟƒK1βˆ’1/Ξ±(T+1)1/Ξ±. \mathcal {R} _ {T} \leq 6 0 0 \sigma K ^ {1 - 1 / \alpha} (T + 1) ^ {1 / \alpha}.

Proof. We divide the time horizon $T$ into super-epochs of lengths $T_0 - 1 = 2^1 - 1, 2^2 - 1, 2^3 - 1, \dots$. Since the whole process restarts at the beginning of each super-epoch, each of them can be regarded as an independent execution of AdaTINF. Therefore, by Theorem 6.1, for a super-epoch running from $t_0$ to $t_0 + T_0 - 2$, we have

E[βˆ‘t=t0t0+T0βˆ’2⟨xtβˆ’eiβˆ—,ΞΌt⟩]=RT0βˆ’1≀300ΟƒK1βˆ’1/Ξ±T01/Ξ±. \mathbb {E} \left[ \sum_ {t = t _ {0}} ^ {t _ {0} + T _ {0} - 2} \langle x _ {t} - \mathbf {e} _ {i ^ {*}}, \mu_ {t} \rangle \right] = \mathcal {R} _ {T _ {0} - 1} \leq 3 0 0 \sigma K ^ {1 - 1 / \alpha} T _ {0} ^ {1 / \alpha}.

Therefore, the total regret is bounded by

RTβ‰€βˆ‘T0=21βˆ’1,22βˆ’1,…,2⌈log⁑2(T+1)βŒ‰βˆ’1300ΟƒK1βˆ’1/Ξ±(T0+1)1/α≀600ΟƒK1βˆ’1/Ξ±T1/Ξ±, \mathcal{R}_{T}\leq \sum_{\substack{T_{0} = 2^{1} - 1,2^{2} - 1,\dots ,2^{\lceil \log_{2}(T + 1)\rceil} - 1}}300\sigma K^{1 - 1 / \alpha}(T_{0} + 1)^{1 / \alpha}\leq 600\sigma K^{1 - 1 / \alpha}T^{1 / \alpha},

as desired.

E. Auxiliary Lemmas

E.1. Probability Lemmas

Lemma E.1. For a non-negative random variable $X$ whose $\alpha$ -th moment exists and a constant $c > 0$ , we have

Pr⁑{Xβ‰₯c}≀E[XΞ±]cΞ± \Pr \{X \geq c \} \leq \frac {\mathbb {E} \left[ X ^ {\alpha} \right]}{c ^ {\alpha}}

Proof. As both $X$ and $c$ are non-negative, $\Pr\{X \geq c\} = \Pr\{X^{\alpha} \geq c^{\alpha}\} \leq \frac{\mathbb{E}[X^{\alpha}]}{c^{\alpha}}$ by Markov's inequality.

Lemma E.2. For a random variable $Y$ with $q$ -th moment $\mathbb{E}[|Y|^q]$ bounded by $\sigma^q$ (where $q \in [1, 2]$ ), its $p$ -th moment $\mathbb{E}[|Y|^p]$ is also bounded by $\sigma^p$ if $1 \leq p \leq q$ .

Proof. As the function $f\colon x \mapsto x^{\alpha}$ is convex for any $\alpha \geq 1$ , by Jensen's inequality, we have $f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]$ for any random variable $X$ . Hence, by picking $X = |Y|^{p}$ and $\alpha = \frac{q}{p}$ , we have $(\mathbb{E}[|Y|^{p}])^{q/p} \leq \mathbb{E}[(|Y|^{p})^{q/p}] = \mathbb{E}[|Y|^{q}] \leq \sigma^{q}$ , so $\mathbb{E}[|Y|^{p}] \leq \sigma^{p}$ for any $1 \leq p \leq q$ .

Lemma E.3. For $n$ independent random variables $X_{1}, X_{2}, \dots, X_{n}$ , each with $\alpha$ -th moment ( $1 < \alpha \leq 2$ ) bounded by $\sigma^{\alpha}$ , i.e., $\mathbb{E}_{x_i \sim X_i} [|x_i|^{\alpha}] \leq \sigma^{\alpha}$ for all $1 \leq i \leq n$ , we have

E⁑x1∼X1,x2∼X2,…,xn∼Xn[max⁑1≀i≀n∣xi∣]≀σn1/Ξ±. \operatorname *{\mathbb{E}}_{x_{1}\sim X_{1},x_{2}\sim X_{2},\dots ,x_{n}\sim X_{n}}\left[\max_{1\leq i\leq n}|x_{i}|\right]\leq \sigma n^{1 / \alpha}.

Proof. By Jensen's inequality, we have (here, $\mathbf{x}\sim \mathbf{X}$ denotes $x_{1}\sim X_{1},x_{2}\sim X_{2},\dots ,x_{n}\sim X_{n})$

([max⁑1≀i≀n∣xi∣])α≀[(max⁑1≀i≀n∣xi∣)Ξ±]=[max⁑1≀i≀n∣xi∣α]≀[βˆ‘i=1n∣xi∣α]=βˆ‘i=1n[∣xi∣α]≀nσα, \left(\underset {\mathbf {x} \sim \mathbf {X}} {\mathbb {E}} \left[ \max _ {1 \leq i \leq n} | x _ {i} | \right]\right) ^ {\alpha} \leq \underset {\mathbf {x} \sim \mathbf {X}} {\mathbb {E}} \left[ \left(\max _ {1 \leq i \leq n} | x _ {i} |\right) ^ {\alpha} \right] = \underset {\mathbf {x} \sim \mathbf {X}} {\mathbb {E}} \left[ \max _ {1 \leq i \leq n} | x _ {i} | ^ {\alpha} \right] \leq \underset {\mathbf {x} \sim \mathbf {X}} {\mathbb {E}} \left[ \sum_ {i = 1} ^ {n} | x _ {i} | ^ {\alpha} \right] = \sum_ {i = 1} ^ {n} \underset {x _ {i} \sim X _ {i}} {\mathbb {E}} [ | x _ {i} | ^ {\alpha} ] \leq n \sigma^ {\alpha},

which gives $\mathbb{E}_{\mathbf{x}\sim \mathbf{X}}\left[\max_{1\leq i\leq n}|x_i|\right]\leq \sigma n^{1 / \alpha}$.
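A quick Monte-Carlo sanity check of Lemma E.3 (our own, with illustrative parameters): for a Pareto distribution with scale $x_m=1$ and shape $a$, one has $\mathbb{E}[X^\alpha] = a/(a-\alpha)$ whenever $a > \alpha$, which pins down the corresponding $\sigma$.

```python
import random

random.seed(0)
a, alpha, n = 1.8, 1.5, 100               # Pareto shape a > alpha, so the alpha-th moment exists
sigma = (a / (a - alpha)) ** (1 / alpha)  # E[X^alpha] = a / (a - alpha) = sigma^alpha
trials = 5000
# Monte-Carlo estimate of E[max_i |x_i|] for n i.i.d. heavy-tailed samples
est = sum(max(random.paretovariate(a) for _ in range(n))
          for _ in range(trials)) / trials
bound = sigma * n ** (1 / alpha)          # Lemma E.3's bound sigma * n^(1/alpha)
assert est <= bound
```

Here the empirical mean of the maximum sits well below the bound, as expected since the lemma's union-style argument is loose for i.i.d. samples.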

E.2. Arithmetic Lemmas

Lemma E.4. For any $x \in \triangle_{[K]}$ (i.e., $\sum_{i=1}^{K} x_i = 1$ ), we have

βˆ‘i=1Kxit≀K1βˆ’t \sum_ {i = 1} ^ {K} x _ {i} ^ {t} \leq K ^ {1 - t}

for $\frac{1}{2} \leq t < 1$ .

Proof. By Hölder's inequality $\|fg\|_1 \leq \|f\|_p \|g\|_q$ with $p = \frac{1}{t}$ and $q = \frac{1}{1 - t}$, we have $\sum_{i = 1}^{K}x_{i}^{t}\leq \left(\sum_{i = 1}^{K}(x_{i}^{t})^{1 / t}\right)^{t}\left(\sum_{i = 1}^{K}1^{q}\right)^{1 / q} = K^{1 - t}$.
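A direct numeric check of Lemma E.4 (ours, with an arbitrary point on the simplex):

```python
# For x on the simplex and 1/2 <= t < 1, Lemma E.4 gives sum_i x_i^t <= K^(1 - t).
x = [0.7, 0.2, 0.06, 0.04]
K = len(x)
for t in (0.5, 0.75, 0.99):
    assert sum(xi ** t for xi in x) <= K ** (1 - t) + 1e-12
```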

Lemma E.5. For any positive integer $n$ , we have

βˆ‘i=1n1i≀ln⁑(n+1). \sum_ {i = 1} ^ {n} \frac {1}{i} \leq \ln (n + 1).

Moreover, for any $-1 < t < 0$ , we have

βˆ‘i=1nit≀(n+1)t+1t+1. \sum_ {i = 1} ^ {n} i ^ {t} \leq \frac {(n + 1) ^ {t + 1}}{t + 1}.

Proof. If $t = -1$, we have $\sum_{i=1}^{n} i^t \leq \int_{1}^{n+1} \frac{\mathrm{d}x}{x} = \ln(n+1)$. If $-1 < t < 0$, we have $\sum_{i=1}^{n} i^t \leq \int_{0}^{n+1} x^t \,\mathrm{d}x = \frac{(n+1)^{t+1}}{t+1}$.
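The second bound of Lemma E.5 can likewise be spot-checked numerically (our own check):

```python
# For -1 < t < 0, Lemma E.5 gives sum_{i=1}^n i^t <= (n + 1)^(t + 1) / (t + 1).
for n in (1, 5, 50, 1000):
    for t in (-0.9, -0.5, -0.1):
        lhs = sum(i ** t for i in range(1, n + 1))
        rhs = (n + 1) ** (t + 1) / (t + 1)
        assert lhs <= rhs, (n, t, lhs, rhs)
```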

Lemma E.6. For any $x \geq 1$ and $q \in (0,1)$ , we have

xqβˆ’(xβˆ’1)q≀q(xβˆ’1)qβˆ’1. x ^ {q} - (x - 1) ^ {q} \leq q (x - 1) ^ {q - 1}.

Proof. Consider the function $f\colon x \mapsto x^q$. We have $f''(x) = q(q - 1)x^{q - 2} \leq 0$ for $x > 0$ and $q \in (0,1)$, so $f$ is concave on $(0, \infty)$. Therefore, by the tangent-line property of concave functions, we have $f(x) \leq f(x - 1) + f'(x - 1)(x - (x - 1)) = f(x - 1) + q(x - 1)^{q - 1}$ for any $x \geq 1$ and $q \in (0,1)$, which gives $x^q - (x - 1)^q \leq q(x - 1)^{q - 1}$.

E.3. Lemmas on the FTRL Framework for MAB Algorithm Design

Lemma E.7. For any algorithm that plays action $i_t \sim x_t$, where $\{x_t\}_{t=1}^T$ can be regarded as a stochastic process adapted to the natural filtration $\{\mathcal{F}_t\}_{t=0}^T$, its regret in a stochastically constrained adversarial environment with unique best arm $i^* \in [K]$ is lower-bounded by

RTβ‰₯βˆ‘t∈[T]βˆ‘iβ‰ iβˆ—Ξ”iE[xt,i∣Ftβˆ’1]. \mathcal {R} _ {T} \geq \sum_ {t \in [ T ]} \sum_ {i \neq i ^ {*}} \Delta_ {i} \mathbb {E} \left[ x _ {t, i} \mid \mathcal {F} _ {t - 1} \right].

Proof. By definition of $\mathcal{R}_T$ and $\Delta_i$ , we have

RT=E[βˆ‘t=1T⟨xtβˆ’eiβˆ—,ΞΌt⟩]=E[βˆ‘t=1Tβˆ‘iβ‰ iβˆ—xt,iΞΌt,iβˆ’(1βˆ’xt,iβˆ—)ΞΌt,iβˆ—]=βˆ‘t=1TE[βˆ‘iβ‰ iβˆ—xt,i(ΞΌt,iβˆ’ΞΌt,iβˆ—)], \mathcal {R} _ {T} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \langle x _ {t} - \mathbf {e} _ {i ^ {*}}, \mu_ {t} \rangle \right] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} x _ {t, i} \mu_ {t, i} - (1 - x _ {t, i ^ {*}}) \mu_ {t, i ^ {*}} \right] = \sum_ {t = 1} ^ {T} \mathbb {E} \left[ \sum_ {i \neq i ^ {*}} x _ {t, i} (\mu_ {t, i} - \mu_ {t, i ^ {*}}) \right],

which is exactly $\sum_{t\in [T]}\sum_{i\neq i^*}\Delta_i\mathbb{E}[x_{t,i}\mid \mathcal{F}_{t - 1}]$

Lemma E.8 (Property of Weighted Importance Sampling Estimator). For any distribution $x \in \triangle_{[K]}$ and loss vector $\ell \in \mathbb{R}^K$ sampled from a distribution $\nu \in \triangle_{\mathbb{R}^K}$, if we pull an arm $i$ according to $x$, then the weighted importance sampler $\tilde{\ell}(j) \triangleq \frac{\ell(j)}{x_j} \mathbb{1}[i = j]$ gives an unbiased estimate of $\mathbb{E}[\ell]$, i.e.,

[β„“~(j)]=E[β„“(j)],βˆ€1≀j≀K. \underset {i \sim x} {\mathbb {E}} \left[ \tilde {\ell} (j) \right] = \mathbb {E} [ \ell (j) ], \quad \forall 1 \leq j \leq K.

Proof. As the adversary is oblivious (or even stochastic),

[β„“~(j)]=βˆ‘iβ€²=1KPr⁑{i=iβ€²}E[β„“(j)]xj1[iβ€²=j]=xjβ‹…E[β„“(j)]xj=E[β„“(j)], \underset {i \sim x} {\mathbb {E}} \left[ \tilde {\ell} (j) \right] = \sum_ {i ^ {\prime} = 1} ^ {K} \Pr \{ i = i ^ {\prime} \} \cdot \frac {\mathbb {E} [ \ell (j) ]}{x _ {j}} \mathbb {1} [ i ^ {\prime} = j ] = x _ {j} \cdot \frac {\mathbb {E} [ \ell (j) ]}{x _ {j}} = \mathbb {E} [ \ell (j) ],

for any $1 \leq j \leq K$ .
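Unbiasedness here can be verified exactly by enumerating the draw of $i \sim x$, with no sampling involved (a self-contained check with made-up numbers):

```python
# E_{i~x}[ ell(j)/x_j * 1[i = j] ] = x_j * ell(j)/x_j = ell(j) for every coordinate j.
x = [0.5, 0.3, 0.2]        # sampling distribution over K = 3 arms
ell = [1.7, -0.4, 3.0]     # an arbitrary (here deterministic) loss vector
for j in range(len(x)):
    mean = sum(x[i] * (ell[j] / x[j]) * (i == j) for i in range(len(x)))
    assert abs(mean - ell[j]) < 1e-12
```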

Lemma E.9 (FTRL Regret Decomposition). For any FTRL algorithm, i.e., one whose action $x_{t}$ for any $t \in [T]$ is decided by $\operatorname{argmin}_{x \in \triangle_{[K]}} (\eta_t \sum_{1 \leq s < t} \langle \hat{\ell}_s, x \rangle + \Psi(x))$, where $\eta_t$ is the learning rate, $\hat{\ell}_s$ is an arbitrary vector, and $\Psi(x)$ is a convex regularizer, we have

βˆ‘t=1T⟨xtβˆ’y,β„“^tβŸ©β‰€βˆ‘t=1T(Ξ·tβˆ’1βˆ’Ξ·tβˆ’1βˆ’1)(Ξ¨(y)βˆ’Ξ¨(xt))+βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt) \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \hat {\ell} _ {t} \rangle \leq \sum_ {t = 1} ^ {T} \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) (\Psi (y) - \Psi (x _ {t})) + \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t})

for any $y \in \triangle_{[K]}$ , where $z_t \triangleq \nabla \Psi^*(\nabla \Psi(x_t) - \eta_t \hat{\ell}_t)$ .

Proof. Let $\hat{L}_t \triangleq \sum_{s=1}^t \hat{\ell}_s$; we then have

βˆ‘t=1T⟨xtβˆ’y,β„“^t⟩=βˆ‘t=1Tβˆ’Ξ·tβˆ’1⟨xt,βˆ’Ξ·tβ„“^t⟩+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1[Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t)βˆ’Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’βŸ¨xt,βˆ’Ξ·tβ„“^t⟩]+βˆ‘t=1T[Ξ·tβˆ’1Ξ¨Λ‰βˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’Ξ·tβˆ’1Ξ¨Λ‰βˆ—(βˆ’Ξ·tL^t)]+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1DΞ¨β€Ύβˆ—(βˆ’Ξ·tL^t,βˆ’Ξ·tL^tβˆ’1)+βˆ‘t=1T[Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t)]+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,βˆ‡Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t))+βˆ‘t=1T[Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t)]+⟨y,βˆ’L^TβŸ©βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,βˆ‡Ξ¨βˆ—(βˆ’Ξ·tL^t))+βˆ‘t=1T[Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t)]+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1T[Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^tβˆ’1)βˆ’Ξ·tβˆ’1Ξ¨β€Ύβˆ—(βˆ’Ξ·tL^t)]+⟨y,βˆ’L^TβŸ©βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1Tβˆ’1[⟨xt,βˆ’L^tβˆ’1βŸ©βˆ’Ξ·tβˆ’1Ξ¨(xt)βˆ’sup⁑xβˆˆΞ”[K]{⟨x,βˆ’L^tβŸ©βˆ’Ξ·tβˆ’1Ξ¨(x)}]+⟨xT,βˆ’L^Tβˆ’1βŸ©βˆ’Ξ·Tβˆ’1Ξ¨(xT)βˆ’sup⁑xβˆˆΞ”[K]{⟨x,βˆ’L^TβŸ©βˆ’Ξ·Tβˆ’1Ξ¨(x)}+⟨y,βˆ’L^TβŸ©β‰€βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1Tβˆ’1[⟨xt,βˆ’L^tβˆ’1βŸ©βˆ’Ξ·tβˆ’1Ξ¨(xt)βˆ’βŸ¨xt+1,βˆ’L^t⟩+Ξ·tβˆ’1Ξ¨(xt+1)]+⟨xT,βˆ’L^Tβˆ’1βŸ©βˆ’Ξ·Tβˆ’1Ξ¨(xT)βˆ’sup⁑xβˆˆΞ”[K]{⟨x,βˆ’L^TβŸ©βˆ’Ξ·Tβˆ’1Ξ¨(x)}+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1T(Ξ·tβˆ’1βˆ’1βˆ’Ξ·tβˆ’1)Ξ¨(xt)βˆ’sup⁑xβˆˆΞ”[K]{⟨x,βˆ’L^TβŸ©βˆ’Ξ·Tβˆ’1Ξ¨(x)}+⟨y,βˆ’L^T⟩=βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1T(Ξ·tβˆ’1βˆ’1βˆ’Ξ·tβˆ’1)Ξ¨(xt)βˆ’sup⁑xβˆˆΞ”[K]{⟨x,βˆ’L^TβŸ©βˆ’Ξ·Tβˆ’1Ξ¨(x)}+⟨y,βˆ’L^TβŸ©βˆ’Ξ·Tβˆ’1Ξ¨(y)+Ξ·Tβˆ’1Ξ¨(y)β‰€βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1T(Ξ·tβˆ’1βˆ’1βˆ’Ξ·tβˆ’1)Ξ¨(xt)+Ξ·Tβˆ’1Ξ¨(y)=βˆ‘t=1TΞ·tβˆ’1DΞ¨(xt,zt)+βˆ‘t=1T(Ξ·tβˆ’1βˆ’Ξ·tβˆ’1βˆ’1)(Ξ¨(y)βˆ’Ξ¨(xt)) \begin{array}{l} \sum_ {t = 1} ^ {T} \langle x _ {t} - y, \hat {\ell} _ {t} \rangle = \sum_ {t = 1} ^ {T} - \eta_ {t} ^ {- 1} \langle x _ {t}, - \eta_ {t} \hat {\ell} _ {t} \rangle + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} \left[ \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t}) - \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \langle x _ {t}, - \eta_ {t} \hat {\ell} _ {t} \rangle \right] \\ + \sum_ {t = 1} 
^ {T} \left[ \eta_ {t} ^ {- 1} \bar {\Psi} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \eta_ {t} ^ {- 1} \bar {\Psi} ^ {*} (- \eta_ {t} \hat {L} _ {t}) \right] + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\overline {{\Psi}} ^ {*}} (- \eta_ {t} \hat {L} _ {t}, - \eta_ {t} \hat {L} _ {t - 1}) + \sum_ {t = 1} ^ {T} \left[ \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t}) \right] + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, \nabla \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t})\right) + \sum_ {t = 1} ^ {T} \left[ \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t}) \right] + \langle y, - \hat {L} _ {T} \rangle \\ \stackrel {\text {(a)}} {\leq} \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, \nabla \Psi^ {*} (- \eta_ {t} \hat {L} _ {t})\right) + \sum_ {t = 1} ^ {T} \left[ \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t}) \right] + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t}) + \sum_ {t = 1} ^ {T} \left[ \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t - 1}) - \eta_ {t} ^ {- 1} \overline {{\Psi}} ^ {*} (- \eta_ {t} \hat {L} _ {t}) \right] + \langle y, - \hat {L} _ {T} \rangle \\ \stackrel {\text {(b)}} {=} \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) + \sum_ {t = 1} ^ {T - 1} \left[ \left\langle x _ {t}, - \hat {L} _ {t - 1} \right\rangle - \eta_ {t} ^ {- 1} \Psi (x _ {t}) - \sup _ {x \in \Delta_ {[ K ]}} \left\{\left\langle x, - \hat {L} _ {t} \right\rangle - \eta_ {t} ^ {- 1} \Psi (x) \right\} \right] \\ + \left\langle x _ {T}, - \hat {L} _ 
{T - 1} \right\rangle - \eta_ {T} ^ {- 1} \Psi (x _ {T}) - \sup _ {x \in \Delta_ {[ K ]}} \left\{\left\langle x, - \hat {L} _ {T} \right\rangle - \eta_ {T} ^ {- 1} \Psi (x) \right\} + \left\langle y, - \hat {L} _ {T} \right\rangle \\ \leq \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t}) + \sum_ {t = 1} ^ {T - 1} \left[ \langle x _ {t}, - \hat {L} _ {t - 1} \rangle - \eta_ {t} ^ {- 1} \Psi (x _ {t}) - \langle x _ {t + 1}, - \hat {L} _ {t} \rangle + \eta_ {t} ^ {- 1} \Psi (x _ {t + 1}) \right] \\ + \langle x _ {T}, - \hat {L} _ {T - 1} \rangle - \eta_ {T} ^ {- 1} \Psi (x _ {T}) - \sup _ {x \in \Delta_ {[ K ]}} \left\{\langle x, - \hat {L} _ {T} \rangle - \eta_ {T} ^ {- 1} \Psi (x) \right\} + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t}) + \sum_ {t = 1} ^ {T} (\eta_ {t - 1} ^ {- 1} - \eta_ {t} ^ {- 1}) \Psi (x _ {t}) - \sup _ {x \in \Delta_ {[ K ]}} \left\{\langle x, - \hat {L} _ {T} \rangle - \eta_ {T} ^ {- 1} \Psi (x) \right\} + \langle y, - \hat {L} _ {T} \rangle \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) + \sum_ {t = 1} ^ {T} \left(\eta_ {t - 1} ^ {- 1} - \eta_ {t} ^ {- 1}\right) \Psi \left(x _ {t}\right) \\ - \sup _ {x \in \Delta_ {[ K ]}} \left\{\langle x, - \hat {L} _ {T} \rangle - \eta_ {T} ^ {- 1} \Psi (x) \right\} + \langle y, - \hat {L} _ {T} \rangle - \eta_ {T} ^ {- 1} \Psi (y) + \eta_ {T} ^ {- 1} \Psi (y) \\ \leq \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} (x _ {t}, z _ {t}) + \sum_ {t = 1} ^ {T} (\eta_ {t - 1} ^ {- 1} - \eta_ {t} ^ {- 1}) \Psi (x _ {t}) + \eta_ {T} ^ {- 1} \Psi (y) \\ = \sum_ {t = 1} ^ {T} \eta_ {t} ^ {- 1} D _ {\Psi} \left(x _ {t}, z _ {t}\right) + \sum_ {t = 1} ^ {T} \left(\eta_ {t} ^ {- 1} - \eta_ {t - 1} ^ {- 1}\right) \left(\Psi (y) - \Psi \left(x _ {t}\right)\right) \\ \end{array}

where step (a) is due to the Pythagorean property of Bregman divergences, and in step (b) we plugged in the definition of $\overline{\Psi}^*$ in terms of $\Psi$.
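Lemma E.9 holds for any convex regularizer $\Psi$. As a concrete illustration (our own sketch, not the paper's implementation), the following computes one FTRL iterate $x_t = \operatorname{argmin}_{x\in\triangle_{[K]}}(\langle \hat{L}_{t-1}, x\rangle + \eta_t^{-1}\Psi(x))$ for the $\frac{1}{2}$-Tsallis regularizer $\Psi(x) = -2\sum_i\sqrt{x_i}$, a standard choice in this line of work; the bisection solver for the normalizing multiplier is illustrative.

```python
def tsallis_ftrl_step(L_hat, eta):
    """One FTRL iterate with Psi(x) = -2 * sum(sqrt(x_i)).
    Stationarity of <L_hat, x> + Psi(x)/eta on the simplex gives
    x_i = (eta * (L_hat[i] + lam))**-2, with the multiplier lam chosen
    (by bisection) so that the coordinates sum to one."""
    total = lambda lam: sum((eta * (l + lam)) ** -2 for l in L_hat)
    lo = -min(L_hat) + 1e-12   # need L_hat[i] + lam > 0 for every arm
    hi = lo + 1.0
    while total(hi) > 1.0:     # grow the bracket until the mass drops below 1
        hi = lo + 2.0 * (hi - lo)
    for _ in range(200):       # bisection: total(.) is decreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > 1.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return [(eta * (l + lam)) ** -2 for l in L_hat]
```

With all-zero cumulative losses the iterate is uniform, and arms with larger cumulative losses receive smaller probabilities, as one expects from the regularized leader.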