Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits
Jiatai Huang1 Yan Dai1 Longbo Huang
Abstract
In this paper, we generalize the concept of heavy-tailed multi-armed bandits to adversarial environments, and develop robust best-of-both-worlds algorithms for heavy-tailed multi-armed bandits (MAB), where losses have $\alpha$ -th $(1 < \alpha \leq 2)$ moments bounded by $\sigma^{\alpha}$ , while the variances may not exist. Specifically, we design an algorithm HTINF: when the heavy-tail parameters $\alpha$ and $\sigma$ are known to the agent, HTINF simultaneously achieves the optimal regret for both stochastic and adversarial environments, without knowing the actual environment type a-priori. When $\alpha, \sigma$ are unknown, HTINF achieves a $\log T$ -style instance-dependent regret in stochastic cases and an $o(T)$ no-regret guarantee in adversarial cases. We further develop an algorithm AdaTINF, achieving an $\mathcal{O}(\sigma K^{1 - 1 / \alpha} T^{1 / \alpha})$ minimax optimal regret even in adversarial settings, without prior knowledge of $\alpha$ and $\sigma$ . This result matches the known regret lower-bound (Bubeck et al., 2013), which assumed a stochastic environment and that $\alpha$ and $\sigma$ are both known. To our knowledge, the proposed HTINF algorithm is the first to enjoy a best-of-both-worlds regret guarantee, and AdaTINF is the first algorithm that can adapt to both $\alpha$ and $\sigma$ to achieve the optimal gap-independent regret bound in the classical heavy-tailed stochastic MAB setting and our novel adversarial formulation.
1. Introduction
In this paper, we focus on the multi-armed bandit problem with heavy-tailed losses. Specifically, in our setting, there is an agent facing $K$ feasible actions (called bandit arms) to
*Equal contribution ${}^{1}$ Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China. Correspondence to: Longbo Huang longbohuang@tsinghua.edu.cn.
Proceedings of the $39^{th}$ International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
sequentially make decisions on. For each time step $t\in [T]$ , each arm $i\in [K]$ is associated with a loss distribution $\nu_{t,i}$ which is unknown to the agent. The only constraint on $\nu_{t,i}$ is that its $\alpha$ -th moment $(\alpha \in (1,2])$ is bounded by some constant $\sigma^{\alpha}$ , i.e., $\mathbb{E}_{\ell \sim \nu_{t,i}}[|\ell |^{\alpha}]\leq \sigma^{\alpha}$ for all $t\in [T]$ and $i\in [K]$ . However, neither $\alpha$ nor $\sigma$ is known to the agent.
At each step $t$ , the agent picks an arm $i_t$ and observes a loss $\ell_{t,i_t}$ drawn from the distribution $\nu_{t,i_t}$ , independently of all previous steps. The goal of the agent is to minimize the pseudo-regret, defined as the expected difference between the loss it suffers and the loss of always pulling the best arm in hindsight (formally defined in Definition 3.2), where the expectation is taken with respect to the randomness in both the algorithm and the environment.
Prior MAB literature mostly studies settings where the loss distributions are supported on a bounded interval $I$ (e.g., $I = [0,1]$ ) known to the agent before-hand, which is a special case of our setting where all $\nu_{t,i}$ 's are Dirac measures centered within $I$ (Seldin & Slivkins, 2014; Zimmert & Seldin, 2019). By contrast, there is another common existing MAB setting called scale-free MAB (De Rooij et al., 2014; Orabona & PΓ‘l, 2018), where the range of losses is not known. In this case, the loss range itself can even depend on other scale parameters of the problem instance (e.g., $T$ and $K$ ) rather than being a constant. Our heavy-tailed setting can be seen as an intermediate setting between bounded-loss MAB and scale-free MAB, where loss feedback can be indefinitely large, but not in a completely arbitrary manner. This setting naturally extends classical MAB settings, including bounded-loss MAB and sub-Gaussian-loss MAB.
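To make the boundary between these settings concrete, the following sketch (illustrative, not from the paper) samples Pareto-distributed losses whose tail index lies in $(1,2)$ : the empirical $\alpha$ -th moment stays bounded for small enough $\alpha$ , while the empirical second moment blows up, so no bounded-range or sub-Gaussian assumption applies.

```python
import random
import statistics

# Illustrative sketch (not from the paper): a Pareto variable with tail
# index beta = 1.5 has finite alpha-th moments for every alpha < 1.5,
# namely E[|X|^alpha] = beta / (beta - alpha), but an infinite second
# moment, so sample variances never stabilize.
def pareto_samples(beta, n, seed=0):
    rng = random.Random(seed)
    return [rng.paretovariate(beta) for _ in range(n)]

losses = pareto_samples(beta=1.5, n=100_000)

alpha = 1.2  # alpha < beta, so the alpha-th moment is finite
moment_alpha = statistics.fmean(x ** alpha for x in losses)
moment_2 = statistics.fmean(x ** 2 for x in losses)  # diverges with n

print(f"empirical {alpha}-th moment: {moment_alpha:.2f}")
print(f"empirical 2nd moment: {moment_2:.2f}")
```

The second-moment estimate is dominated by the largest sample and keeps growing as $n$ increases, which is exactly the regime the heavy-tailed formulation is meant to capture.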
Following the convention of prior MAB literature, we further distinguish between two typical types of environments. Environments of the first type consist of time-homogeneous distributions, i.e., for each $i\in [K]$ , $\nu_{t,i} = \nu_{1,i}$ holds for all $t\in [T]$ . We call them stochastic environments. Bubeck et al. (2013) proved that, for heavy-tailed stochastic bandits, even when $\alpha$ and $\sigma$ are both known to the agent, an $\Omega (\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ instance-independent regret and an $\Omega (\sigma^{\frac{\alpha}{\alpha - 1}}\sum_{i\neq i^{*}}\log T\, \Delta_{i}^{-\frac{1}{\alpha - 1}})$ instance-dependent regret are unavoidable, where $i^*$ denotes the optimal arm in hindsight, and $\Delta_i \triangleq \mathbb{E}_{\ell \sim \nu_{1,i}}[\ell] - \mathbb{E}_{\ell \sim \nu_{1,i^*}}[\ell]$ is the sub-optimality gap between $i$ and $i^*$ . They also designed an algorithm that matches these lower-bounds up to logarithmic factors when both $\alpha$ and $\sigma$ are known.
In the second type of environments, loss distributions can be time inhomogeneous, and we call them adversarial environments. To our knowledge, no previous work studied similar adversarial heavy-tailed MAB problems. It can be seen that the instance-independent lower-bound $\Omega (\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ for stochastic heavy-tailed MAB proved by Bubeck et al. (2013) is also a lower-bound for this adversarial extension.
In this paper, we develop algorithms for heavy-tailed bandits in both stochastic and adversarial cases. In contrast to existing (stochastic) heavy-tailed MAB algorithms (Bubeck et al., 2013; Lee et al., 2020) that heavily use well-designed mean estimators for heavy-tailed distributions, our algorithms are mainly designed based on the Follow-the-Regularized-Leader (FTRL) framework, which has been applied in a number of adversarial MAB works (Zimmert & Seldin, 2019; Seldin & Lugosi, 2017). Our proposed algorithms enjoy optimal or near-optimal regret guarantees and require much less prior knowledge compared to prior works. When $\sigma, \alpha$ are known before-hand, our algorithm matches existing gap-dependent and gap-independent regret lower-bounds, while previous algorithms suffer extra log-factors (see Table 1 for a comparison). Finally, we propose an algorithm with $\mathcal{O}(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ regret even when $\sigma, \alpha$ are both unknown, which shows the existing $\Omega(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ lower-bound is tight even when all prior knowledge on $\sigma, \alpha$ is absent.
1.1. Our Contributions
We first introduce a novel adversarial MAB setting where losses are heavy-tailed, which generalizes the existing heavy-tailed stochastic MAB setting and scalar-loss adversarial MAB setting. Three novel algorithms are proposed. HTINF enjoys an optimal best-of-both-worlds regret guarantee when $\alpha, \sigma$ are known. Without the knowledge of $\alpha, \sigma$ , OptTINF guarantees $o(T)$ adversarial regret (a.k.a. "no-regret guarantee") and $\mathcal{O}(\log T)$ gap-dependent bound for stochastically constrained environments. AdaTINF guarantees minimax optimal $\mathcal{O}(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha})$ adversarial regret.
1.1.1. KNOWN $\alpha ,\sigma$ CASE
When $\alpha, \sigma$ are both known to the agent, we provide a novel algorithm called Heavy-Tail Tsallis-INF (HTINF, Algorithm 1), based on the Follow-the-Regularized-Leader (FTRL) framework. In HTINF, we introduce a novel skipping technique equipped with an action-dependent skipping threshold ( $r_t$ in Algorithm 1) to handle the heavy-tailed losses, which can be of independent interest.
HTINF enjoys the so-called best-of-both-worlds property (Bubeck & Slivkins, 2012) to achieve $\mathcal{O}\left(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha}\right)$ regret in adversarial settings and $\mathcal{O}\left(\sigma^{\frac{\alpha}{\alpha - 1}}\sum_{i\neq i^*}\Delta_i^{-\frac{1}{\alpha - 1}}\log T\right)$ regret in stochastically constrained adversarial settings (which contain stochastic cases; see Section 3 for the definition) simultaneously, without knowing the actual environment type a-priori. The claimed regret bounds both match the corresponding lower-bounds by Bubeck et al. (2013), showing that these bounds are indeed tight even for our adversarial setting.
1.1.2. UNKNOWN $\alpha, \sigma$ CASE
When the agent does not have access to $\alpha$ and $\sigma$ , running HTINF optimistically with $\alpha = 2$ and $\sigma = 1$ (named OptTINF; Algorithm 2) also gives non-trivial regret guarantees. Specifically, we show that it enjoys a near-optimal regret of $\mathcal{O}\left(\sum_{i \neq i^*} \left( \frac{\sigma^{2\alpha}}{\Delta_i^{3 - \alpha}} \right)^{\frac{1}{\alpha - 1}} \log T\right)$ in stochastically constrained adversarial environments and $\mathcal{O}\left( \sigma^\alpha K^{\frac{\alpha - 1}{2}} T^{\frac{3 - \alpha}{2}} \right)$ regret in adversarial cases, which is still $o(T)$ .
We further present another novel algorithm called Adaptive Tsallis-INF (AdaTINF, Algorithm 3) for heavy-tailed bandits. Without knowing the heavy-tail parameters $\alpha$ and $\sigma$ before-hand, AdaTINF is capable of guaranteeing an $\mathcal{O}\left(\sigma K^{1 - 1 / \alpha}T^{1 / \alpha}\right)$ regret in the adversarial setting, matching the regret lower-bound from Bubeck et al. (2013).
To the best of our knowledge, all prior algorithms for MAB with heavy-tailed losses need to know $\alpha$ before-hand. The proposed two algorithms, OptTINF and AdaTINF, are the first algorithms to have the adaptivity for both unknown heavy-tail parameters $\alpha$ and $\sigma$ , while achieving near-optimal regrets in stochastic or adversarial settings.
1.2. Related Work
Heavy-tailed losses: The heavy-tailed (stochastic) bandit model was first introduced by Bubeck et al. (2013), where instance-dependent and instance-independent lower-bounds were given. They designed an algorithm nearly matching these lower-bounds (with an extra $\log T$ factor in the gap-independent regret), when $\sigma, \alpha$ are both known to the agent. Vakili et al. (2013) derived a tighter upper-bound with $\alpha$ , $\sigma$ and $\min_i \Delta_{i}$ all presented to the agent. Kagrecha et al. (2019) gave an algorithm adaptive to $\sigma$ in a pure exploration setting. Lee et al. (2020) got rid of the requirement of $\sigma$ , yielding near-optimal regret bounds with prior knowledge of $\alpha$ only. Moreover, all the above algorithms build on the UCB framework, which does not directly apply to adversarial environments. One can refer to Table 1 for a comparison.
Other variations with heavy-tailed losses are also studied in the literature, e.g., linear bandits (Medina & Yang, 2016;
Table 1. An overview of the proposed algorithms and related works.
${}^{a}$ As discussed in Section 3, the instance-independent lower bounds automatically apply to adversarial settings, and the main result of this paper shows that they are indeed tight even for adversarial settings. ${}^{b}$ Lee et al. (2020) regarded $\sigma$ as a constant when stating their regret bounds. By designing different estimators, they also gave various instance-dependent bounds, each with a (sub-optimal) ${\left( \log T\right) }^{\frac{\alpha }{\alpha - 1}}$ dependency on $T$ . One can check Table 1 in their paper for more details. ${}^{c}$ Abbreviation for stochastically constrained adversarial settings with a unique optimal arm. Though the time horizon $T$ is assumed to be known in Algorithm 3, it is in fact non-essential for AdaTINF. The removal of $T$ , via a usual doubling trick, will not cause extra factors. Check Appendix D for more discussions.
Xue et al., 2021), contextual bandits (Shao et al., 2018) and Lipschitz bandits (Lu et al., 2019). However, none of the above algorithms removes the dependency on $\alpha$ .
Best-of-both-worlds: This concept of designing a single algorithm to yield near-optimal regret in both stochastic and adversarial environments was first proposed by Bubeck & Slivkins (2012). Bubeck & Slivkins (2012); Auer & Chiang (2016); Besson & Kaufmann (2018) designed algorithms that initially run a policy for stochastic settings, and may permanently switch to a policy for adversarial settings during execution. Seldin & Slivkins (2014); Seldin & Lugosi (2017); Wei & Luo (2018); Zimmert & Seldin (2019) designed algorithms using the Online Mirror Descent (OMD) or Follow-the-Regularized-Leader (FTRL) framework. Our
We use $[N]$ to denote the integer set $\{1,2,\dots ,N\}$ . Let $f$ be any strictly convex function defined on a convex set $\Omega \subseteq \mathbb{R}^K$ . For $x,y\in \Omega$ , if $\nabla f(x)$ exists, we denote the Bregman divergence induced by $f$ as

$$D_f(y, x) \triangleq f(y) - f(x) - \langle \nabla f(x), y - x \rangle .$$
We use $f^{*}(y) \triangleq \sup_{x \in \mathbb{R}^{K}} \{\langle y, x \rangle - f(x)\}$ to denote the Fenchel conjugate of $f$ . Denote the $(K-1)$ -dimensional probability simplex by $\triangle_{[K]} = \{x \in \mathbb{R}_+^K \mid x_1 + x_2 + \dots + x_K = 1\}$ . We use $\mathbf{e}_i \in \triangle_{[K]}$ to denote the vector whose $i$ -th coordinate is 1 and others are 0.
Let $\overline{f}$ denote the restriction of $f$ to $\triangle_{[K]}$ , i.e., $\overline{f}(x) \triangleq f(x)$ if $x \in \triangle_{[K]}$ and $\overline{f}(x) \triangleq +\infty$ otherwise.
Let $\mathcal{E}$ be a random event. We use $\mathbb{1}[\mathcal{E}]$ to denote the indicator of $\mathcal{E}$ , which equals 1 if $\mathcal{E}$ happens and 0 otherwise.
3. Problem Setting
We now introduce our formulation of the heavy-tailed MAB problem. Formally speaking, there are $K \geq 2$ available arms indexed from 1 to $K$ , and $T \geq 1$ time slots for the agent to make decisions sequentially. $\{\nu_{t,i}\}_{t \in [T], i \in [K]}$ are $T \times K$ probability distributions over real numbers, which are fixed before the game starts and unknown to the agent (i.e., obliviously adversarially chosen). Instead of the usual assumption of bounded variance or even bounded range, we only assume that they are heavy-tailed, as follows.
Assumption 3.1 (Heavy-tailed Losses Assumption). The $\alpha$ -th moments of all loss distributions $\{\nu_{t,i}\}$ are bounded by $\sigma^{\alpha}$ for some constants $1 < \alpha \leq 2$ and $\sigma > 0$ , i.e.,

$$\mathbb{E}_{X \sim \nu_{t,i}}[|X|^{\alpha}] \leq \sigma^{\alpha}, \quad \forall t \in [T], i \in [K].$$
In this paper, we will discuss how to design algorithms for two different cases: $\alpha$ and $\sigma$ are known before-hand, or oblivious (i.e., fixed before-hand but unknown to the agent). We denote by $\mu_{t,i} \triangleq \mathbb{E}_{x \sim \nu_{t,i}}[x]$ the individual mean loss for each arm and by $\mu_t \triangleq (\mu_{t,1}, \mu_{t,2}, \ldots, \mu_{t,K})$ the mean loss vector at time $t$ , respectively.
At the beginning of each time slot $t$ , the agent needs to choose an action $i_t \in [K]$ . At the end of time slot $t$ , the agent will receive and suffer a loss $\ell_{t,i_t}$ , which is guaranteed to be an independent sample from the distribution $\nu_{t,i_t}$ . The agent is allowed to make the decision $i_t$ based on all history actions $i_1, \ldots, i_{t-1}$ , all history feedback $\ell_{1,i_1}, \ldots, \ell_{t-1,i_{t-1}}$ , and any amount of private randomness of the agent.
The objective of the agent is to minimize the total loss. Equivalently, the agent aims to minimize the following pseudo-regret defined by Bubeck & Slivkins (2012) (also referred to as the regret in this paper for simplicity). Define

$$\mathcal{R}_T \triangleq \max_{i \in [K]} \mathbb{E}\left[\sum_{t=1}^{T} \left(\ell_{t,i_t} - \ell_{t,i}\right)\right] \tag{1}$$

to be the pseudo-regret of an MAB algorithm, where the expectation is taken with respect to randomness from both the algorithm and the environment.
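As a quick numeric illustration of the pseudo-regret (a toy instance of ours, not from the paper), consider a uniformly random policy on a two-arm stochastic instance; its pseudo-regret can be computed in closed form from the mean losses:

```python
# Toy illustration (not from the paper): empirical pseudo-regret of a
# uniformly random policy on a 2-arm stochastic instance with means
# mu = (0.0, 0.5). The best fixed arm is arm 0, so the per-step excess
# loss of the uniform policy is (mu[0] + mu[1]) / 2 - mu[0] = 0.25.
mu = (0.0, 0.5)
T = 1000

expected_policy_loss = T * (mu[0] + mu[1]) / 2  # uniform over two arms
best_fixed_arm_loss = T * min(mu)
pseudo_regret = expected_policy_loss - best_fixed_arm_loss
print(pseudo_regret)  # 250.0
```

The linear-in-$T$ growth here is precisely what a no-regret algorithm avoids.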
In the remainder of this paper, we will use $\mathcal{F}_t \triangleq \sigma(i_1, \dots, i_t, \ell_{1,i_1}, \dots, \ell_{t,i_t})$ to denote the natural filtration of an MAB algorithm execution.
3.1. Stochastically Constrained Environments
Definition 3.3 (Stochastic Environments). If, for each arm $i \in [K]$ , all $T$ loss distributions $\nu_{1,i}, \nu_{2,i}, \ldots, \nu_{T,i}$ are identical, we call such an environment a stochastic environment.
A more general setting is called stochastically constrained adversarial setting (Wei & Luo, 2018), defined as follows.
Definition 3.4 (Stochastically Constrained Adversarial Environments). If there exist an optimal arm $i^{*} \in [K]$ and mean gaps $\Delta_{i} \geq 0$ such that for all $t \in [T]$ , we have $\mu_{t,i} - \mu_{t,i^{*}} \geq \Delta_{i}$ for all $i \neq i^{*}$ , we call such an environment a stochastically constrained adversarial environment.
It can be seen that stochastic problem instances are special cases of stochastically constrained adversarial instances. Hence, in this paper, we study this more general setting instead of stochastic cases. As in Zimmert & Seldin (2019), we make the following assumption.
Assumption 3.5 (Unique Optimal Arm Assumption). In stochastically constrained adversarial environments, $i^{*}$ is the unique best arm throughout the process, i.e.,
$$\Delta_i > 0, \quad \forall i \neq i^*.$$
Remark. The existence of a unique optimal arm is a common assumption in MAB and RL literature leveraging FTRL with Tsallis entropy regularizers (Zimmert & Seldin, 2019; Erez & Koren, 2021; Jin & Luo, 2020; Jin et al., 2021). Recently, Ito (2021) gave a new analysis of Tsallis-INF's logarithmic regret on stochastic MAB instances without this assumption. It is an interesting future work to figure out whether it is doable in our heavy-tailed losses setting.
3.2. Adversarial Environments
In contrast, an environment without any extra requirement is called an adversarial environment. We denote the best arm(s) in hindsight by $i^{*}$ , i.e., the $i \in [K]$ that maximizes the expectation in Eq. (1). We make the following assumption on the losses of arm $i^{*}$ .
Assumption 3.6 (Truncated Non-negative Losses Assumption). There exists an optimal arm $i^{*}$ such that $\ell_{t,i^{*}}$ is truncated non-negative for all $t \in [T]$ .
In the assumption, the truncated non-negative property is defined as follows.
Definition 3.7 (Truncated Non-negativity). A random variable $X$ is truncated non-negative, if for any $M \geq 0$ ,
$$\mathbb{E}\left[X \cdot \mathbb{1}[|X| > M]\right] \geq 0.$$
Remark. This truncated non-negativity requirement is strictly weaker than the common non-negative losses assumption in the MAB literature, especially in works fitting the FTRL framework (Auer et al., 2002; Zimmert & Seldin, 2019). Intuitively, truncated non-negativity forbids the random variable from holding too much mass on its negative part, but it can still have negative outcomes.
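The definition can be checked exactly for finite-support distributions. The sketch below (the example distributions are ours, purely illustrative) verifies truncated non-negativity by testing every threshold $M$ at which the truncated expectation $\mathbb{E}[X \cdot \mathbb{1}[|X| > M]]$ can change value:

```python
from fractions import Fraction

# Exact check of truncated non-negativity (Definition 3.7) for a
# finite-support random variable given as {outcome: probability}.
# The expectation E[X 1[|X|>M]] is piecewise constant in M, changing
# only at the distinct |outcome| values, so those (plus M = 0) suffice.
def is_truncated_nonnegative(dist):
    thresholds = sorted({abs(x) for x in dist}) + [Fraction(0)]
    return all(
        sum(p * x for x, p in dist.items() if abs(x) > m) >= 0
        for m in thresholds
    )

# Small negative mass: every truncated expectation stays >= 0.
ok = {Fraction(-1): Fraction(1, 10), Fraction(2): Fraction(9, 10)}
# Large negative outlier: E[X 1[|X|>5]] = -1 < 0, so the check fails.
bad = {Fraction(-10): Fraction(1, 10), Fraction(2): Fraction(9, 10)}

print(is_truncated_nonnegative(ok))   # True
print(is_truncated_nonnegative(bad))  # False
```

Note that `ok` does take a negative value, so it violates the usual non-negativity assumption while still satisfying Definition 3.7.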
4. Static Algorithm: HTINF
In this section, we first present an algorithm achieving optimal regrets when $\alpha, \sigma$ are both known before-hand, and then extend it to the unknown $\alpha, \sigma$ case.
4.1. Known $\alpha, \sigma$ Case
For the case where both $\alpha$ and $\sigma$ are known a-priori, we present an FTRL algorithm with the $\frac{1}{\alpha}$ -Tsallis entropy function $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$ (Tsallis, 1988; Abernethy et al., 2015; Zimmert & Seldin, 2019) as the regularizer. We pick the learning rate $\eta_t$ with $\eta_t^{-1} = \sigma t^{1/\alpha}$ . Importance sampling is used to construct estimates $\hat{\ell}_t$ of the true loss feedback vector $\ell_t$ .
In this algorithm, to handle the heavy-tailed losses, we design a novel skipping technique with an action-dependent threshold $r_t \propto \eta_t^{-1} x_{t,i_t}^{1 / \alpha}$ at time slot $t$ ; i.e., the agent simply discards those time slots in which the absolute value of the loss feedback exceeds $r_t$ . Note that this skipping criterion, despite its dependency on $i_t$ , is properly defined, for it is checked after deciding the arm $i_t$ and receiving the feedback. To decide $x_t$ , the probability of pulling each arm in a new time step, we pick the best mixed action $x$ against the sum of all non-skipped estimated losses $\hat{\ell}_t$ , in a regularized manner. The pseudo-code of the algorithm is presented in Algorithm 1.
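The round structure described above can be sketched as follows. This is our own minimal rendering, not the paper's implementation: `theta` stands in for the constant $\Theta_\alpha$ in $r_t$ (whose exact value we do not reproduce), and the Tsallis-FTRL step is solved by bisection on the Lagrange multiplier of the simplex constraint.

```python
import random

def tsallis_ftrl(cum_loss, eta, alpha):
    """x = argmin_{x in simplex} eta*<L, x> - alpha * sum_i x_i^(1/alpha).

    Stationarity gives x_i = (eta*L_i + mu)^(alpha/(1-alpha)) for a
    Lagrange multiplier mu, found by bisection so that sum_i x_i = 1.
    """
    p = alpha / (1.0 - alpha)          # negative exponent since alpha > 1
    lo = -eta * min(cum_loss) + 1e-12  # mu must keep every base positive
    step = 1.0
    while sum((eta * L + lo + step) ** p for L in cum_loss) > 1.0:
        step *= 2.0
    hi = lo + step
    for _ in range(100):               # the sum is decreasing in mu
        mid = (lo + hi) / 2.0
        if sum((eta * L + mid) ** p for L in cum_loss) > 1.0:
            lo = mid
        else:
            hi = mid
    x = [(eta * L + hi) ** p for L in cum_loss]
    s = sum(x)
    return [v / s for v in x]

def htinf_round(t, cum_loss, sigma, alpha, observe_loss, rng, theta=1.0):
    eta = 1.0 / (sigma * t ** (1.0 / alpha))  # eta_t^{-1} = sigma*t^{1/alpha}
    x = tsallis_ftrl(cum_loss, eta, alpha)
    i = rng.choices(range(len(x)), weights=x)[0]
    loss = observe_loss(i)
    r = theta * (1.0 / eta) * x[i] ** (1.0 / alpha)  # action-dependent threshold
    est = [0.0] * len(x)
    if abs(loss) <= r:                 # otherwise: skip the step entirely
        est[i] = loss / x[i]           # importance-weighted estimate
    return i, est

# Short demo run on heavy-tailed (Pareto) losses.
rng = random.Random(0)
cum = [0.0, 0.0, 0.0]
for t in range(1, 6):
    i, est = htinf_round(t, cum, sigma=1.0, alpha=1.8,
                         observe_loss=lambda arm: rng.paretovariate(2.5),
                         rng=rng)
    cum = [c + e for c, e in zip(cum, est)]
```

On skipped rounds the estimate is the zero vector, matching the description above; otherwise only the pulled arm's coordinate is non-zero.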
The performance of Algorithm 1 is presented in the following Theorem 4.1. The proof is sketched in Section 5. For a
Algorithm 1 Heavy-Tail Tsallis-INF (HTINF)
Input: Number of arms $K$ , heavy-tail parameters $\alpha$ and $\sigma$ Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$
1: for $t = 1,2,\cdots$ do
2: Calculate policy with learning rate $\eta_t^{-1} = \sigma t^{1 / \alpha}$ ; pick the regularizer $\Psi (x) = -\alpha \sum_{i = 1}^{K}x_i^{1 / \alpha}$ :
The $\mathcal{O}(\log T)$ instance-dependent bound in Theorem 4.1 is due to a property similar to the self-bounding property of the $1/2$ -Tsallis entropy (Zimmert & Seldin, 2019). For $\alpha < 2$ , such properties of the $1/\alpha$ -Tsallis entropy do not hold automatically; they are made possible by our novel skipping mechanism with an action-dependent threshold.
4.2. Extending to Unknown $\alpha, \sigma$ Case: OptTINF
The two hyper-parameters $\sigma, \alpha$ in Algorithm 1 are simply set to the true heavy-tail parameters of the loss distributions when they are known before-hand. When the distributions' heavy-tail parameters $\alpha, \sigma$ are both unknown to the agent, we can prove that by directly running HTINF with algorithm hyper-parameters $\alpha = 2$ and $\sigma = 1$ (not necessarily equal to the true $\alpha, \sigma$ values) "optimistically", as in Algorithm 2, one can still achieve $\mathcal{O}(\log T)$ regret in the stochastic case and sub-linear regret in the adversarial case.
Algorithm 2 Optimistic HTINF (OptTINF)
Input: Number of arms $K$
Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$
1: Run HTINF (Algorithm 1) with hyper-parameters $\alpha = 2$ and $\sigma = 1$ .
The performance of Algorithm 2 is described below. As the analysis is quite similar to that of Algorithm 1, we postpone the formal proof to Appendix B.
Theorem 4.2 (Performance of OptTINF). If Assumptions 3.1 and 3.6 hold, the following two statements are valid.
In adversarial cases, Algorithm 2 achieves
$$\mathcal{R}_T \leq \mathcal{O}\left(\sigma^{\alpha} K^{\frac{\alpha - 1}{2}} T^{\frac{3 - \alpha}{2}} + \sqrt{KT}\right).$$
In stochastically constrained adversarial environments with a unique optimal arm $i^{*}$ (Assumption 3.5), it ensures
For both cases, $\sigma$ and $\alpha$ in the regret bounds refer to the true heavy-tail parameters of the loss distributions.
Theorem 4.2 shows that when facing an instance with unknown $1 < \alpha < 2$ , Algorithm 2 still guarantees $\mathcal{O}(T^{\frac{3 - \alpha}{2}})$ "no-regret" performance and an $\mathcal{O}(\log T)$ instance-dependent regret upper-bound for stochastic instances.
5. Regret Analysis of HTINF
In this section, we sketch the analysis of Algorithm 1. By definition, we need to bound
for the one-hot vector $y \triangleq \mathbf{e}_{i^*}$ . For any $t \in [T], i \in [K]$ , let $\mu_{t,i}^{\prime} \triangleq \mathbb{E}[\ell_{t,i} \mathbb{1}[|\ell_{t,i}| \leq r_t] \mid \mathcal{F}_{t-1}, i_t = i]$ . For a given $y$ ,
where the last step is due to $\mathbb{E}[\hat{\ell}_t \mid \mathcal{F}_{t - 1}] = \mu_t^{\prime}$ . We call the first part the skipping gap, and the second, the FTRL error.
In the following sections, we will show that both parts can be controlled and transformed into expressions similar to the bounds with self-bounding properties in Zimmert & Seldin (2019), guaranteeing best-of-both-worlds-style regret upper-bounds. Therefore, the design of HTINF and our new analysis generalize the self-bounding property of Zimmert & Seldin (2019) from the $1/2$ -Tsallis entropy regularizer to general $1/\alpha$ -Tsallis entropy regularizers with $1 < \alpha \leq 2$ , i.e., Tsallis exponents in $[1/2, 1)$ .
5.1. To Control the Skipping Gap
To control the skipping gap part, notice that for all $t \in [T], i \in [K]$ , we can bound
where $\Theta_{\alpha}$ is a factor in $r_t$ that depends only on $\alpha$ , as defined in Line 4 of Algorithm 1. Moreover, by Assumption 3.6, $\mu_{t,i^*} - \mu_{t,i^*}' \geq 0$ a.s. Summing over $i$ and $t$ gives
For the FTRL error part, we follow the standard analysis for FTRL algorithms. Note that our skipping mechanism is equivalent to plugging in $\hat{\ell}_t = 0$ for every skipped time step $t$ in an FTRL framework for MAB algorithms. Therefore, due to the definition that $\mathbb{E}[\hat{\ell}_t] = \mu_t'$ , we can leverage most standard techniques in the regret analysis of an FTRL algorithm and obtain the following lemma.
In Lemma 5.1, $z_{t}$ is an intermediate action probability-like measure vector (which does not necessarily sum up to 1) during the FTRL algorithm. Here we leverage a trick of drifting the loss vectors (Wei & Luo, 2018): $\hat{\ell}_{t}^{\prime} \triangleq \hat{\ell}_{t} - \ell_{t,i_{t}} \mathbf{1}$ . Intuitively, one can see that feeding $\hat{\ell}_{t}^{\prime}$ into an FTRL framework produces exactly the same action sequence as $\hat{\ell}_{t}$ .
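This invariance can be sanity-checked numerically (a brute-force sketch of ours over a discretized two-arm simplex; the losses, $\eta$ , and $\alpha$ values are arbitrary): subtracting a constant from every coordinate of the loss vector shifts the FTRL objective by a constant on the simplex, so the argmin is unchanged.

```python
# On the probability simplex, <L - c*1, x> = <L, x> - c, so drifting
# every coordinate of the loss vector by the same constant c changes
# the FTRL objective by -eta*c everywhere and cannot move its argmin.
def ftrl_argmin_2arm(L, eta, alpha, grid=10_000):
    best, best_val = None, float("inf")
    for k in range(1, grid):
        x = (k / grid, 1 - k / grid)
        val = eta * (L[0] * x[0] + L[1] * x[1]) - alpha * (
            x[0] ** (1 / alpha) + x[1] ** (1 / alpha)
        )
        if val < best_val:
            best, best_val = x, val
    return best

a = ftrl_argmin_2arm([3.0, 1.0], eta=0.4, alpha=1.5)
b = ftrl_argmin_2arm([3.0 - 7.0, 1.0 - 7.0], eta=0.4, alpha=1.5)
print(a == b)  # True: the drifted losses give the same grid point
```

The drifted losses are useful in the analysis precisely because they can have better moment properties while leaving the played actions untouched.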
We then divide this upper-bound in Lemma 5.1 into two parts, parts (A) and (B), and analyze them separately.
5.2.1. BOUND FOR PART (A)
As $y$ is a one-hot vector, we have $\Psi(y) = -\alpha$ for $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$ . Hence, each summand in part (A) becomes
In order to derive the claimed regret upper-bounds in Theorem 4.1, it suffices to plug in the bounds for the terms in Eq. (3) and Lemma 5.1.
Adversarial Case (Statement 1 in Theorem 4.1): To obtain an instance-independent bound for the expected total pseudo-regret $\mathcal{R}_T$ , we can plug inequalities (5), (7) and (9) into Eq. (3) to obtain
$$\mathcal{R}_T \leq 30\sigma K^{1 - 1/\alpha}(T+1)^{1/\alpha}.$$
Stochastically Constrained Adversarial Case (Statement 2 in Theorem 4.1): To obtain an instance-dependent bound for $\mathcal{R}_T$ , we leverage the arm-pulling probability $\{x_t\}$ -dependent bounds (6) and (8) for the FTRL part of $\mathcal{R}_T$ . After plugging them together with (4) into (3), we see that
6. Adaptive Algorithm: AdaTINF
In this section, our main goal is to achieve minimax optimal regret bounds for adversarial settings, without any knowledge about $\alpha, \sigma$ . Instead of estimating $\alpha$ and $\sigma$ explicitly, which can be challenging, our key idea is to leverage a trade-off relationship between Part (A) and Part (B) in the FTRL error part (defined in Lemma 5.1), to balance the two parts dynamically.
To achieve a balance, we use a doubling trick to tune the learning rates and skipping thresholds, which has been adopted in the literature to design adaptive algorithms (see, e.g., Wei & Luo (2018)). The formal procedure of AdaTINF is given in Algorithm 3, with the crucial differences from Algorithm 1 highlighted in blue text.
It can be seen as HTINF equipped with a multiplier $\lambda_t$ applied to both learning rates and skipping thresholds and maintained at running time, as $\eta_t^{-1} = \lambda_t \sqrt{t}$ and $r_t = \lambda_t (1 - 2^{-1/3}) \sqrt{t} \sqrt{x_{t,i_t}}$ , where $\lambda_{t}$ is the doubling magnitude for the $t$ -th time slot.
Algorithm 3 Adaptive Tsallis-INF (AdaTINF)
Input: Number of arms $K$ , time horizon $T$
Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$
1: Initialize $J \gets 0$ , $S_0 \gets 0$
2: for $t = 1, 2, \dots$ do
3: $\lambda_t \gets 2^J$
4: Calculate policy with learning rate $\eta_t^{-1} = \lambda_t \sqrt{t}$ and regularizer $\Psi(x) = -2 \sum_{i=1}^{K} x_i^{1/2}$ : $x_t \gets \operatorname{argmin}_{x \in \triangle_{[K]}} \left( \eta_t \sum_{s=1}^{t-1} \langle \hat{\ell}_s, x \rangle + \Psi(x) \right)$
5: Decide action $i_t \sim x_t$ ; calculate $r_t \gets \lambda_t (1 - 2^{-1/3}) \sqrt{t} \sqrt{x_{t,i_t}}$ .
6: Play according to $i_t$ and observe loss feedback $\ell_{t,i_t}$ .
7: if $|\ell_{t,i_t}| > r_t$ then
8: $\hat{\ell}_t \gets 0$
9: $c_t \gets \ell_{t,i_t}$
10: else
11: Construct weighted importance sampling loss estimator $\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{x_{t,i}} \mathbb{1}[i = i_t], \forall i \in [K]$ .
12: $c_t \gets 2\eta_t x_{t,i_t}^{-1/2} \ell_{t,i_t}^2$
13: end if
14: $S_J \gets S_J + c_t$
15: if $2^J \sqrt{K(T+1)} < S_J$ then
16: $J \gets \max \{J+1, \lfloor\log_2(c_t / \sqrt{K(T+1)})\rfloor + 1\}$
17: $S_J \gets c_t$
18: end if
19: end for
We briefly explain our design. Suppose, initially, all $\lambda_{t}$ 's are set to the same number $\lambda > 1$ instead of 1. Then, part (A) will become approximately $\lambda$ times bigger than that under HTINF, while the expected value of part (B) will be scaled by a factor $\lambda^{1 - \alpha} < 1$ . In other words, increasing $\lambda$ enlarges part (A) but makes part (B) smaller. Therefore, if we can estimate parts (A) and (B), we can keep them at roughly the same magnitude by doubling $\lambda$ whenever (A) becomes smaller than (B).
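The resulting bookkeeping (Lines 14-18 of Algorithm 3) can be sketched as follows. This is a minimal re-implementation under our own naming; in particular, the floor in the jump rule is our reading of the pseudo-code's bracket.

```python
import math

# Sketch of AdaTINF's doubling-balance bookkeeping: accumulate the
# per-step cost c_t into S_J, and whenever S_J exceeds its budget
# 2^J * sqrt(K(T+1)), jump J far enough that the new budget would
# cover c_t on its own (Lines 14-18 of Algorithm 3).
class DoublingBalancer:
    def __init__(self, K, T):
        self.budget_unit = math.sqrt(K * (T + 1))
        self.J = 0
        self.S = 0.0

    def multiplier(self):
        return 2.0 ** self.J  # lambda_t for the current step

    def update(self, c_t):
        self.S += c_t
        if 2.0 ** self.J * self.budget_unit < self.S:
            # Our reading of Line 16: jump so that 2^J > c_t / budget_unit.
            jump = (math.floor(math.log2(c_t / self.budget_unit)) + 1
                    if c_t > 0 else self.J + 1)
            self.J = max(self.J + 1, jump)
            self.S = c_t  # the new epoch starts charged with c_t
```

A single huge cost thus triggers a multi-level jump rather than a sequence of doublings, which keeps the number of epochs logarithmic in the largest observed cost.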
As Eq. (4) and (5) are similar to Eq. (8) and (9), the skipping gap can be treated similarly to (B). Therefore, we also take it into consideration in the doubling-balance mechanism. Because the future-dependent Eq. (6) is hard to estimate, we use the looser Eq. (7) to represent part (A). This stops Algorithm 3 from enjoying an $\mathcal{O}(\log T)$ -style gap-dependent regret. However, it can still guarantee a minimax optimal regret in the general case, as described in Theorem 6.1.
Theorem 6.1 (Performance of AdaTINF). If Assumptions 3.1 and 3.6 hold, Algorithm 3 ensures a regret of
$$\mathcal{R}_T \leq \mathcal{O}\left(\sigma K^{1 - 1/\alpha} T^{1/\alpha} + \sqrt{KT}\right),$$
which is minimax optimal.
The proof is sketched in Section 7, while the formal version is deferred to Appendix C.
Remark. Though $T$ is assumed to be known in Algorithm 3, the assumption can be removed via another doubling trick without affecting the order of the total regret. Check Appendix D for more details.
7. Analysis of AdaTINF
Since the crucial learning rate multiplier $\lambda_{t}$ is maintained by an adaptive doubling trick, in the analysis we will group time slots with equal $\lambda_{t}$ 's into epochs. For $j \geq 0$ , $\mathcal{T}_j \triangleq \{t \in [T] \mid \lambda_t = 2^j\}$ is the set of time slots belonging to epoch $j$ . Further denote the first step in epoch $j$ by $\gamma_j \triangleq \min \mathcal{T}_j$ and the last one by $\tau_j \triangleq \max \mathcal{T}_j$ . Without loss of generality, assume no doubling happened at slot $T$ ; then the final value of $J$ in Algorithm 3 is just the index of the last non-empty epoch.
According to the condition to enter a new epoch (Line 15 in Algorithm 3), for all $0 \leq j < J$ , if $\mathcal{T}_j$ is non-empty, $\tau_j$ will cause $S_j > 2^j \sqrt{K(T + 1)}$ . Hence, we have the following conditions:
where $\mu_{t,i}^{\prime}\triangleq \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t]\mid \mathcal{F}_{t - 1},i_t = i]$ . We still call $\mathcal{R}_T^s$ the skipping gap and $\mathcal{R}_T^f$ the FTRL error.
According to Lemma 5.1, we have
$$\mathcal{R}_T^f \leq \underbrace{\mathbb{E}\left[\eta_T^{-1} \max_{x \in \triangle_{[K]}} \left(-\Psi(x)\right)\right]}_{\text{Part (A)}} + \underbrace{\mathbb{E}\left[\sum_{t=1}^{T} \eta_t^{-1} D_{\Psi}(x_t, z_t)\right]}_{\text{Part (B)}} \leq \mathbb{E}\left[2^J\right]\sqrt{K(T+1)} + \mathbb{E}\left[\sum_{t=1}^{T} \eta_t^{-1} D_{\Psi}(x_t, z_t)\right]. \quad (15)$$
Similar to Algorithm 1, we can show $D_{\Psi}(x_t, z_t) \leq 2\eta_{t} x_{t,i_{t}}^{-1/2} \ell_{t,i_{t}}^{2} \mathbb{1}[|\ell_{t,i_{t}}| \leq r_{t}]$ for all $t \in [T]$ . Moreover, by Assumption 3.6, $\mathcal{R}_T^s \leq \mathbb{E}[\sum_{t=1}^T \ell_{t,i_t} \mathbb{1}[|\ell_{t,i_t}| > r_t]]$ . Therefore, with the help of Eq. (11) and (13), we have
$$\mathcal{R}_T^s + \mathbb{E}[\text{Part (B)}] \leq \mathbb{E}\left[\sum_{t=1}^{T} c_t\right] \leq \mathbb{E}\left[2^{J+1}\sqrt{K(T+1)}\right]. \quad (16)$$
Combining Eq. (14), (15) and (16) gives
$$\mathcal{R}_T \leq \mathbb{E}[2^J] \cdot 3\sqrt{K(T+1)},$$
Therefore, it remains to bound $\mathbb{E}[2^J]$ . When $J = 0$ , there is nothing to do. Otherwise, consider the second-to-last non-empty epoch, $\mathcal{T}_{J'}$ . The condition to enter a new epoch also guarantees that $2^J \sqrt{K(T + 1)} \leq 2^{J' + 1} \sqrt{K(T + 1)} + 4c_{\tau_{J'}}$ . Applying Eq. (12) to $J' < J$ , we obtain
After appropriately relaxing the RHS of Eq. (17) and taking expectations of both sides, it solves to the following upper-bound for $\mathbb{E}[\mathbb{1}[J\geq 1]2^{J^{\prime}}]$ :
Using the fact that $\mathbb{E}[\max_{t\in [T]}\lvert\ell_{t,i_t}\rvert]\leq \sigma T^{1 / \alpha}$ (Lemma E.3), we conclude that Algorithm 3 has the regret guarantee of
8. Conclusion
We propose HTINF, a novel algorithm achieving the optimal instance-dependent regret bound for the stochastic heavy-tailed MAB problem and the optimal instance-independent regret bound for a more general adversarial setting, without extra logarithmic factors. We also propose AdaTINF, which achieves the same optimal instance-independent regret even when prior knowledge of the heavy-tail parameters $\alpha, \sigma$ is absent. Our work shows that the FTRL (or OMD) technique can be a powerful tool for designing heavy-tailed MAB algorithms, leading to novel theoretical results that have not been achieved by UCB algorithms.
It is an interesting direction for future work to determine whether it is possible to design a best-of-both-worlds algorithm without knowing the actual heavy-tail distribution parameters $\alpha$ and $\sigma$.
Acknowledgment
This work is supported by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403.
Supplementary Materials: Proofs and Discussions
A Formal Analysis of HTINF (Algorithm 1)
A.1 Main Theorem
A.2 Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)
A.3 Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)
B Formal Analysis of OptTINF (Algorithm 2)
B.1 Main Theorem
B.2 Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)
B.3 Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)
C Formal Analysis of AdaTINF (Algorithm 3)
C.1 Main Theorem
C.2 Proof when Reducing $\mathcal{R}_T$ to $\mathbb{E}[2^J]$
C.3 Proof when Bounding $\mathbb{E}[2^J]$
D Removing Dependency on Time Horizon $T$ in Algorithm 3
E Auxiliary Lemmas
E.1 Probability Lemmas
E.2 Arithmetic Lemmas
E.3 Lemmas on the FTRL Framework for MAB Algorithm Design
A. Formal Analysis of HTINF (Algorithm 1)
A.1. Main Theorem
In this section, we present a formal proof of Theorem 4.1. For the sake of precision, we state the regret guarantees without big-O notation, as follows (this directly implies Theorem 4.1).
Theorem A.1 (Regret Guarantee of Algorithm 1). Suppose Assumptions 3.1 and 3.6 hold, i.e., the environment is heavy-tailed with parameters $\alpha$ and $\sigma$, and there is an optimal arm all of whose losses are non-negative after truncation. Then Algorithm 1 guarantees:
The regret is no more than
$$\mathcal{R}_T \leq 30\sigma K^{1-1/\alpha}(T+1)^{1/\alpha},$$
regardless of whether the environment is stochastic or adversarial.
Furthermore, if the environment is stochastically constrained with a unique best arm $i^{*}$, i.e., Assumption 3.5 holds, then Algorithm 1 additionally enjoys a regret bound of
Proof. Define $\mu_{t,i}^{\prime}\triangleq \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t] \mid \mathcal{F}_{t-1}, i_t = i]$. For the given $y = \mathbf{e}_{i^{*}}\in \triangle_{[K]}$, consider the regret of the algorithm with respect to policy $y$, defined and decomposed as
which we call the skipped part and the FTRL part, respectively. For simplicity, we omit the argument $(y)$ for $\mathcal{R}_T^s$ and $\mathcal{R}_T^f$.
As defined in Algorithm 1, $\hat{\ell}_t$ is set to 0 when $|\ell_{t,i_t}| > r_t$. Hence, by the property of the weighted importance sampling estimator (Lemma E.8; note that it is applied to the truncated loss with mean $\mu_{t,i}^{\prime}$), $\mathbb{E}[\hat{\ell}_{t,i} \mid \mathcal{F}_{t-1}] = \mu_{t,i}^{\prime}$.
where step (a) is due to $\Theta_{\alpha}^{1 - \alpha}\leq \Theta_2^{-1}\leq 5$ and (b) applies Lemma E.4 and Lemma E.5.
Now consider the second term, $\mathcal{R}_T^f$. Consider the vector $\hat{\ell}_t^{\prime} \triangleq \mathbb{1}[|\ell_{t,i_t}| \leq r_t](\hat{\ell}_t - \ell_{t,i_t}\mathbf{1})$. Note that $\langle \hat{\ell}_t^{\prime}, x\rangle = \langle \hat{\ell}_t, x\rangle - \mathbb{1}[|\ell_{t,i_t}| \leq r_t]\ell_{t,i_t}$ for any vector $x \in \triangle_{[K]}$, so an FTRL algorithm fed with loss vectors $\hat{\ell}_t^{\prime}$ will produce exactly the same action sequence as an instance fed with $\hat{\ell}_t$ (constant shifts never affect the argmin over the simplex). Therefore, we can apply Lemma E.9 with loss vectors $\hat{\ell}_t^{\prime}$, yielding
where the last step uses the inequality of arithmetic and geometric means $a^{1 / \alpha}b^{1 - 1 / \alpha} \leq \frac{1}{\alpha} a + \left(1 - \frac{1}{\alpha}\right)b$ . Therefore
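The weighted AM-GM step admits a quick numerical sanity check; the following Python sketch (our own illustration, over an arbitrary grid of values) verifies the inequality used above.

```python
# Numerical check of the weighted AM-GM inequality used above:
#   a^(1/alpha) * b^(1 - 1/alpha) <= a/alpha + (1 - 1/alpha) * b
# for non-negative a, b and alpha in (1, 2].
def weighted_am_gm_holds(a, b, alpha, tol=1e-12):
    lhs = a ** (1 / alpha) * b ** (1 - 1 / alpha)
    rhs = a / alpha + (1 - 1 / alpha) * b
    return lhs <= rhs + tol

grid = [0.0, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0]
alphas = [1.1, 1.5, 2.0]
assert all(weighted_am_gm_holds(a, b, al)
           for a in grid for b in grid for al in alphas)
print("weighted AM-GM verified on grid")
```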
Proof. If $|\ell_{t,i_t}| > r_t$, then $x_{t} = z_{t}$. Otherwise, denote $\nabla \Psi (x_{t})$ by $x_{t}^{*}$ and $\nabla \Psi (z_{t})$ by $z_{t}^{*}$; then we have $-x_{t,i}^{*} = x_{t,i}^{-\frac{\alpha - 1}{\alpha}}$ and
where (a) comes from Lemma E.6 and (b) comes from the fact that $t \geq 1$ and $\frac{1}{\alpha} - 1 \geq -\frac{1}{2}$ . Moreover, by definition of $\Psi(x) = -\alpha \sum_{i=1}^{K} x_i^{1/\alpha}$ , we have
Proof of Lemma A.5 (and also Lemma 5.2). Consider a summand before taking the expectation, i.e., $\eta_t^{-1}D_\Psi(x_t,z_t)$. Letting $f(x) = -\alpha x^{1/\alpha}$, we have
where the last step is due to the facts that $1 - x_{t,i^{*}} = \sum_{i\neq i^{*}}x_{t,i}\leq \sum_{i\neq i^{*}}x_{t,i}^{1/\alpha}$ and $1 - x_{t,i}\leq 1$ for any $i\neq i^{*}$. After applying Lemma E.4, we get
Proof. In Algorithm 2, when the parameters are set as $\alpha = 2$ and $\sigma = 1$ , we have $\eta_t^{-1} = \sqrt{t}$ and $r_t = \Theta_2\sqrt{t}\sqrt{x_{t,i_t}}$ where $\Theta_2 = 1 - 2^{-1/3}$ is an absolute constant. From now on, to avoid confusion, we use $\alpha, \sigma$ only to denote the real (hidden) parameters of the environment, instead of the parameters of the algorithm.
Following the proof of Theorem 4.1 in Appendix A, we still decompose $\mathcal{R}_T(y)$ for $y = \mathbf{e}_{i^*}$ into $\mathcal{R}_T^s$ and $\mathcal{R}_T^f$, as follows.
where $z_{t} \triangleq \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\mathbb{1}[|\ell_{t,i_{t}}| \leq r_{t}](\hat{\ell}_{t} - \ell_{t,i_{t}}\mathbf{1}))$ . For Part (A), from Lemma A.4, we have (recall that $\Psi$ is now $\frac{1}{2}$ -Tsallis entropy)
Lemma B.3. For part (A), for any one-hot vector $y \in \triangle_{[K]}$ , Algorithm 2 ensures
Moreover, for the stochastically constrained case with a unique best arm $i^{*} \in [K]$ , with the help of AM-GM inequality, we bound each of $\mathcal{R}_T^s$ , $\mathbb{E}[(\mathbf{A})]$ and $\mathbb{E}[(\mathbf{B})]$ by
Proof of Theorem 4.2. It is a direct consequence of the theorem above.
B.2. Proof when Bounding $\mathcal{R}_T^s$ (the skipped part)
Proof of Lemma B.2. For any $t \in [T]$ and $i \in [K]$, we can bound the difference between the loss mean, $\mu_{t,i}$, and the truncated loss mean, $\mu_{t,i}^{\prime}$, as
where the last step uses $\Theta_2^{1 - \alpha} \leq \Theta_2^{-1} \leq 5$ . It further gives, by Lemma E.4, that
$$\mathcal{R}_T^s \leq 5\sigma^{\alpha}\sum_{t=1}^{T} t^{1/2-\alpha/2} K^{\alpha/2-1/2}.$$
B.3. Proof when Bounding $\mathcal{R}_T^f$ (the FTRL part)
Proof of Lemma B.3. This is just a restatement of Lemma A.4.
Proof of Lemma B.4. We follow the proof of Lemma A.5, with some slight modifications (unlike in the previous lemma, we cannot simply set all $\alpha$'s to 2, as the second moment of $\ell_{t,i_t}$ may not exist). The first few steps are exactly the same, giving
Proof. As defined in the text, we group time slots with equal $\lambda_{t}$ 's into epochs, as
$$\mathcal{T}_j \triangleq \{t\in[T] \mid \lambda_t = 2^j\}, \quad j \geq 0.$$
For any non-empty $\mathcal{T}_j$ 's, denote the first and last time slot of $\mathcal{T}_j$ by
$$\gamma_j \triangleq \min\{t\in\mathcal{T}_j\}, \quad \tau_j \triangleq \max\{t\in\mathcal{T}_j\}.$$
Without loss of generality, assume that no doubling happens at time slot $T$; otherwise, one can always append a virtual time slot $t = T + 1$ with $\ell_{t,i} = 0$ for all $i$. Therefore, we have $\mathcal{T}_J \neq \emptyset$, where $J$ is the final value of the variable $J$ in the pseudocode.
We adopt the notation of $c_{t}$ as defined in Algorithm 3:
Similar to previous analysis, we define $\mu_{t,i}^{\prime} = \mathbb{E}[\ell_{t,i}\mathbb{1}[|\ell_{t,i}|\leq r_t] \mid \mathcal{F}_{t - 1},i_t = i]$ and decompose the regret $\mathcal{R}_T(y)$ as follows
Furthermore, due to the properties of the weighted importance sampling estimator (as in Appendix A, $\mathbb{E}[\hat{\ell}_{t,i}\mid \mathcal{F}_{t-1}] = \mu_{t,i}^{\prime}$), we have
where $z_{t} \triangleq \nabla \Psi^{*}(\nabla \Psi(x_{t}) - \eta_{t}\hat{\ell}_{t})$ . The first term is simply within $2^{J}\sqrt{KT}$ . For the second term, we have the following property similar to Lemma A.5:
Lemma C.3. Algorithm 3 guarantees that for any $t \in [T]$ ,
It remains to bound $\mathbb{E}[2^J]$. When $J \geq 1$, there are at least two non-empty epochs. Let $J'$ be the index of the second-to-last one. The doubling condition of Algorithm 3 reduces the task of bounding $2^J$ to bounding $2^{J'}$ and $c_{\tau_{J'}}$, as the following lemma states.
Lemma C.4. Algorithm 3 guarantees that, when $J \geq 1$ , we have
Proof of Theorem 6.1. It is a direct consequence of the theorem above.
C.2. Proof when Reducing $\mathcal{R}_T$ to $\mathbb{E}[2^J]$
Proof of Lemma C.2. It suffices to notice that in Algorithm 3, during a particular epoch $j$, when the doubling condition at Line 15 evaluates to true, the current value of the variable $S_{j}$ is $\mathbb{1}[\gamma_j > 1]c_{\gamma_j - 1} + \sum_{t\in \mathcal{T}_j}c_t$; thus
When $\gamma_{j} = 1$ (or equivalently, $j = 0$), Equation (27) holds automatically. Otherwise, Line 16 guarantees that $j \geq \lceil \log_2(c_{\gamma_j - 1} / \sqrt{K(T + 1)}) \rceil + 1$, hence $2^{j - 1}\sqrt{K(T + 1)} \geq c_{\gamma_j - 1}$. We then have
$$2^{j-1}\sqrt{K(T+1)} + \sum_{t\in\mathcal{T}_j}c_t > 2^j\sqrt{K(T+1)},$$
which also yields Equation (27).
When the doubling condition at Line 15 evaluates to false for the last time, the value of $S_{j}$ is $\mathbb{1}[\gamma_{j} > 1]c_{\gamma_{j} - 1} + \sum_{t\in \mathcal{T}_j\setminus\{\tau_j\}}c_{t}$. At this time we have $S_{j}\leq 2^{j}\sqrt{K(T + 1)}$, hence Equations (26) and (28) hold.
Proof of Lemma C.3. It is exactly the same calculation as in Lemma A.5; the only difference is that $\hat{\ell}_t$ does not come with a $-\ell_{t,i_t}$ drift.
C.3. Proof when Bounding $\mathbb{E}[2^{J}]$
Proof of Lemma C.4. According to Line 16 of Algorithm 3, $J, J'$ and $c_{\tau_{J'}}$ satisfy
We further upper-bound the RHS of (34) by enlarging the summation range to $[T]$. Specifically, let $\tilde{\eta}_t = 2^{-J'} t^{-1/2}$ and $\tilde{r}_t = 2^{J'} \Theta_2 \sqrt{t x_{t,i_t}}$. Define the summands by
D. Removing Dependency on Time Horizon $T$ in Algorithm 3
To remove the dependency on $T$, we leverage the following doubling trick, which is commonly used when $T$ is unknown (Auer et al., 1995; Besson & Kaufmann, 2018). This gives our More Adaptive AdaTINF algorithm, which we call AdaΒ²TINF.
Algorithm 4 More Adaptive AdaTINF (AdaΒ²TINF) Input: Number of arms $K$
Output: Sequence of actions $i_1, i_2, \dots, i_T \in [K]$
1: Initialize $T_0\gets 1, S\gets 0$
2: for $t = 1,2,\dots$ do
3: if $t \geq S$ then
4: $T_{0}\gets 2T_{0}$, $S\gets S + T_{0} - 1$
5: Initialize a new AdaTINF instance (Algorithm 3) with parameters $K$ and $T_0 - 1$
6: end if
7: Run the current AdaTINF instance for one time slot, play its action, and feed it the feedback $\ell_{t,i_t}$
8: end for
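The bookkeeping in Lines 1-6 can be traced in a few lines of Python. This is a sketch of the restart schedule only; the inner AdaTINF learner (Line 7) is omitted, as only the doubling logic is illustrated.

```python
# Sketch of the doubling trick in Algorithm 4 (Ada^2 TINF).
# We trace when fresh AdaTINF instances are spawned and with what
# horizon parameter T0 - 1; the learner itself is not simulated.
def doubling_schedule(T):
    """Return (restart_time, horizon) pairs for t = 1..T."""
    T0, S = 1, 0
    restarts = []
    for t in range(1, T + 1):
        if t >= S:                        # Line 3: doubling condition
            T0 = 2 * T0                   # Line 4: double the budget first,
            S = S + T0 - 1                # ...then advance the next restart time
            restarts.append((t, T0 - 1))  # Line 5: new instance, horizon T0 - 1
        # Line 7: the current instance would act and observe a loss here.
    return restarts

print(doubling_schedule(30))
# -> [(1, 1), (2, 3), (4, 7), (11, 15), (26, 31)]
```

Note that the sequential update order in Line 4 matters: $S$ is advanced using the already-doubled $T_0$.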
Theorem D.1 (Regret Guarantee of Algorithm 4). Under the same assumptions of Theorem 6.1, i.e., Assumptions 3.1 and 3.6 hold, $\text{Ada}^2\text{TINF}(\text{Algorithm 4})$ ensures
$$\mathcal{R}_T \leq 600\sigma K^{1-1/\alpha}(T+1)^{1/\alpha}.$$
Proof. We divide the time horizon $T$ into several super-epochs, with lengths $T_0 - 1 = 2^1 - 1, 2^2 - 1, 2^3 - 1, \dots$. As the whole process restarts in each super-epoch, we can regard each of them as an independent execution of AdaTINF. Therefore, by Theorem 6.1, for a super-epoch running from $t_0$ to $t_0 + T_0 - 2$, we have
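Since the super-epoch lengths grow geometrically, the per-epoch bounds of order $\sigma K^{1-1/\alpha}T_0^{1/\alpha}$ sum as a geometric series in $T_0^{1/\alpha}$. The sketch below checks this series bound numerically; the constant $2^{1/\alpha}/(2^{1/\alpha}-1)$ is our own illustration of the geometric-sum step, not the paper's final factor of 600.

```python
# Summing over super-epochs of lengths ~ 2^j gives a geometric series:
#   sum_{j=1}^{m} (2^j)^(1/alpha) <= C_alpha * (2^m)^(1/alpha),
# with C_alpha = 2^(1/alpha) / (2^(1/alpha) - 1), since the ratio of
# consecutive terms is 2^(1/alpha) > 1.
def geometric_sum_ok(m, alpha, tol=1e-9):
    s = sum((2 ** j) ** (1 / alpha) for j in range(1, m + 1))
    c = 2 ** (1 / alpha) / (2 ** (1 / alpha) - 1)
    return s <= c * (2 ** m) ** (1 / alpha) + tol

assert all(geometric_sum_ok(m, a)
           for m in range(1, 20) for a in (1.1, 1.5, 2.0))
print("geometric-series bound verified")
```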
Lemma E.1. For a non-negative random variable $X$ whose $\alpha$ -th moment exists and a constant $c > 0$ , we have
$$\Pr\{X\geq c\} \leq \frac{\mathbb{E}[X^{\alpha}]}{c^{\alpha}}.$$
Proof. As both $X$ and $c$ are non-negative, $\Pr\{X \geq c\} = \Pr\{X^{\alpha} \geq c^{\alpha}\} \leq \frac{\mathbb{E}[X^{\alpha}]}{c^{\alpha}}$ by Markov's inequality.
Lemma E.2. For a random variable $Y$ with $q$ -th moment $\mathbb{E}[|Y|^q]$ bounded by $\sigma^q$ (where $q \in [1, 2]$ ), its $p$ -th moment $\mathbb{E}[|Y|^p]$ is also bounded by $\sigma^p$ if $1 \leq p \leq q$ .
Proof. As the function $f\colon x \mapsto x^{\alpha}$ is convex for any $\alpha \geq 1$ , by Jensen's inequality, we have $f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]$ for any random variable $X$ . Hence, by picking $X = |Y|^{p}$ and $\alpha = \frac{q}{p}$ , we have $(\mathbb{E}[|Y|^{p}])^{q/p} \leq \mathbb{E}[(|Y|^{p})^{q/p}] = \mathbb{E}[|Y|^{q}] \leq \sigma^{q}$ , so $\mathbb{E}[|Y|^{p}] \leq \sigma^{p}$ for any $1 \leq p \leq q$ .
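Equivalently, Lemma E.2 is the monotonicity of power means. A quick numeric check on an arbitrary discrete distribution (values and probabilities are illustrative):

```python
# Check the ordering E[|Y|^p]^(1/p) <= E[|Y|^q]^(1/q) for p <= q,
# which is why E[|Y|^q] <= sigma^q implies E[|Y|^p] <= sigma^p.
values = [-2.0, -0.5, 0.1, 1.5, 4.0]
probs  = [0.1, 0.2, 0.3, 0.3, 0.1]

def abs_moment(p):
    return sum(pr * abs(v) ** p for v, pr in zip(values, probs))

ps = [1.0, 1.2, 1.5, 1.8, 2.0]
norms = [abs_moment(p) ** (1 / p) for p in ps]
assert all(norms[i] <= norms[i + 1] + 1e-12
           for i in range(len(norms) - 1))
print("moment ordering verified")
```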
Lemma E.3. For $n$ independent random variables $X_{1}, X_{2}, \dots, X_{n}$ , each with $\alpha$ -th moment ( $1 < \alpha \leq 2$ ) bounded by $\sigma^{\alpha}$ , i.e., $\mathbb{E}_{x_i \sim X_i} [|x_i|^{\alpha}] \leq \sigma^{\alpha}$ for all $1 \leq i \leq n$ , we have
which gives $\mathbb{E}_{\mathbf{x}\sim\mathbf{X}}\left[\max_{1\leq i\leq n}|x_i|\right]\leq \sigma n^{1/\alpha}$.
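A Monte-Carlo sanity check of this bound (our own illustration): we draw Pareto samples with shape parameter 3, for which the $\alpha$-th moment with $\alpha = 1.5$ is $3/(3 - 1.5) = 2$, i.e., $\sigma = 2^{1/\alpha}$.

```python
import random

# Monte-Carlo sanity check of E[max_i |X_i|] <= sigma * n^(1/alpha)
# (Lemma E.3), with i.i.d. Pareto(shape=3) samples and alpha = 1.5.
random.seed(0)
alpha, shape, n, trials = 1.5, 3.0, 100, 2000
sigma = 2.0 ** (1 / alpha)   # since E[X^alpha] = shape/(shape-alpha) = 2

mc = sum(max(random.paretovariate(shape) for _ in range(n))
         for _ in range(trials)) / trials
bound = sigma * n ** (1 / alpha)
assert mc <= bound
print(f"E[max] ~ {mc:.2f} <= bound {bound:.2f}")
```

The empirical mean of the maximum sits well below the bound, as expected: the lemma only uses a union-type argument and is loose for light-enough tails.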
E.2. Arithmetic Lemmas
Lemma E.4. For any $x \in \triangle_{[K]}$ (i.e., $\sum_{i=1}^{K} x_i = 1$ ), we have
$$\sum_{i=1}^{K} x_i^{t} \leq K^{1-t}$$
for $\frac{1}{2} \leq t < 1$ .
Proof. By HΓΆlder's inequality $\|fg\|_1 \leq \|f\|_p\|g\|_q$, we have $\sum_{i=1}^{K} x_i^{t} \leq \big(\sum_{i=1}^{K}(x_i^{t})^{1/t}\big)^{t}\big(\sum_{i=1}^{K}1^{q}\big)^{1/q} = K^{1-t}$ by picking $p = \frac{1}{t}$ and $q = \frac{1}{1-t}$.
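A quick numeric check of Lemma E.4 on random points of the simplex (the choice of $K$ and the grid of exponents are arbitrary):

```python
import random

# Check sum_i x_i^t <= K^(1-t) for x on the probability simplex
# and t in [1/2, 1), as in Lemma E.4.
random.seed(1)
K = 8
for _ in range(200):
    w = [random.random() for _ in range(K)]
    s = sum(w)
    x = [wi / s for wi in w]   # a random point of the simplex
    for t in (0.5, 0.6, 0.75, 0.9, 0.99):
        assert sum(xi ** t for xi in x) <= K ** (1 - t) + 1e-12
print("Lemma E.4 verified on random simplex points")
```

Equality is attained at the uniform distribution $x_i = 1/K$, which is a useful sanity check of tightness.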
Lemma E.5. For any positive integer $n$, we have
$$\sum_{i=1}^{n}\frac{1}{i}\leq 1+\ln n.$$
Moreover, for any $-1 < t < 0$, we have
$$\sum_{i=1}^{n}i^{t}\leq \frac{(n+1)^{t+1}}{t+1}.$$
Proof. For $t = -1$, since $i^{-1}\leq \int_{i-1}^{i}\frac{\mathrm{d}x}{x}$ for $i\geq 2$, we have $\sum_{i=1}^{n}i^{-1}\leq 1+\int_{1}^{n}\frac{\mathrm{d}x}{x} = 1+\ln n$. For $-1 < t < 0$, since $i^{t}\leq \int_{i-1}^{i}x^{t}\,\mathrm{d}x$, we have $\sum_{i=1}^{n}i^{t}\leq \int_{0}^{n}x^{t}\,\mathrm{d}x = \frac{n^{t+1}}{t+1}\leq \frac{(n+1)^{t+1}}{t+1}$.
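Both integral-comparison bounds are easy to check numerically; the sketch below uses the harmonic bound in the form $1 + \ln n$, which is the safe direction of the comparison (the $i = 1$ term is bounded separately by 1).

```python
import math

# Numeric check of the power-sum bound
#   sum_{i<=n} i^t <= (n+1)^(t+1) / (t+1)   for -1 < t < 0,
# and of the harmonic bound sum_{i<=n} 1/i <= 1 + ln n.
for n in (1, 2, 5, 10, 100, 1000):
    harmonic = sum(1 / i for i in range(1, n + 1))
    assert harmonic <= 1 + math.log(n) + 1e-12
    for t in (-0.9, -0.5, -0.1):
        s = sum(i ** t for i in range(1, n + 1))
        assert s <= (n + 1) ** (t + 1) / (t + 1) + 1e-12
print("Lemma E.5 bounds verified")
```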
Lemma E.6. For any $x \geq 1$ and $q \in (0,1)$ , we have
$$x^{q} - (x-1)^{q} \leq q(x-1)^{q-1}.$$
Proof. Consider the function $f\colon x \mapsto x^{q}$. We have $f''(x) = q(q-1)x^{q-2} \leq 0$ for $x > 0$ and $q \in (0,1)$, so $f$ is concave on $(0,\infty)$. Therefore, by the gradient inequality for concave functions, $f(x) \leq f(x-1) + f'(x-1)\big(x - (x-1)\big) = (x-1)^{q} + q(x-1)^{q-1}$ for any $x > 1$ and $q \in (0,1)$, which gives $x^{q} - (x-1)^{q} \leq q(x-1)^{q-1}$.
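A quick numeric check of Lemma E.6 on a grid of $x > 1$ and $q \in (0,1)$ (the grid values are arbitrary; $x = 1$ is excluded since the right-hand side degenerates there):

```python
# Check the concavity (gradient) inequality of Lemma E.6:
#   x^q - (x-1)^q <= q * (x-1)^(q-1)   for x > 1 and q in (0, 1).
xs = [1.001, 1.5, 2.0, 5.0, 100.0]
qs = [0.1, 0.5, 0.9]
for x in xs:
    for q in qs:
        lhs = x ** q - (x - 1) ** q
        rhs = q * (x - 1) ** (q - 1)
        assert lhs <= rhs + 1e-12
print("Lemma E.6 verified")
```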
E.3. Lemmas on the FTRL Framework for MAB Algorithm Design
Lemma E.7. For any algorithm that plays action $i_t \sim x_t$, where $\{x_t\}_{t=1}^{T}$ can be regarded as a stochastic process adapted to the natural filtration $\{\mathcal{F}_t\}_{t=0}^{T}$, its regret in a stochastically constrained adversarial environment with unique best arm $i^{*} \in [K]$ is lower-bounded by
which is exactly $\sum_{t\in [T]}\sum_{i\neq i^*}\Delta_i\mathbb{E}[x_{t,i}\mid \mathcal{F}_{t - 1}]$.
Lemma E.8 (Property of the Weighted Importance Sampling Estimator). For any distribution $x \in \triangle_{[K]}$ and loss vector $\ell \in \mathbb{R}^K$ sampled from a distribution $\nu \in \triangle_{\mathbb{R}^K}$, if we pull an arm $i$ according to $x$, then the weighted importance sampler $\tilde{\ell}(j) \triangleq \frac{\ell(j)}{x_j} \mathbb{1}[i = j]$ gives an unbiased estimate of $\mathbb{E}[\ell]$, i.e.,
$$\mathbb{E}_{i\sim x}\big[\tilde{\ell}(j)\big] = \mathbb{E}[\ell(j)], \quad \forall\, 1\leq j\leq K.$$
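For a fixed loss vector, unbiasedness can be verified exactly by summing over the $K$ outcomes of $i \sim x$; a minimal sketch with arbitrary $x$ and $\ell$:

```python
# Exact check of the importance-sampling estimator's unbiasedness:
#   E_{i~x}[ ell(j)/x_j * 1[i=j] ] = x_j * ell(j)/x_j = ell(j).
x   = [0.5, 0.3, 0.2]      # sampling distribution over K = 3 arms
ell = [1.0, -2.0, 0.7]     # a fixed loss vector

def estimator_mean(j):
    """Expectation of coordinate j of the estimator, over i ~ x."""
    return sum(x[i] * (ell[j] / x[j] if i == j else 0.0)
               for i in range(len(x)))

for j in range(len(x)):
    assert abs(estimator_mean(j) - ell[j]) < 1e-12
print("importance-sampling estimator is unbiased")
```

Averaging over the randomness of $\ell \sim \nu$ then gives the statement of the lemma.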
Proof. As the adversary is oblivious (or even stochastic),
Lemma E.9 (FTRL Regret Decomposition). For any FTRL algorithm, i.e., where the action $x_{t}$ for any $t \in [T]$ is decided by $\operatorname{argmin}_{x \in \triangle_{[K]}} \big(\eta_t \sum_{1 \leq s \leq t} \langle \hat{\ell}_s, x \rangle + \Psi(x)\big)$, where $\eta_t$ is the learning rate, $\hat{\ell}_s$ is an arbitrary vector, and $\Psi(x)$ is a convex regularizer, we have
where step (a) is due to the Pythagorean property of Bregman divergences, and in step (b) we plug in the definition of $\overline{\Psi}^{*}$ in terms of $\Psi$.