URL Source: https://arxiv.org/html/2510.00073


License: arXiv.org perpetual non-exclusive license. arXiv:2510.00073v1 [stat.ML] 29 Sep 2025

Identifying All $\varepsilon$-Best Arms in (Misspecified) Linear Bandits

Zhekai Li (Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, zhekaili@mit.edu), Tianyi Ma (School of Operations Research and Information Engineering, Cornell University, tm693@cornell.edu), Cheng Hua (Antai College of Economics and Management, Shanghai Jiao Tong University, cheng.hua@sjtu.edu.cn), Ruihao Zhu (SC Johnson College of Business, Cornell University, ruihao.zhu@cornell.edu)

Abstract: Motivated by the need to efficiently identify multiple candidates in tasks with high trial-and-error costs, such as drug discovery, we propose a near-optimal algorithm to identify all $\varepsilon$-best arms (i.e., those at most $\varepsilon$ worse than the optimum). Specifically, we introduce LinFACT, an algorithm designed to optimize the identification of all $\varepsilon$-best arms in linear bandits. We establish a novel information-theoretic lower bound on the sample complexity of this problem and demonstrate that LinFACT achieves instance optimality by matching this lower bound up to a logarithmic factor. A key ingredient of our proof is to integrate the lower bound directly into the scaling process for the upper bound derivation, determining the termination round and thus the sample complexity. We also extend our analysis to settings with model misspecification and generalized linear models. Numerical experiments, including synthetic and real drug discovery data, demonstrate that LinFACT identifies more promising candidates with reduced sample complexity, offering significant computational efficiency and accelerating early-stage exploratory experiments.

Keywords: ranking and selection; sequential decision making; simulation; adaptive experiment; model misspecification

1 Introduction

This paper addresses the problem of identifying the best set of options from a finite pool of candidates. A decision-maker sequentially selects candidates for evaluation, observing independent noisy rewards that reflect their quality. The goal is to strategically allocate measurement efforts to identify the desired candidates. This problem belongs to the class of pure exploration problems, which fall under the bandit framework but differ from traditional multi-armed bandits (MABs) that balance exploration and exploitation to minimize cumulative regret. Instead, pure exploration focuses on gathering information efficiently so that the chosen best option, or best set of options, can be applied downstream with confidence. This approach is particularly relevant in applications such as drug discovery and product testing, where identifying the most promising candidates is followed by utilizing them under high-cost conditions, such as clinical trials or large-scale manufacturing tests.

Conventional pure exploration focuses on identifying the optimal candidate, often referred to as the best arm in the bandit setting. However, in many real-world scenarios, candidates with rewards falling slightly below the optimum may later demonstrate advantageous traits, such as fewer side effects, simpler manufacturing processes, or lower resistance during implementation. Motivated by this insight, this paper aims to identify all $\varepsilon$-best candidates (i.e., those whose performance is at most $\varepsilon$ worse than the optimum). This approach is especially valuable when exploring a range of nearly optimal options is necessary. Promoting multiple promising candidates not only mitigates risk but also increases the chances that at least one will prove successful.

This setting captures many real-world applications. For example:

• Drug Discovery: In drug discovery, pharmaceutical companies aim to identify as many promising drug candidates as possible during preclinical stages. These candidates, known as preclinical candidate compounds (PCCs), are optimized compounds prepared for preclinical testing to assess efficacy, safety, and pharmacokinetics before advancing to clinical trials. Given the inherent risks and high failure rates in subsequent drug development (Das et al. 2021), starting with a larger pool of potential candidates increases the chances of identifying at least one successful, marketable drug.

• Assortment Design: In e-commerce (Boyd and Bilegan 2003, Elmaghraby and Keskinocak 2003, Feng et al. 2022), recommender systems (Huang et al. 2007, Peukert et al. 2023), and streaming services (Alaei et al. 2022, Godinho de Matos et al. 2018), expanding the consideration set (e.g., products, movies, or songs) can improve user satisfaction and increase revenues. Offering a diverse range of recommendations helps cater to varying tastes and preferences.

• Automatic Machine Learning: In automatic machine learning (AutoML) (Thornton et al. 2013), the goal is to automate the selection of algorithms and the tuning of hyperparameters by providing multiple promising choices for predictive tasks. Owing to the randomness of limited data, the best out-of-sample model may not always be optimal. Therefore, providing users with a diverse set of models that yield good results is critical to assisting them in selecting the best algorithm and hyperparameters.

1.1 Main Contributions

This paper focuses on identifying all $\varepsilon$-best arms in the linear bandit setting and presents contributions in both the algorithmic and theoretical dimensions.

• $\delta$-Probably Approximately Correct (PAC) Algorithm: On the algorithmic front, we introduce LinFACT (Linear Fast Arm Classification with Threshold estimation), a $\delta$-PAC (see Section 2.1 for a formal definition) phase-based algorithm for identifying all $\varepsilon$-best arms in linear bandit problems. LinFACT demonstrates superior effectiveness compared to existing pure exploration algorithms.

• Matching Bound of Sample Complexity: We make two key technical contributions. First, we derive an information-theoretic lower bound on the problem complexity of identifying all $\varepsilon$-best arms in linear bandits. To the best of our knowledge, this is the first such result in the literature. Second, we establish two distinct upper bounds on the expected sample complexity of LinFACT, illustrating the differences between the optimal design criteria used in its implementation. Notably, we demonstrate that LinFACT achieves instance optimality when using the $\mathcal{XY}$-optimal design criterion, matching the lower bound up to a logarithmic factor. The $\mathcal{XY}$-optimal design focuses on contrasting pairs of arms rather than evaluating each arm individually. Our analysis leverages the lower bound directly in defining the classification termination round and in scaling the upper bound.

• Accounting for Misspecified Models and GLMs: We extend our framework beyond linear models to handle misspecified linear bandits and generalized linear models (GLMs). For both cases, we provide theoretical upper bounds on the expected sample complexity. Furthermore, we analyze how prior knowledge of model misspecification affects the algorithmic upper bounds and performance, and how the incorporation of GLMs influences the sample complexity.

• Numerical Studies with Real-World Datasets: We conduct extensive numerical experiments to demonstrate that LinFACT outperforms existing methods in terms of sample complexity, computational efficiency, and reliable identification of all $\varepsilon$-best arms. In experiments with synthetic data, LinFACT outperforms other baselines in both adaptive and static settings. Using a real-world drug discovery dataset (Free and Wilson 1964), we further show that LinFACT achieves superior performance compared to previous algorithms. Notably, LinFACT is computationally efficient, with a time complexity of $O(Kd^2)$, which is lower than the $O(nKd^2)$ of Lazy TTS ($n$ is a non-negligible number) (Rivera and Tewari 2024), the $O(Kd^3)$ of top-$m$ algorithms (Réda et al. 2021a), and the $O(K^2 \log K)$ of KGCB (Negoescu et al. 2011).

1.2 Related Literature

Pure Exploration. The multi-armed bandits (MABs) model has been a critical paradigm for addressing the exploration-exploitation trade-off since its introduction by Thompson (1933) in the context of medical trials. While much of the research focuses on minimizing cumulative regret (Bubeck et al. 2012, Lattimore and Szepesvári 2020), our work considers the pure exploration setting (Koenig and Law 1985), where the goal is to select a subset of arms and evaluation is based on the final outcome. This distinction highlights the context-specific benefits of each approach: MABs are suited for tasks where the goal is to optimize rewards in real time, balancing exploration and exploitation, whereas pure exploration aims to identify a set of satisfactory arms without the immediate concern for reward maximization. In pure exploration, the algorithm prioritizes information gathering over reward collection, transforming the objective from reward-centric to information-centric. The focus is on efficiently acquiring sufficient information about all arms for confident identification.

The origins of pure exploration problems date back to the 1950s in the context of stochastic simulation, specifically within ordinal optimization (Shin et al. 2018, Shen et al. 2021) or the Ranking and Selection (R&S) problem, first addressed by Bechhofer (1954). Various methodologies have since been developed to solve the canonical R&S problem, including elimination-type algorithms (Kim and Nelson 2001, Bubeck et al. 2013, Fan et al. 2016), Optimal Computing Budget Allocation (OCBA) (Chen et al. 2000), knowledge-gradient algorithms (Frazier et al. 2008, 2009, Ryzhov et al. 2012), UCB-type algorithms (Kaufmann and Kalyanakrishnan 2013), and the unified gap-based exploration (UGapE) algorithm (Gabillon et al. 2012). Comprehensive reviews of the R&S problem can be found in Kim and Nelson (2006), Hong et al. (2021), with the most recent overview provided by Li et al. (2024).

The general framework of pure exploration encompasses various exploration tasks (Qin and You 2025), including best arm identification (BAI) (Mannor and Tsitsiklis 2004, Even-Dar et al. 2006, Russo 2020, Komiyama et al. 2023, Simchi-Levi et al. 2024), top-$m$ identification (Kalyanakrishnan and Stone 2010, Kalyanakrishnan et al. 2012), threshold bandits (Locatelli et al. 2016, Abernethy et al. 2016), and satisficing bandits (Feng et al. 2025). In applications such as drug discovery, pharmacologists aim to identify a set of highly potent drug candidates from potentially millions of compounds, with only the selected candidates advancing to more extensive testing. Given the uncertainty of final outcomes and the high cost of trial-and-error, identifying multiple promising candidates simultaneously is crucial. To minimize the cost of early-stage exploration, adaptive, sequential experimental designs are necessary, as they require fewer experiments compared to fixed designs.

All ๐œ€ -Best Arms Identification. Conventional objectives, such as identifying the top ๐‘š best arms or all arms above a certain threshold, often face significant challenges. In the top ๐‘š task, selecting a small ๐‘š may exclude promising candidates, while choosing a large ๐‘š may include ineffective options, requiring an impractically large number of experiments. Similarly, setting a threshold too high can exclude viable candidates. Both approaches depend on prior knowledge of the problem to achieve good performance, which may not be available in real-world applications.

In contrast, identifying all $\varepsilon$-best arms (those within $\varepsilon$ of the best arm) overcomes these limitations. This approach promotes broader exploration while providing a robust guarantee: no significantly suboptimal arms will be selected, thereby improving the reliability of downstream tasks (Mason et al. 2020). The all $\varepsilon$-best arms identification problem generalizes both the top-$m$ and threshold bandit problems: it reduces to the top-$m$ problem if the number of $\varepsilon$-best arms is known in advance, and to a threshold bandit problem if the value of the best arm is known.

Mason et al. (2020) introduced the problem complexity for identifying all $\varepsilon$-best arms and derived a lower bound in the low-confidence regime. However, their lower bound involves a summation that may be unnecessary, indicating room for improvement. Building on Mason's work, Al Marjani et al. (2022) derived tighter lower bounds by fully characterizing the alternative bandit instances that an optimal sampling strategy must distinguish and eliminate, and proposed the asymptotically optimal Track-and-Stop algorithm. However, both Mason et al. (2020) and Al Marjani et al. (2022) consider stochastic bandits without structure. In contrast, we study this problem in the linear bandit setting (Abe and Long 1999), which leverages structural relationships among arms. This presents new challenges but also enables our approach to handle more complex scenarios. As a result, our work establishes the first information-theoretic lower bound for identifying all $\varepsilon$-best arms in linear bandits, applicable to any $\delta$-PAC algorithm.

An extended literature review of misspecified linear bandits and generalized linear bandit models can be found in Section EC.1 of the online appendix.

2 Problem Formulation

Notations. In this paper, we denote the set of positive integers up to $N$ by $[N] \coloneqq \{1, \ldots, N\}$. Vectors and matrices are represented using boldface notation. The inner product of two vectors is denoted by $\langle \cdot, \cdot \rangle$. We define the weighted matrix norm $\|\boldsymbol{x}\|_{\boldsymbol{A}}$ as $\sqrt{\boldsymbol{x}^{\top}\boldsymbol{A}\boldsymbol{x}}$, where $\boldsymbol{A}$ is a positive semi-definite matrix that weights and scales the norm. For two probability measures $P$ and $Q$ over a common measurable space, the Kullback-Leibler divergence between $P$ and $Q$ is defined as

$$\mathrm{KL}(P, Q) \coloneqq \begin{cases} \int \log\left(\frac{dP}{dQ}\right) dP, & \text{if } P \ll Q; \\ +\infty, & \text{otherwise}, \end{cases} \qquad (1)$$

where $\frac{dP}{dQ}$ is the Radon-Nikodym derivative of $P$ with respect to $Q$, and $P \ll Q$ indicates that $P$ is absolutely continuous with respect to $Q$.

Setting. We address the problem of identifying all $\varepsilon$-best arms from a finite set of $K$ arms, where $K$ is a (possibly large) positive integer. Each arm $i \in [K]$ has an associated reward distribution with an unknown fixed mean $\mu_i$. Let the mean vector of all arms be denoted as $\boldsymbol{\mu} \coloneqq (\mu_1, \mu_2, \ldots, \mu_K)$, which can only be estimated through bandit feedback from the selected arms. Without loss of generality, we assume $\mu_1 > \mu_2 \geq \ldots \geq \mu_K$. The gap $\Delta_i \coloneqq \mu_1 - \mu_i$ (for $i \neq 1$) represents the difference in expected rewards between the optimal arm and arm $i$. To this end, we give a formal definition of the notion of $\varepsilon$-best arm.

Definition 2.1 ($\varepsilon$-Best Arm). Given $\varepsilon \geq 0$, an arm $i$ is called $\varepsilon$-best if $\mu_i \geq \mu_1 - \varepsilon$.

Here, we adopt an additive framework to define $\varepsilon$-best arms. There also exists a multiplicative counterpart, where an arm $i$ is considered $\varepsilon$-best if $\mu_i \geq (1 - \varepsilon)\mu_1$. While our study focuses on the additive model, the analysis for the multiplicative model follows similar reasoning. We denote the set of all $\varepsilon$-best arms for a mean vector $\boldsymbol{\mu}$ as

$$G_{\varepsilon}(\boldsymbol{\mu}) \coloneqq \{i : \mu_i \geq \mu_1 - \varepsilon\}. \qquad (2)$$

Define $\alpha_{\varepsilon} \coloneqq \min_{i \in G_{\varepsilon}(\boldsymbol{\mu})} \left(\mu_i - (\mu_1 - \varepsilon)\right)$ as the distance from the smallest $\varepsilon$-best arm to the threshold $\mu_1 - \varepsilon$. Furthermore, if the complement of $G_{\varepsilon}(\boldsymbol{\mu})$, denoted $G_{\varepsilon}^{c}(\boldsymbol{\mu})$, is non-empty, we define $\beta_{\varepsilon} \coloneqq \min_{i \in G_{\varepsilon}^{c}(\boldsymbol{\mu})} \left((\mu_1 - \varepsilon) - \mu_i\right)$ as the closest distance from the threshold to the highest mean value of any arm that is not $\varepsilon$-best.

We study this problem under a linear structure, where the mean values depend on an unknown parameter vector $\boldsymbol{\theta} \in \mathbb{R}^d$. Each arm $i$ is associated with a feature vector $\boldsymbol{a}_i \in \mathbb{R}^d$. Let $\mathcal{A} \subset \mathbb{R}^d$ be the set of feature vectors, and let $\boldsymbol{\Psi} \coloneqq [\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_K]^{\top} \in \mathbb{R}^{K \times d}$ be the feature matrix. With parameter $\boldsymbol{\theta}$, the mean vector can be represented as $\boldsymbol{\mu}(\boldsymbol{\theta})$. For simplicity, we will consistently refer to the bandit instance as $\boldsymbol{\mu}$. When an arm $A_t$ with corresponding feature vector $\boldsymbol{a}_{A_t} \in \mathcal{A}$ is selected at time $t$, we observe the bandit feedback $X_t$, given by

$$X_t = \boldsymbol{a}_{A_t}^{\top}\boldsymbol{\theta} + \eta_t, \qquad (3)$$

where $\mu_{A_t} = \boldsymbol{a}_{A_t}^{\top}\boldsymbol{\theta}$ is the true mean reward of the selected arm, and $\eta_t$ is a noise variable. We also make two additional standard assumptions on the norm of the feature vectors and on the noise distribution (Abbasi-Yadkori et al. 2011).

Assumption 1. $\max_{i \in [K]} \|\boldsymbol{a}_i\|_2 \leq L_1$, where $\|\cdot\|_2$ denotes the $\ell_2$-norm and $L_1$ is a constant.

Assumption 2. The noise $\eta_t$ is conditionally 1-sub-Gaussian, i.e., for any $\lambda \in \mathbb{R}$,

$$\mathbb{E}\left[e^{\lambda\eta_t} \mid \boldsymbol{a}_{A_1}, \ldots, \boldsymbol{a}_{A_{t-1}}, \eta_1, \ldots, \eta_{t-1}\right] \leq \exp\left(\frac{\lambda^2}{2}\right). \qquad (4)$$

2.1 Probably Approximately Correct Algorithm Framework

Our goal is to identify all $\varepsilon$-best arms with high confidence while minimizing the sampling budget. To achieve this, we employ three main components: a stopping rule, a sampling rule, and a decision rule.

At each time step $t$, the stopping rule $\tau_{\delta}$ determines whether to continue or stop the process. If the process continues, an arm is selected according to the sampling rule, and the corresponding random reward is observed. When the process stops at $t = \tau_{\delta}$, the decision rule provides an estimate $\hat{\mathcal{I}}_{\tau_{\delta}}$ of the true solution set $\mathcal{I}(\boldsymbol{\mu})$, which in our problem is the set of all $\varepsilon$-best arms, $G_{\varepsilon}(\boldsymbol{\mu})$.

We define the set of all viable mean vectors $\boldsymbol{\mu}$ as

$$M \coloneqq \left\{\boldsymbol{\mu} \in \mathbb{R}^K \,\middle|\, \exists\, \boldsymbol{\theta} \in \mathbb{R}^d,\ \boldsymbol{\mu} = \boldsymbol{\Psi}\boldsymbol{\theta} \ \wedge\ \|\boldsymbol{a}_i\|_2 \leq L_1 \text{ for each } i \in [K]\right\}. \qquad (5)$$

Here, the set $M$ consists of all possible mean vectors $\boldsymbol{\mu}$ that can be expressed as a linear transformation of a parameter vector $\boldsymbol{\theta}$ through the feature matrix $\boldsymbol{\Psi}$.

We focus on algorithms that are probably approximately correct with high confidence, referred to as $\delta$-PAC algorithms.

Definition 2.2 ($\delta$-PAC Algorithm). An algorithm is $\delta$-PAC for all $\varepsilon$-best arms identification if it identifies the correct solution set with probability at least $1 - \delta$ for any problem instance with mean $\boldsymbol{\mu} \in M$, i.e.,

$$\mathbb{P}_{\boldsymbol{\mu}}\left(\tau_{\delta} < \infty,\ \hat{\mathcal{I}}_{\tau_{\delta}} = G_{\varepsilon}(\boldsymbol{\mu})\right) \geq 1 - \delta, \quad \forall\, \boldsymbol{\mu} \in M. \qquad (6)$$

Upon stopping, $\delta$-PAC algorithms ensure the identification of all $\varepsilon$-best arms with high confidence. Therefore, our goal is to design a $\delta$-PAC algorithm that minimizes the stopping time, formulated as the following optimization problem:

$$\min \ \mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}] \qquad (7)$$

$$\text{s.t.} \ \ \mathbb{P}_{\boldsymbol{\mu}}\left(\tau_{\delta} < \infty,\ \hat{\mathcal{I}}_{\tau_{\delta}} = G_{\varepsilon}(\boldsymbol{\mu})\right) \geq 1 - \delta, \quad \forall\, \boldsymbol{\mu} \in M. \qquad (8)$$

2.2 Optimal Design of Experiment

Linear bandit algorithms can be viewed as an online, adaptive counterpart to the classical optimal design problem. This section develops the key theoretical foundations, emphasizing the confidence bounds for parameter estimation that guide both the design of our algorithm and the subsequent analysis.

Ordinary Least Squares. Consider a sequence of pulled arms, denoted $A_1, A_2, \ldots, A_t$, and the corresponding observed rewards $X_1, X_2, \ldots, X_t$. If the feature vectors of these arms, $\boldsymbol{a}_{A_1}, \boldsymbol{a}_{A_2}, \ldots, \boldsymbol{a}_{A_t}$, span $\mathbb{R}^d$, the ordinary least squares (OLS) estimator of the parameter $\boldsymbol{\theta}$ is given by

$$\hat{\boldsymbol{\theta}}_t = \boldsymbol{V}_t^{-1} \sum_{n=1}^{t} \boldsymbol{a}_{A_n} X_n, \qquad (9)$$

where $\boldsymbol{V}_t = \sum_{n=1}^{t} \boldsymbol{a}_{A_n}\boldsymbol{a}_{A_n}^{\top} \in \mathbb{R}^{d \times d}$ represents the information matrix. Using the properties of sub-Gaussian random variables, we can derive a confidence bound for the OLS estimator. This bound, denoted $B_{t,\delta}$, is detailed in Proposition 2.3. The confidence region for the parameter $\boldsymbol{\theta}$ at time step $t$ is given by

$$\mathcal{C}_{t,\delta} = \left\{\boldsymbol{\theta} : \|\hat{\boldsymbol{\theta}}_t - \boldsymbol{\theta}\|_{\boldsymbol{V}_t} \leq B_{t,\delta}\right\}. \qquad (10)$$

Proposition 2.3 (Lattimore and Szepesvári (2020))

For any fixed sampling policy and any given vector $\boldsymbol{x} \in \mathbb{R}^d$, with probability at least $1 - \delta$, the following holds:

$$\left|\boldsymbol{x}^{\top}\left(\hat{\boldsymbol{\theta}}_t - \boldsymbol{\theta}\right)\right| \leq \|\boldsymbol{x}\|_{\boldsymbol{V}_t^{-1}} B_{t,\delta}, \qquad (11)$$

where the anytime confidence bound $B_{t,\delta}$ is given by $B_{t,\delta} = 2\sqrt{2\left(d\log 6 + \log(1/\delta)\right)}$.
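The estimator (9) and the per-direction confidence width in (11) can be sketched numerically; `ols_estimate` and `confidence_half_width` are hypothetical helper names for illustration:

```python
import numpy as np

def ols_estimate(features, rewards):
    """OLS estimator of theta as in (9): theta_hat = V_t^{-1} sum_n a_{A_n} X_n,
    where V_t = sum_n a_{A_n} a_{A_n}^T is the information matrix."""
    A = np.asarray(features, dtype=float)        # t x d, row n is a_{A_n}
    X = np.asarray(rewards, dtype=float)         # t observed rewards
    V = A.T @ A
    return np.linalg.solve(V, A.T @ X), V

def confidence_half_width(x, V, B):
    """Half-width ||x||_{V^{-1}} * B_{t,delta} of the bound (11) in direction x."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(x @ np.linalg.solve(V, x))) * B

# Noiseless sanity check: with eta_t = 0 the OLS estimate recovers theta exactly.
rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])
A = rng.standard_normal((50, 2))
theta_hat, V = ols_estimate(A, A @ theta)
```

Note that the width in (11) shrinks in directions that the sampled features cover well, which is precisely what the design criteria below exploit.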

In many practical scenarios, the observed data are not predetermined. To handle this, a martingale-based method can be employed, as described by Abbasi-Yadkori et al. (2011), to define an adaptive confidence bound for the OLS estimator. This accounts for the variability introduced by random rewards and adaptive sampling policies. The confidence interval in Proposition 2.3 highlights the connection between arm allocation policies in linear bandits and experimental design theory (Pukelsheim 2006). This connection serves as a fundamental component in constructing our algorithm.

Feature Vector Projection. At any time step where estimation is performed after sampling, if the feature vectors of the sampled arms do not span $\mathbb{R}^d$, we substitute them with dimensionality-reduced feature vectors (Yang and Tan 2022). Specifically, we project all feature vectors onto the subspace spanned by $\mathcal{A}$. Let $\boldsymbol{B} \in \mathbb{R}^{d \times d'}$ be an orthonormal basis for this subspace, where $d' < d$ is the dimension of the subspace. The new feature vector is then given by $\boldsymbol{a}' = \boldsymbol{B}^{\top}\boldsymbol{a}$. In this transformation, $\boldsymbol{B}\boldsymbol{B}^{\top}$ is a projection matrix, ensuring

$$\langle \boldsymbol{\theta}, \boldsymbol{a} \rangle = \langle \boldsymbol{\theta}, \boldsymbol{B}\boldsymbol{B}^{\top}\boldsymbol{a} \rangle = \langle \boldsymbol{B}^{\top}\boldsymbol{\theta}, \boldsymbol{B}^{\top}\boldsymbol{a} \rangle = \langle \boldsymbol{\theta}', \boldsymbol{a}' \rangle. \qquad (12)$$

Equation (12) ensures that the mean values of all arms remain unchanged under the projection. The first equality holds because $\boldsymbol{B}\boldsymbol{B}^{\top}$ is a projection matrix and $\boldsymbol{a}$ lies in the subspace spanned by $\boldsymbol{B}$ (i.e., $\boldsymbol{B}\boldsymbol{B}^{\top}\boldsymbol{a} = \boldsymbol{a}$). The second equality follows from writing both sides in the matrix form $\boldsymbol{\theta}^{\top}\boldsymbol{B}\boldsymbol{B}^{\top}\boldsymbol{a}$. The third equality holds by the definitions $\boldsymbol{\theta}' = \boldsymbol{B}^{\top}\boldsymbol{\theta}$ and $\boldsymbol{a}' = \boldsymbol{B}^{\top}\boldsymbol{a}$.
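The projection step in (12) can be sketched with an SVD-based basis construction; using the SVD here is our choice for illustration, as the text does not prescribe how the basis $\boldsymbol{B}$ is computed:

```python
import numpy as np

def project_features(feats, tol=1e-10):
    """Project feature vectors onto the subspace they span, as in (12).

    Returns an orthonormal basis B (d x d') of the span and the reduced
    vectors a' = B^T a, so that <theta, a> = <B^T theta, a'> for every arm."""
    A = np.asarray(feats, dtype=float)           # K x d, rows are arm features
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    rank = int(np.sum(s > tol * s.max()))        # d' = dimension of the span
    B = Vt[:rank].T                              # d x d', orthonormal columns
    return B, A @ B

# Three arms in R^3 whose features span only a 2-dimensional subspace.
feats = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 2.0]])
B, reduced = project_features(feats)
theta = np.array([0.5, -1.0, 2.0])
# Mean rewards are preserved: feats @ theta equals reduced @ (B.T @ theta).
```

Working in the reduced space keeps the information matrix invertible even when the arm set is degenerate.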

Optimal Design Criteria. In contrast to stochastic bandits, where the mean values of the arms are estimated through repeated sampling of each arm, the linear bandit setting allows these values to be inferred from an accurate estimate of the underlying parameter vector $\boldsymbol{\theta}$. As a result, pulling a single arm provides information about all arms.

A key sampling strategy in this context is the G-optimal design, which minimizes the maximum variance of the predicted responses across all arms by optimizing the fraction of times each arm is selected. Formally, the G-optimal design problem seeks a probability distribution $\pi$ on $\mathcal{A}$, where $\pi : \mathcal{A} \to [0, 1]$ and $\sum_{\boldsymbol{a} \in \mathcal{A}} \pi(\boldsymbol{a}) = 1$, that minimizes

$$g(\pi) = \max_{\boldsymbol{a} \in \mathcal{A}} \|\boldsymbol{a}\|_{\boldsymbol{V}(\pi)^{-1}}^{2}, \qquad (13)$$

where $\boldsymbol{V}(\pi) = \sum_{\boldsymbol{a} \in \mathcal{A}} \pi(\boldsymbol{a})\, \boldsymbol{a}\boldsymbol{a}^{\top}$ is the weighted information matrix, analogous to $\boldsymbol{V}_t$ in equation (9). The G-optimal design (13) yields tight confidence intervals for estimating the mean values themselves. However, for identifying the best arms, comparing the relative differences in mean values across arms matters more than estimating each mean as accurately as possible.

Therefore, we consider an alternative design criterion, the $\mathcal{XY}$-optimal design, which directly targets the estimation of these gaps. Consider $\mathcal{S} \subseteq \mathcal{A}$, a subset of the arm space. We define

$$\mathcal{Y}(\mathcal{S}) \coloneqq \{\boldsymbol{a} - \boldsymbol{a}' : \forall\, \boldsymbol{a}, \boldsymbol{a}' \in \mathcal{S},\ \boldsymbol{a} \neq \boldsymbol{a}'\} \qquad (14)$$

as the set of vectors representing the differences between each pair of arms in $\mathcal{S}$. The $\mathcal{XY}$-optimal design minimizes

$$g_{\mathcal{XY}}(\pi) = \max_{\boldsymbol{y} \in \mathcal{Y}(\mathcal{A})} \|\boldsymbol{y}\|_{\boldsymbol{V}(\pi)^{-1}}^{2}. \qquad (15)$$

As mentioned previously, the $\mathcal{XY}$-optimal design minimizes the maximum variance in estimating the differences (gaps) between pairs of arms. By doing so, it prioritizes differentiating between arms over estimating each arm individually. This criterion is particularly useful when the goal is to identify relative performance rather than absolute quality.
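The gap set (14) and the objective (15) for a candidate design can be evaluated directly. A small sketch (helper names are ours; a tiny ridge term keeps the information matrix invertible):

```python
import numpy as np
from itertools import permutations

def gap_vectors(S):
    """Y(S) from (14): ordered differences a - a' over distinct arm pairs in S."""
    S = np.asarray(S, dtype=float)
    return np.array([S[i] - S[j] for i, j in permutations(range(len(S)), 2)])

def g_xy(arms, pi, reg=1e-9):
    """XY-optimal objective (15): the worst-case variance ||y||^2_{V(pi)^{-1}}
    over all gap directions y, under the design pi."""
    A = np.asarray(arms, dtype=float)
    V = A.T @ (A * np.asarray(pi, dtype=float)[:, None]) + reg * np.eye(A.shape[1])
    return float(max(y @ np.linalg.solve(V, y) for y in gap_vectors(A)))

arms = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
uniform = np.full(3, 1.0 / 3.0)
worst_gap_var = g_xy(arms, uniform)   # hardest direction is a_1 - a_2 = (1, -1)
```

Minimizing `g_xy` over designs, rather than `g` from (13), allocates samples to the directions along which arms must be told apart.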

3 Lower Bound and Problem Complexity

In this section, we present a novel information-theoretic lower bound for the problem of identifying all $\varepsilon$-best arms in linear bandits. Building on the approach of Soare et al. (2014), we extend the lower bound for best arm identification (BAI) to this more general setting. Figure 1 visualizes the structure of the stopping condition, with additional graphical insights provided in Section EC.4.1. These visualizations offer geometric intuition for the challenges involved in identifying all $\varepsilon$-best arms in linear bandits.

Figure 1: Illustration of the Stopping Condition: Best Arm Identification vs. All $\varepsilon$-Best Arms Identification

Note. (a) Stopping occurs when the confidence region $\mathcal{C}_{t,\delta}$ for the estimated parameter $\hat{\boldsymbol{\theta}}_t$ contracts entirely within one of the three decision regions $M_i$ at some time step $t$. The boundaries between regions are defined by the hyperplanes $\boldsymbol{\vartheta}^{\top}(\boldsymbol{a}_i - \boldsymbol{a}_j) = 0$. Each dot represents an arm. (b) In the case of identifying all $\varepsilon$-best arms, the regions overlap. (c) These overlaps partition the space into seven distinct decision regions, increasing the difficulty of identification.

The sample complexity of an algorithm is quantified by the number of samples, denoted $\tau_{\delta}$, required to terminate the process. The goal of algorithm design is to minimize the expected sample complexity $\mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}]$ over the entire set of algorithms $\mathcal{H}$. As introduced in Kaufmann et al. (2016), for $\delta \in (0, 1)$, the non-asymptotic problem complexity of an instance $\boldsymbol{\mu}$ can be defined as

$$\kappa(\boldsymbol{\mu}) \coloneqq \inf_{\mathrm{Algo} \in \mathcal{H}} \frac{\mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}]}{\log\left(\frac{1}{2.4\delta}\right)}, \qquad (16)$$

which is the smallest possible constant such that the expected sample complexity $\mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}]$ grows asymptotically in line with $\log\left(\frac{1}{2.4\delta}\right)$. The lower bound on the sample complexity $\mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}]$ can be stated in a general form by the following proposition. Building on the analytical framework of Proposition 3.1, we formulate the all $\varepsilon$-best arms identification problem in the linear bandit setting. This enables us to derive both lower and upper bounds on the sample complexity, thereby establishing the near-optimality of our algorithm.

Proposition 3.1 (Qin and You (2025))

For any $\boldsymbol{\mu} \in M$, there exist a set $\mathcal{X} = \mathcal{X}(\boldsymbol{\mu})$ and functions $\{C_x\}_{x \in \mathcal{X}}$ with $C_x : \mathcal{S}_K \times \mathcal{X} \to \mathbb{R}_{+}$ such that

$$\kappa(\boldsymbol{\mu}) \geq \left(\Gamma_{\boldsymbol{\mu}}^{*}\right)^{-1}, \qquad (17)$$

where

$$\Gamma_{\boldsymbol{\mu}}^{*} = \max_{\boldsymbol{p} \in \mathcal{S}_K} \min_{x \in \mathcal{X}} C_x(\boldsymbol{p}; \boldsymbol{\mu}). \qquad (18)$$

In Proposition 3.1, $\mathcal{S}_K$ denotes the $K$-dimensional probability simplex, and $\mathcal{X} = \mathcal{X}(\boldsymbol{\mu})$ is referred to as the culprit set. This set comprises critical subqueries (or comparisons) that must be correctly resolved; an error in any of these comparisons may prevent identification of the correct set.

For example, in best arm identification, the culprit set is $\mathcal{X} = \{i : i \in [K] \setminus i^{*}\}$, where $i^{*}$ denotes the unique best arm, and each subquery involves distinguishing an arm from the best arm. In the threshold bandit problem, the culprit set consists of all arms, $\mathcal{X} = \{i : i \in [K]\}$, where each subquery requires accurately determining whether an arm exceeds the threshold. For the task of identifying the best $m$ arms, the culprit set is $\mathcal{X} = \{(i, j) : i \in \mathcal{I}, j \in \mathcal{I}^{c}\}$, where $\mathcal{I}$ is the set of the best $m$ arms, and each subquery entails comparing the mean of an arm in $\mathcal{I}$ with one in the complement set $\mathcal{I}^{c}$.

The function $C_x(\boldsymbol{p}; \boldsymbol{\mu})$ represents the population version of the sequential generalized likelihood ratio statistic, which provides an information-theoretic measure of how easily the subquery corresponding to culprit $x \in \mathcal{X}$ can be answered.

In equation (18), the inner minimum reflects the intuition that the hardest instance corresponds to the hardest subquery, while the outer maximization seeks the allocation $\boldsymbol{p}$ over arms that best addresses this subquery. For a more detailed introduction to this general pure exploration model, please refer to Section EC.2. For our setting, we establish the following lower bound.

Theorem 3.2 (Lower Bound)

Consider a set of arms where the reward of arm $i$ follows a normal distribution $\mathcal{N}(\mu_i, 1)$ with $\mu_i = \boldsymbol{a}_i^{\top}\boldsymbol{\theta}$. Any $\delta$-PAC algorithm for identifying all $\varepsilon$-best arms in the linear bandit setting must satisfy

$$\inf_{\mathrm{Algo} \in \mathcal{H}} \frac{\mathbb{E}_{\boldsymbol{\mu}}[\tau_{\delta}]}{\log\left(\frac{1}{2.4\delta}\right)} \geq \left(\Gamma_{\boldsymbol{\mu}}^{*}\right)^{-1} = \min_{\boldsymbol{p} \in \mathcal{S}_K} \max_{(i,j,m) \in \mathcal{X}} \max\left\{\frac{2\|\boldsymbol{a}_i - \boldsymbol{a}_j\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^{2}}{\left(\boldsymbol{a}_i^{\top}\boldsymbol{\theta} - \boldsymbol{a}_j^{\top}\boldsymbol{\theta} + \varepsilon\right)^{2}},\ \frac{2\|\boldsymbol{a}_1 - \boldsymbol{a}_m\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^{2}}{\left(\boldsymbol{a}_1^{\top}\boldsymbol{\theta} - \boldsymbol{a}_m^{\top}\boldsymbol{\theta} - \varepsilon\right)^{2}}\right\}, \qquad (19)$$

where $\mathcal{X} = \{(i, j, m) : i \in G_{\varepsilon}(\boldsymbol{\mu}),\ j \neq i,\ m \notin G_{\varepsilon}(\boldsymbol{\mu})\}$ and $\mathcal{S}_K$ is the $K$-dimensional probability simplex.

The detailed proof of the above theorem is presented in Section EC.4.3. For the lower bound derivation, we assume normally distributed rewards to obtain a closed-form expression. A similar bound can be derived under sub-Gaussian rewards, though the form is less explicit.
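For intuition, the inner maximum of (19) can be evaluated numerically for a fixed allocation $\boldsymbol{p}$; minimizing the returned value over the simplex would give the bound itself. An illustrative sketch under the theorem's Gaussian assumptions (helper names are ours):

```python
import numpy as np

def lb_inner_objective(arms, theta, eps, p, reg=1e-12):
    """Evaluate the inner max of (19) for a fixed allocation p: the hardest
    culprit (i, j, m) with i eps-best, j != i, and m not eps-best.
    Assumes at least one arm is not eps-best."""
    A = np.asarray(arms, dtype=float)
    p = np.asarray(p, dtype=float)
    mu = A @ theta
    V = A.T @ (A * p[:, None]) + reg * np.eye(A.shape[1])   # V_p (tiny ridge)
    Vinv = np.linalg.inv(V)
    best, star = mu.max(), int(np.argmax(mu))
    good = np.flatnonzero(mu >= best - eps)
    bad = np.flatnonzero(mu < best - eps)

    def term(y, gap):                          # 2 ||y||^2_{V_p^{-1}} / gap^2
        return 2.0 * float(y @ Vinv @ y) / gap ** 2

    return max(
        max(term(A[i] - A[j], mu[i] - mu[j] + eps),
            term(A[star] - A[m], best - mu[m] - eps))
        for i in good for j in range(len(mu)) if j != i for m in bad
    )

# Stochastic 3-armed instance (identity features), means (1.0, 0.9, 0.0), eps = 0.2.
val = lb_inner_objective(np.eye(3), np.array([1.0, 0.9, 0.0]), 0.2, np.full(3, 1/3))
```

On this instance the binding culprit is the pair of $\varepsilon$-best arms separated by margin $0.1$: under the uniform allocation, $\|\boldsymbol{e}_i - \boldsymbol{e}_j\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 = 6$, so the objective is $2 \cdot 6 / 0.1^2 = 1200$.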

Remark 3.3 (Generality of the Lower Bound)

We note that the stochastic multi-armed bandit problem is a special case of the linear bandit problem. By setting 𝒜 = {e_1, e_2, …, e_d}, where e_i denotes the i-th unit vector, the linear bandit model reduces to the stochastic setting. This relationship allows us to recover the lower bound for identifying all ε-best arms in stochastic bandits (Al Marjani et al. 2022). Furthermore, the lower bound in Theorem 3.2 extends the lower bound for best arm identification in linear bandits (Fiez et al. 2019), which is recovered by setting ε = 0 and redefining the culprit set as 𝒳(μ) = {i : i ∈ [K] ∖ {i^∗}}, where i^∗ denotes the best arm in the context of best arm identification.

4 Algorithm and Upper Bound

In this section, we propose the LinFACT algorithm (Linear Fast Arm Classification with Threshold estimation) to identify all ε-best arms in linear bandits efficiently. We then establish upper bounds on the expected sample complexity to demonstrate the optimality of the LinFACT algorithm. Specifically, the upper bound derived from the 𝒳𝒴-optimal sampling policy is shown to be instance optimal up to logarithmic factors.

4.1 Algorithm

LinFACT is a phase-based, semi-adaptive algorithm in which the sampling rule remains fixed within each round and is updated only at the end based on the accumulated observations. As the algorithm proceeds through round ๐‘Ÿ , LinFACT progressively refines two sets of arms:

- G_r: arms empirically classified as ε-best (good).
- B_r: arms empirically classified as not ε-best (bad).

This classification process continues until all arms have been assigned to either ๐บ ๐‘Ÿ or ๐ต ๐‘Ÿ . Once complete, the decision rule returns ๐บ ๐‘Ÿ as the final set of ๐œ€ -best arms.

Sampling Rule. To minimize the sampling budget, we select arms that provide the maximum information about the mean values or the gaps between them. Unlike stochastic multi-armed bandits, where mean values are obtained exclusively by sampling specific arms, linear bandits allow these mean values to be inferred from the estimated parameters. In each round, arms are selected based on the G-optimal design (13) or the ๐’ณ โ€‹ ๐’ด -optimal design (15).

For G-optimal design, LinFACT-G refines an estimate of the true parameter ๐œฝ and uses this estimate to maintain an anytime confidence interval, such that for each armโ€™s empirical mean value ๐œ‡ ^ ๐‘– , we have

โ„™ โ€‹ ( โ‹‚ ๐‘– โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โ‹‚ ๐‘Ÿ โˆˆ โ„• | ๐œ‡ ^ ๐‘– โ€‹ ( ๐‘Ÿ ) โˆ’ ๐œ‡ ๐‘– | โ‰ค ๐ถ ๐›ฟ / ๐พ โ€‹ ( ๐‘Ÿ ) ) โ‰ฅ 1 โˆ’ ๐›ฟ .

(20)

The active set ๐’œ โ€‹ ( ๐‘Ÿ ) is defined as the set of uneliminated arms, as we continuously eliminate arms as round ๐‘Ÿ progresses. ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ ) denotes the set of indices corresponding to ๐’œ โ€‹ ( ๐‘Ÿ ) .

This confidence bound indicates that the algorithm maintains a probabilistic guarantee that the true mean value μ_i is within a certain range of the estimated mean value μ̂_i for each arm i, uniformly over all rounds. The bound shrinks as more data is collected (since the confidence radius C_{δ/K}(r) decreases with more samples), thereby reducing uncertainty. The anytime confidence width C_{δ/K}(r) is maintained by the design of the sample budget in each round. We set C_{δ/K}(r) = 2^{−r} ≕ ε_r, so the radius is halved in each round.

In LinFACT-G, the initial budget allocation policy is based on the G-optimal design and is defined as follows

{ ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ )

โŒˆ 2 โ€‹ ๐‘‘ โ€‹ ๐œ‹ ๐‘Ÿ โ€‹ ( ๐’‚ ) ๐œ€ ๐‘Ÿ 2 โ€‹ log โก ( 2 โ€‹ ๐พ โ€‹ ๐‘Ÿ โ€‹ ( ๐‘Ÿ + 1 ) ๐›ฟ ) โŒ‰

๐‘‡ ๐‘Ÿ

โˆ‘ ๐’‚ โˆˆ ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ ) ,

(21)

where ๐‘‡ ๐‘Ÿ denotes the total sampling budget allocated in round ๐‘Ÿ , and ๐œ‹ ๐‘Ÿ is the selection probability distribution over the remaining active arms ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) from the previous round, obtained via the G-optimal design as defined in equation (13). The sampling procedure for each round ๐‘Ÿ is described in Algorithm 4.1.

 

Algorithm 1 Subroutine: G-Optimal Sampling

1: Input: projected active set 𝒜(r−1), round r, δ.
2: Obtain π_r ∈ 𝒫(𝒜(r−1)) with support size Supp(π_r) ≤ d(d+1)/2 according to equation (13).
3: for all a ∈ 𝒜(r−1) do ⊳ Sampling
4:   Sample arm a for T_r(a) times in round r, as specified in equation (21).
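As a concrete illustration of line 2, the design π_r can be computed with a short Frank-Wolfe loop: by the Kiefer-Wolfowitz equivalence theorem, maximizing log det V(π) also minimizes the G-optimal criterion max_a ‖a‖²_{V(π)⁻¹}. The sketch below is ours (function names and the stopping tolerance are assumptions, not the paper's implementation) and also computes the per-arm budgets of equation (21).

```python
import numpy as np

def g_optimal_design(A, n_iter=1000):
    """Frank-Wolfe on the log-det criterion; by Kiefer-Wolfowitz
    equivalence the resulting design also minimizes max_a ||a||^2 in
    the V(pi)^-1 norm (the G-optimal criterion)."""
    K, d = A.shape
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        V = (A * pi[:, None]).T @ A                  # V(pi) = sum_a pi(a) a a^T
        var = np.einsum("kd,de,ke->k", A, np.linalg.pinv(V), A)
        k = int(np.argmax(var))                      # most uncertain arm
        if var[k] <= d + 1e-9:                       # optimality: max variance = d
            break
        gamma = (var[k] / d - 1) / (var[k] - 1)      # exact line-search step
        pi = (1 - gamma) * pi
        pi[k] += gamma
    return pi

def budget(pi, eps_r, d, K, r, delta):
    """Per-arm budget T_r(a) of equation (21)."""
    return np.ceil(2 * d * pi / eps_r**2
                   * np.log(2 * K * r * (r + 1) / delta)).astype(int)
```

On an orthonormal arm set the optimal design is uniform and the maximum variance equals d, which the equivalence theorem predicts.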

We also adopt the ๐’ณ โ€‹ ๐’ด -optimal design because the G-optimal design is less effective for distinguishing between arms. While the G-optimal design minimizes the maximum variance in estimating individual arm means, it does not explicitly focus on the pairwise gaps that are critical for identifying good arms. In contrast, the ๐’ณ โ€‹ ๐’ด -optimal design is tailored to directly reduce the uncertainty in estimating these gaps.

We now introduce a sampling rule based on the ๐’ณ โ€‹ ๐’ด -optimal design, as defined in equation (15). Let ๐‘ž โ€‹ ( ๐œ– ) denote the error introduced by the rounding procedure. We have:

{ ๐‘‡ ๐‘Ÿ

max โก { โŒˆ 2 โ€‹ ๐‘” ๐’ณ โ€‹ ๐’ด โ€‹ ( ๐’ด โ€‹ ( ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) ) ) โ€‹ ( 1 + ๐œ– ) ๐œ€ ๐‘Ÿ 2 โ€‹ log โก ( 2 โ€‹ ๐พ โ€‹ ( ๐พ โˆ’ 1 ) โ€‹ ๐‘Ÿ โ€‹ ( ๐‘Ÿ + 1 ) ๐›ฟ ) โŒ‰ , ๐‘ž โ€‹ ( ๐œ– ) }

๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ )

Round โ€‹ ( ๐œ‹ ๐‘Ÿ , ๐‘‡ ๐‘Ÿ ) .

(22)

In contrast to the G-optimal design, the ๐’ณ โ€‹ ๐’ด -optimal design focuses on bounding the confidence region of the pairwise differences between arms. The following inequality characterizes the corresponding high-probability event.

โ„™ โ€‹ ( โ‹‚ ๐‘– โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โ‹‚ ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) , ๐‘— โ‰  ๐‘– โ‹‚ ๐‘Ÿ โˆˆ โ„• | ( ๐œ‡ ^ ๐‘— โ€‹ ( ๐‘Ÿ ) โˆ’ ๐œ‡ ^ ๐‘– โ€‹ ( ๐‘Ÿ ) ) โˆ’ ( ๐œ‡ ๐‘— โˆ’ ๐œ‡ ๐‘– ) | โ‰ค 2 โ€‹ ๐ถ ๐›ฟ / ๐พ โ€‹ ( ๐‘Ÿ ) ) โ‰ฅ 1 โˆ’ ๐›ฟ .

(23)

The rounding operation, denoted Round, uses the (1+ϵ)-approximation algorithm proposed by Allen-Zhu et al. (2017). The complete sampling procedure is outlined in Algorithm 2.

 

Algorithm 2 Subroutine: ๐’ณ โ€‹ ๐’ด -Optimal Sampling

1: Input: projected active set 𝒜(r−1), round r, δ.
2: Obtain π_r ∈ 𝒫(𝒜(r−1)) according to equation (15).
3: for all a ∈ 𝒜(r−1) do ⊳ Sampling
4:   Sample arm a for T_r(a) times in round r, as specified in equation (22).

Estimation. At the end of each round, after drawing ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ ) samples from the active set, we compute the empirical estimate of the parameter using standard ordinary least squares (OLS)

θ̂_r = V_r^{−1} ∑_{s=1}^{T_r} a_{A_s} X_s, (24)

where ๐‘ฝ ๐‘Ÿ

โˆ‘ ๐’‚ โˆˆ ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ ) โ€‹ ๐’‚ โ€‹ ๐’‚ โŠค is the information matrix. The estimator for the mean value of each arm ๐‘– โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) is then

μ̂_i = μ̂_i(r) = a_i^⊤ θ̂_r. (25)
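The estimation step (24)-(25) is ordinary least squares on the round's pooled samples. A minimal numpy sketch on a made-up three-arm instance (features, parameter, noise level, and budgets are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])   # arm features (toy)
theta = np.array([1.0, 0.5])                          # unknown parameter (toy)
T = np.array([200, 200, 200])                         # budgets T_r(a) (toy)

# Simulate the round's samples with unit Gaussian noise.
X_rows, X_obs = [], []
for k, t in enumerate(T):
    for _ in range(t):
        X_rows.append(A[k])
        X_obs.append(A[k] @ theta + rng.normal())
X_rows, X_obs = np.array(X_rows), np.array(X_obs)

V = X_rows.T @ X_rows                                 # V_r = sum_a T_r(a) a a^T
theta_hat = np.linalg.solve(V, X_rows.T @ X_obs)      # OLS estimate (24)
mu_hat = A @ theta_hat                                # arm mean estimates (25)
```

With a few hundred samples per arm the estimated means land close to the true means A @ theta.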

Stopping Rule and Decision Rule. As round ๐‘Ÿ progresses, LinFACT dynamically updates two sets of arms: ๐บ ๐‘Ÿ and ๐ต ๐‘Ÿ , representing arms that are empirically considered ๐œ€ -best (good) and those that are not (bad), respectively. The algorithm filters arms by maintaining an upper confidence bound ๐‘ˆ ๐‘Ÿ and a lower confidence bound ๐ฟ ๐‘Ÿ around the unknown threshold ๐œ‡ 1 โˆ’ ๐œ€ , along with individual upper and lower confidence bounds for each arm.

The stopping rule and final decision procedure are described in Algorithm 3. For each arm i in the active set, LinFACT eliminates the arm if its upper confidence bound falls below the threshold L_r (line 6). Conversely, if the lower confidence bound of an arm exceeds U_r (line 8), the arm is added to the set G_r. Additionally, any arm already in G_r is removed from the active set if its upper bound falls below the empirically largest lower bound among all active arms (line 10). This ensures that the best arm is always retained in the active set, which is necessary for estimating the threshold μ_1 − ε. The classification process continues until all arms are categorized, that is, when G_r ∪ B_r = [K]. At termination, the set G_r is returned as the output of LinFACT, representing the arms identified as ε-best.

 

Algorithm 3 Subroutine: Stopping Rule and Decision Rule

1: Input: projected active set 𝒜_I(r−1), estimator (μ̂_i(r))_{i ∈ 𝒜_I(r−1)}, round r, ε, confidence radius C_{δ/K}(r).
2: Let U_r = max_{i ∈ 𝒜_I(r−1)} μ̂_i + C_{δ/K}(r) − ε.
3: Let L_r = max_{i ∈ 𝒜_I(r−1)} μ̂_i − C_{δ/K}(r) − ε.
4: for all i ∈ 𝒜_I(r−1) do ⊳ Arm Classification and Elimination
5:   if μ̂_i + C_{δ/K}(r) < L_r then
6:     Add i to B_r and eliminate i from 𝒜_I(r−1).
7:   if μ̂_i − C_{δ/K}(r) > U_r then
8:     Add i to G_r.
9:   if i ∈ G_r and μ̂_i + C_{δ/K}(r) ≤ max_{j ∈ 𝒜_I(r−1)} μ̂_j − C_{δ/K}(r) then
10:    Eliminate i from 𝒜_I(r−1).
11: if G_r ∪ B_r = [K] then ⊳ Stopping Condition and Recommendation
12:   Output: the set G_r.
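One pass of this classification logic (lines 2-12) can be sketched in Python as follows; the data structures and the precomputed confidence radius `C` are our own choices:

```python
import numpy as np

def classify_round(active, mu_hat, C, eps, G, B):
    """One pass of the stopping/decision rule: update the good set G,
    the bad set B, and the active set, given confidence radius C."""
    # U_r and L_r are fixed at the start of the pass (lines 2-3).
    U = max(mu_hat[i] for i in active) + C - eps
    L = max(mu_hat[i] for i in active) - C - eps
    for i in list(active):
        if mu_hat[i] + C < L:                         # provably not eps-best
            B.add(i)
            active.discard(i)
        if i in active and mu_hat[i] - C > U:         # provably eps-best
            G.add(i)
        # Drop a confirmed good arm once it can no longer be the best arm,
        # so the empirically best arm is always retained in the active set.
        if (i in active and i in G
                and mu_hat[i] + C <= max(mu_hat[j] - C for j in active)):
            active.discard(i)
    return active, G, B
```

The stopping condition is then simply G | B covering all arm indices, at which point G is returned.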

The Complete LinFACT Algorithm. The complete LinFACT algorithm is presented in Algorithm 4. The procedure proceeds as follows: based on the collected data, the decision-maker updates the parameter estimates and checks whether the stopping condition τ_δ is satisfied. If so, the set G_r is returned as the estimated set of all ε-best arms. Otherwise, the process continues with further sampling and updates.

 

Algorithm 4 LinFACT Algorithm

1: Input: ε, δ, bandit instance.
2: Initialize G_0 = ∅, the set of good arms, and B_0 = ∅, the set of bad arms.
3: Initialize the active set 𝒜(0) = 𝒜 and 𝒜_I(0) = [K].
4: for r = 1, 2, … do
5:   Set C_{δ/K}(r) = ε_r = 2^{−r}.
6:   Set G_r = G_{r−1} and B_r = B_{r−1}.
7:   Project 𝒜(r−1) to the d_r-dimensional subspace that 𝒜(r−1) spans. ⊳ Projection
8:   if using G-optimal sampling then ⊳ Sampling
9:     Call Algorithm 1.
10:  else if using 𝒳𝒴-optimal sampling then
11:    Call Algorithm 2.
12:  Estimate (μ̂_i(r))_{i ∈ 𝒜_I(r−1)} using equations (24) and (25). ⊳ Estimation
13:  Call Algorithm 3. ⊳ Stopping Condition and Decision Rule

4.2 Upper Bounds of the LinFACT Algorithm

Theorems 4.1 and 4.2 establish upper bounds on the sample complexity of the proposed LinFACT algorithm.

Let ๐‘‡ ๐บ and ๐‘‡ ๐’ณ โ€‹ ๐’ด denote the number of samples required under the G-optimal and ๐’ณ โ€‹ ๐’ด -optimal designs, respectively. The formal statements of these theorems are given below.

Theorem 4.1 (Upper Bounds, G-Optimal Design)

For ๐œ‰

min โก ( ๐›ผ ๐œ€ , ๐›ฝ ๐œ€ ) / 16 , there exists an event โ„ฐ such that โ„™ โ€‹ ( โ„ฐ ) โ‰ฅ 1 โˆ’ ๐›ฟ . On this event, the LinFACT algorithm with the G-optimal sampling policy achieves an expected sample complexity upper bound given by

๐”ผ โ€‹ [ ๐‘‡ ๐บ โˆฃ โ„ฐ ]

๐’ช โ€‹ ( ๐‘‘ โ€‹ ๐œ‰ โˆ’ 2 โ€‹ log โก ( ๐พ ๐›ฟ โ€‹ log 2 โก ( ๐œ‰ โˆ’ 2 ) ) + ๐‘‘ 2 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) .

(26)

The detailed proof of Theorem 4.1 is presented in Section EC.5. However, the LinFACT algorithm based on the G-optimal design does not yield an upper bound that aligns with the lower bound. This limitation is discussed further in Section EC.6. In contrast, we will show that the algorithm using the ๐’ณ โ€‹ ๐’ด -optimal design achieves an upper bound that matches the lower bound up to a logarithmic factor.

Theorem 4.2 (Upper Bound, ๐’ณ โ€‹ ๐’ด -Optimal Design)

Assume that an instance of arms satisfies min_{i ∈ G_ε ∖ {1}} ‖a_1 − a_i‖₂ ≥ L and max_{i ∈ [K]} |μ_1 − ε − μ_i| ≤ 2. There exists an event ℰ such that ℙ(ℰ) ≥ 1 − δ. On this event, the LinFACT algorithm with the 𝒳𝒴-optimal sampling policy achieves an expected sample complexity upper bound given by

𝔼[T_{𝒳𝒴} ∣ ℰ] = 𝒪( (Γ^∗)^{−1} d ξ^{−1} log(ξ^{−1}) log( (K/δ) log(ξ^{−2}) ) + (d/ϵ²) log(ξ^{−1}) ), (27)

where ξ = min(α_ε, β_ε)/16 is the minimum gap of the problem instance, R_upper = max{ ⌈log₂(4/α_ε)⌉, ⌈log₂(4/β_ε)⌉ }, and (Γ^∗)^{−1} is the lower bound term defined in Theorem 3.2.

The proof of the near-optimal upper bound in Theorem 4.2 is presented in Section EC.7 of the online appendix, where we make explicit how the lower bound is used to establish the upper bound.

5 Model Misspecification

In this section, we address the challenge of model misspecification, recognizing that real-world problems may deviate from perfect linearity. To account for such deviations, we propose an orthogonal parameterization-based algorithm, LinFACT-MIS, a refined version of LinFACT for the misspecified setting. We establish new upper bounds under misspecification and provide insights into how such deviations affect algorithm performance.

Under model misspecification, we refine the linear model in equation (3) as

๐‘‹ ๐‘ก

๐’‚ ๐ด ๐‘ก โŠค โ€‹ ๐œฝ + ๐œ‚ ๐‘ก + ฮ” ๐‘š โ€‹ ( ๐’‚ ๐ด ๐‘ก ) ,

(28)

where ฮ” ๐‘š : โ„ ๐‘‘ โ†’ โ„ is a misspecification function quantifying the deviation from the true model.

Assumption 5. Assume ‖μ‖_∞ ≤ L_∞ and ‖Δ_m‖_∞ ≤ L_m, where ‖·‖_∞ denotes the infinity norm and the K-dimensional vector Δ_m collects the bias terms of the misspecified model.

Therefore, with this assumption, the set of realizable models is defined as

M ≔ { μ ∈ ℝ^K | ∃θ ∈ ℝ^d, ∃Δ_m ∈ ℝ^K, μ = Ψθ + Δ_m ∧ ‖μ‖_∞ ≤ L_∞ ∧ ‖Δ_m‖_∞ ≤ L_m }. (29)

The key distinction in the analysis under model misspecification lies in how the estimator ๐ ^ ๐‘ก is maintained. Specifically, we construct this estimator by projecting the empirical mean vector ๐ ~ ๐‘ก at time ๐‘ก onto the set of realizable models ๐‘€ via the following optimization

๐ ^ ๐‘ก โ‰” arg โก min ๐œ— โˆˆ ๐‘€ โก โ€– ๐œ— โˆ’ ๐ ~ ๐‘ก โ€– ๐‘ซ ๐‘ต ๐‘ก 2 ,

(30)

where ๐‘ต ๐‘ก

[ ๐‘ ๐‘ก โ€‹ 1 , ๐‘ ๐‘ก โ€‹ 2 , โ€ฆ , ๐‘ ๐‘ก โ€‹ ๐พ ] โŠค โˆˆ โ„ ๐พ is the vector of sample counts for each arm at time ๐‘ก , and ๐‘ซ ๐‘ต ๐‘ก โˆˆ โ„ ๐พ ร— ๐พ is the diagonal matrix with ๐‘ ๐‘ก โ€‹ 1 , ๐‘ ๐‘ก โ€‹ 2 , โ€ฆ , ๐‘ ๐‘ก โ€‹ ๐พ as its diagonal entries.

Figure 2: Difference Between Standard OLS and Misspecification-Adjusted Projection Estimates

Note. The left diagram shows the projection onto the span of pulled arms under a perfect linear model. The right diagram depicts the adjustment required under misspecification, where the projection must account for the deviation.

In the absence of misspecification, this projection simplifies to the ordinary least squares (OLS) estimator. However, as shown in Figure 2, under model misspecification, the estimator can no longer be computed as a simple projection onto a hyperplane and falls outside the scope of standard OLS. Instead, it must be formulated as the optimization problem in equation (30), which minimizes a weighted quadratic objective over ๐พ + ๐‘‘ variables, subject to the constraints โ€– ๐ โ€– โˆž โ‰ค ๐ฟ โˆž and โ€– ๐šซ ๐‘š โ€– โˆž โ‰ค ๐ฟ ๐‘š .
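A simple way to approximate the projection (30) is alternating minimization over (θ, Δ_m): a weighted least-squares step in θ followed by componentwise clipping of Δ_m onto the L_m ball. The sketch below is our own simplification, assuming the ‖μ‖_∞ ≤ L_∞ box is not binding; the paper's projection would additionally enforce that constraint (e.g., via a generic QP solver).

```python
import numpy as np

def project_to_realizable(mu_tilde, Psi, N, L_m, n_iter=100):
    """Approximate the projection (30) by alternating minimization:
    theta-step = weighted OLS, Delta-step = clip onto the L_m ball.
    Assumes the ||mu||_inf <= L_inf constraint is not binding."""
    K, d = Psi.shape
    W = Psi * N[:, None]                 # D_N Psi: rows weighted by counts
    delta = np.zeros(K)
    for _ in range(n_iter):
        # theta-step: minimize sum_k N_k (psi_k^T theta - (mu_tilde - delta)_k)^2
        theta = np.linalg.solve(Psi.T @ W, W.T @ (mu_tilde - delta))
        # Delta-step: componentwise projection onto ||Delta||_inf <= L_m
        delta = np.clip(mu_tilde - Psi @ theta, -L_m, L_m)
    return Psi @ theta + delta           # projected mean vector

```

Both steps decrease the weighted quadratic objective, so the iteration converges; in the one-dimensional toy case below the fixed point can be checked by hand.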

5.1 Upper Bound with Misspecification

In this subsection, we present the upper bound on the expected sample complexity for LinFACT-MIS. This analysis highlights the influence of misspecification on the theoretical performance of the algorithm. Let ๐‘‡ ๐บ mis denote the number of samples taken under model misspecification.

Theorem 5.1 (Upper Bound, Misspecification)

Fix ๐œ€ > 0 and suppose that the magnitude of misspecification satisfies ๐ฟ ๐‘š < min โก { ๐›ผ ๐œ€ 2 โ€‹ ๐‘‘ , ๐›ฝ ๐œ€ 2 โ€‹ ๐‘‘ } . For ๐œ‰

min โก ( ๐›ผ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ , ๐›ฝ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ ) / 16 , there exists an event โ„ฐ such that โ„™ โ€‹ ( โ„ฐ ) โ‰ฅ 1 โˆ’ ๐›ฟ . On this event, LinFACT-MIS terminates and returns the correct solution with an expected sample complexity upper bound given by

๐”ผ ๐ โ€‹ [ ๐‘‡ ๐บ mis โˆฃ โ„ฐ ]

๐’ช โ€‹ ( ๐‘‘ โ€‹ ๐œ‰ โˆ’ 2 โ€‹ log โก ( ๐พ ๐›ฟ โ€‹ log โก ( ๐œ‰ โˆ’ 2 ) ) + ๐‘‘ 2 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) .

(31)

The proof of this theorem is provided in Section EC.8. The upper bound in Theorem 4.1, which assumes no model misspecification, can be viewed as a special case of Theorem 5.1 with L_m = 0. However, the bound in Theorem 5.1 becomes invalid when the misspecification magnitude L_m is too large, as the logarithmic terms may involve negative arguments, violating the assumptions required for the bound to hold. If the variation in confidence radius across arms due to misspecification is not accounted for, Theorem 5.1 suggests that the sample complexity will increase. In particular, compared to the bound in Theorem 4.1, this result is looser because the denominator terms involving ε decrease from α_ε and β_ε to α_ε − 2L_m d and β_ε − 2L_m d, respectively.

5.2 Orthogonal Parameterization

In this section, we present an alternative version of LinFACT-MIS based on orthogonal parameterization, designed to improve computational efficiency.

Figure 3: Orthogonal Parameterization and Projection.

Note. The estimator ๐› ^ ๐‘ก is obtained from the empirical mean ๐› ~ ๐‘ก by solving an optimization problem. While the true mean vector ๐› can be expressed as the sum of a linear component ๐šฟ โ€‹ ๐›‰ and a non-linear model deviation ๐šซ ๐‘š , it can also be decomposed at each time step ๐‘ก via orthogonal projection into a linear part ๐šฟ โ€‹ ๐›‰ ๐‘ก on the hyperplane and a residual term ๐šซ ๐‘š โ€‹ ( ๐‘ก ) orthogonal to it.

Orthogonal Parameterization. Under model misspecification, traditional confidence bounds for the mean estimator based on ‖θ̂_t − θ‖²_{V_t}, derived using either martingale-based methods (Abbasi-Yadkori et al. 2011) or covering arguments (Lattimore and Szepesvári 2020), are no longer directly applicable due to the presence of an additional misspecification term. To improve the concentration of the estimator in this setting, a key strategy is to adopt an orthogonal parameterization of the mean vectors within the realizable model set M (Réda et al. 2021).

Rather than centering the confidence region around the true parameter ๐œฝ , we focus on the quantity โ€– ๐œฝ ^ ๐‘ก โˆ’ ๐œฝ ๐‘ก โ€– ๐‘ฝ ๐‘ก 2 , where ๐œฝ ๐‘ก is the orthogonal projection of the true mean vector onto the feature space spanned by the pulled arms at time ๐‘ก . This ๐œฝ ๐‘ก -centered form corresponds to a self-normalized martingale and thus satisfies the same concentration bounds as in the classical linear bandit setting without misspecification. This approach offers an advantage over prior methods (Lattimore et al. 2020, Zanette et al. 2020), which require inflating the confidence radius between ๐œฝ ^ ๐‘ก and ๐œฝ by a factor of ๐ฟ ๐‘š 2 โ€‹ ๐‘ก , leading to overly conservative bounds in misspecified settings where ๐ฟ ๐‘š โ‰ซ 0 .

Specifically, we show that any mean vector μ = Ψθ + Δ_m can be equivalently expressed at any time t as μ = Ψθ_t + Δ_m(t), where

θ_t = (Ψ_{N_t}^⊤ Ψ_{N_t})^{−1} Ψ_{N_t}^⊤ D_{N_t}^{1/2} μ = V_t^{−1} ∑_{s=1}^{t} μ_{A_s} a_{A_s} (32)

is the orthogonal projection of μ onto the feature space spanned by the columns of Ψ_{N_t}, and Δ_m(t) = μ − Ψθ_t is the residual. Here, Ψ_{N_t} = D_{N_t}^{1/2} Ψ is the matrix of feature vectors weighted by the number of times each arm has been sampled up to time t, and D_{N_t}^{1/2} is the diagonal matrix whose entries are the square roots of the sample counts of each arm.
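The decomposition is easy to check numerically. The sketch below (toy features, pull counts, and means of our own choosing) computes θ_t as in (32) and verifies that the residual Δ_m(t) is orthogonal to the weighted feature space, i.e., Ψ^⊤ D_{N_t} Δ_m(t) = 0.

```python
import numpy as np

Psi = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])  # K x d feature matrix (toy)
n = np.array([4.0, 9.0, 25.0])                        # pull counts at time t (toy)
mu = np.array([1.0, 0.4, 0.9])                        # true, possibly nonlinear, means

# Weighted feature matrix Psi_{N_t} = D_{N_t}^{1/2} Psi.
D_half = np.diag(np.sqrt(n))
Psi_n = D_half @ Psi

# Orthogonal projection (32): theta_t = (Psi_n^T Psi_n)^{-1} Psi_n^T D^{1/2} mu.
theta_t = np.linalg.solve(Psi_n.T @ Psi_n, Psi_n.T @ (D_half @ mu))
resid = mu - Psi @ theta_t                            # Delta_m(t)

# Orthogonality of the residual: Psi^T D_{N_t} Delta_m(t) = 0.
orth = Psi.T @ (n * resid)
```

The residual changes with the pull counts, which is exactly why the paper indexes Δ_m(t) by time.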

Upper Bound. For LinFACT-MIS, the orthogonal parameterization replaces the sampling rule in Algorithm 1 with the sampling rule in Algorithm 5. The estimation is no longer based on OLS; instead, the estimator (μ̂_i(r))_{i ∈ 𝒜_I(r−1)} is obtained from the observed data by solving the optimization problem in equation (30) directly.

When model misspecification is accounted for and orthogonal parameterization is applied, the sampling policy is given by:

{ ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ )

โŒˆ 8 โ€‹ ๐‘‘ โ€‹ ๐œ‹ ๐‘Ÿ โ€‹ ( ๐’‚ ) ๐œ€ ๐‘Ÿ 2 โ€‹ ( ๐‘‘ โ€‹ log โก ( 6 ) + log โก ( ๐พ โ€‹ ๐‘Ÿ โ€‹ ( ๐‘Ÿ + 1 ) ๐›ฟ ) ) โŒ‰

๐‘‡ ๐‘Ÿ

โˆ‘ ๐’‚ โˆˆ ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ ) .

(33)

Let ๐‘‡ ๐บ op denote the total number of samples required when orthogonal parameterization is used. The corresponding upper bound is stated in the theorem below.

 

Algorithm 5 Subroutine: Sampling With Orthogonal Parameterization

1: Input: projected active set 𝒜(r−1), round r, δ.
2: Find the G-optimal design π_r ∈ 𝒫(𝒜(r−1)) with Supp(π_r) ≤ d(d+1)/2 according to equation (13).
3: for all a ∈ 𝒜(r−1) do ⊳ Sampling
4:   Sample arm a for T_r(a) times in round r, as specified in equation (33).

Theorem 5.2 (Upper Bound, Orthogonal Parameterization)

Fix ๐œ€ > 0 and suppose that the magnitude of misspecification satisfies ๐ฟ ๐‘š < min โก { ๐›ผ ๐œ€ 2 โ€‹ ( ๐‘‘ + 2 ) , ๐›ฝ ๐œ€ 2 โ€‹ ( ๐‘‘ + 2 ) } . For ๐œ‰

min โก ( ๐›ผ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ( ๐‘‘ + 2 ) , ๐›ฝ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ( ๐‘‘ + 2 ) ) / 16 , there exists an event โ„ฐ such that โ„™ โ€‹ ( โ„ฐ ) โ‰ฅ 1 โˆ’ ๐›ฟ . On this event, LinFACT-MIS terminates and returns the correct solution with an expected sample complexity upper bound given by

๐”ผ ๐ โ€‹ [ ๐‘‡ ๐บ op โˆฃ โ„ฐ ]

๐’ช โ€‹ ( ๐‘‘ โ€‹ ๐œ‰ โˆ’ 2 โ€‹ log โก ( ๐พ โ€‹ 6 ๐‘‘ ๐›ฟ โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) + ๐‘‘ 2 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) .

(34)

This theorem establishes that the upper bound remains of the same order as in Theorem 5.1, with the detailed proof provided in Section EC.9. As in Theorem 5.1, the use of orthogonal parameterization does not eliminate the expansion of the upper bound, which remains unavoidable. We provide further intuition in Section 5.3, arguing that without prior knowledge of the misspecification, it is not possible to recover full performance through algorithmic refinement alone.

5.3Insights for Model Misspecification

Lower Bounds in Linear and Stochastic Settings. The lower bound becomes equivalent to the unstructured lower bound as soon as the misspecification upper bound satisfies L_m ≥ L_μ, where L_μ is an instance-dependent finite constant. This observation is formalized in Proposition 5.3, whose proof follows the same logic as Lemma 2 in Réda et al. (2021).

Proposition 5.3

There exists L_μ ∈ ℝ with L_μ ≤ max_k μ_k − min_k μ_k such that if L_m ≥ L_μ, then for any pure exploration task, the lower bound in the linear setting equals the unstructured lower bound.

Improvement with Unknown Misspecification is Not Possible. Knowing that a problem is misspecified without access to an upper bound ๐ฟ ๐‘š on โ€– ๐šซ ๐‘š โ€– โˆž is effectively equivalent to having no structural knowledge of the problem. As a result, improving algorithmic performance under such unknown model misspecification is infeasible. In particular, as shown in cumulative regret settings, sublinear regret guarantees are no longer achievable (Ghosh et al. 2017, Lattimore et al. 2020); similarly, in pure exploration, the theoretical lower bound cannot be attained.

Prior Knowledge of the Misspecification. When the upper bound L_m on model misspecification is known in advance, LinFACT-MIS can be modified to account for this deviation. Specifically, we adjust the confidence radius C_{δ/K}(r) used in computing the lower and upper bounds L_r and U_r in Algorithm 3. With this modification, the number of rounds required to complete classification under misspecification, R_upper′ and R_upper″, coincides with R_upper, the corresponding number of rounds under a perfectly linear model. This is achieved by replacing the confidence radius ε_r with the inflated version ε_r + L_m d to compensate for the worst-case deviation due to misspecification. This adjustment preserves the validity of the original analysis and ensures that the same theoretical guarantees are retained. The following proposition formalizes this observation and is proved in Section EC.10.

Proposition 5.4

Suppose the misspecification magnitude L_m > 0 is known in advance. Adjusting the confidence radius C_{δ/K}(r) in Algorithm 3 at each round r from its original value ε_r to

ε_r′ = ε_r + L_m d, (35)

ensures that the total number of rounds required under misspecification matches that under perfect linearity.

6 Generalized Linear Model

In this section, we extend the linear bandits to a generalized linear model (GLM). In this setting, the reward function no longer follows the standard linear form in equation (3), but instead satisfies

𝔼[X_t ∣ A_t] = μ_link(a_{A_t}^⊤θ), (36)

where ๐œ‡ link : โ„ โ†’ โ„ is the inverse link function. GLMs encompass a class of models that include, but are not limited to, linear models, allowing for various reward distributions beyond the Gaussian. For example, for binary-valued rewards, a suitable choice of ๐œ‡ link is ๐œ‡ link โ€‹ ( ๐‘ฅ )

exp โก ( ๐‘ฅ ) / ( 1 + exp โก ( ๐‘ฅ ) ) , i.e., sigmoid function, leading to the logistic regression model. For integer-valued rewards, ๐œ‡ link โ€‹ ( ๐‘ฅ )

exp โก ( ๐‘ฅ ) leads to the Poisson regression model.

To keep this paper self-contained, we briefly review the main properties of GLMs (McCullagh 2019). A univariate probability distribution belongs to a canonical exponential family if its density with respect to a reference measure is given by

๐‘ ๐œ” โ€‹ ( ๐‘ฅ )

exp โก ( ๐‘ฅ โ€‹ ๐œ” โˆ’ ๐‘ โ€‹ ( ๐œ” ) + ๐‘ โ€‹ ( ๐‘ฅ ) ) ,

(37)

where ๐œ” is a real parameter, ๐‘ โ€‹ ( โ‹… ) is a real normalization function, and ๐‘ โ€‹ ( โ‹… ) is assumed to be twice continuously differentiable. This family includes the Gaussian and Gamma distributions when the reference measure is the Lebesgue measure, and the Poisson and Bernoulli distributions when the reference measure is the counting measure on the integers. For a random variable ๐‘‹ with density defined in (37), ๐”ผ โ€‹ ( ๐‘‹ )

๐‘ ห™ โ€‹ ( ๐œ” ) and Var โ€‹ ( ๐‘‹ )

๐‘ ยจ โ€‹ ( ๐œ” ) , where ๐‘ ห™ and ๐‘ ยจ denote the first and second derivatives of ๐‘ , respectively. Since the variance is always positive and ๐œ‡ link

๐‘ ห™ represents the inverse link function, ๐‘ is strictly convex and ๐œ‡ link is increasing.
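As a quick sanity check of these identities, take the Bernoulli family, where b(ω) = log(1 + e^ω): numerically differentiating b recovers the sigmoid mean and the variance p(1 − p). The snippet below is illustrative only.

```python
import numpy as np

def b(w):
    # Log-partition function of the Bernoulli family: b(w) = log(1 + e^w).
    return np.log1p(np.exp(w))

w, h = 0.7, 1e-5
b_dot = (b(w + h) - b(w - h)) / (2 * h)            # E[X]  = b'(w)
b_ddot = (b(w + h) - 2 * b(w) + b(w - h)) / h**2   # Var X = b''(w)
p = 1 / (1 + np.exp(-w))                           # sigmoid(w)
```

Both finite differences match the closed forms p and p(1 − p) up to numerical error.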

The canonical GLM assumes that p_θ(X ∣ a_i) = p_{a_i^⊤θ}(X) for all arms i. The maximum likelihood estimator θ̂_t, based on the σ-algebra ℱ_t = σ(A_1, X_1, A_2, X_2, …, A_t, X_t), is defined as the maximizer of the function

∑_{s=1}^{t} log p_θ(X_s ∣ a_{A_s}) = ∑_{s=1}^{t} [ X_s a_{A_s}^⊤θ − b(a_{A_s}^⊤θ) + c(X_s) ]. (38)

This function is strictly concave in ๐œฝ . By differentiating, we obtain that ๐œฝ ^ ๐‘ก is the unique solution of the following estimating equation at time ๐‘ก ,

∑_{s=1}^{t} ( X_s − μ_link(a_{A_s}^⊤θ) ) a_{A_s} = 0. (39)

In practice, the solution to equation (39) has no closed form, but it can be computed efficiently with methods such as iteratively reweighted least squares (IRLS) (Wolke and Schwetlick 1988), which employs Newton's method. In the assumption below, θ̌_t denotes a convex combination of θ and its maximum likelihood estimate θ̂_t at time t. The existence of p_min can be ensured by performing forced exploration at the beginning of the algorithm, incurring a sampling cost of O(d) (Kveton et al. 2023).
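For the logistic model, a few Newton (IRLS) steps solve the estimating equation (39) directly. The sketch below, with simulated data of our own construction, is illustrative rather than the paper's implementation.

```python
import numpy as np

def fit_glm_logistic(arms, X_obs, A_idx, n_iter=25):
    """Newton/IRLS solution of (39) for the Bernoulli GLM:
    sum_s (X_s - sigmoid(a_{A_s}^T theta)) a_{A_s} = 0."""
    Feat = arms[A_idx]                         # feature of the arm pulled at each s
    theta = np.zeros(arms.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(Feat @ theta)))  # mu_link(a^T theta)
        grad = Feat.T @ (X_obs - p)            # left-hand side of (39)
        hess = (Feat * (p * (1 - p))[:, None]).T @ Feat
        theta = theta + np.linalg.solve(hess, grad)
    return theta
```

Because the log-likelihood (38) is strictly concave in θ, the Newton iteration converges to the unique root of (39) for reasonably conditioned data.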

Assumption 6. The derivative of the inverse link function is bounded below, i.e., p_min ≤ μ̇_link(a^⊤θ̌_t), for some p_min ∈ ℝ₊ and all arms.

Assumption 6 is standard in the GLM literature (Li et al. 2017, Azizi et al. 2021b), ensuring that the reward function is sufficiently smooth, with p_min > 0 typically determined by the choice of link function.

6.1 Algorithm with GLM

In this section, we present a refined algorithm for the generalized linear model, referred to as LinFACT-GLM. This refinement modifies the sampling rule in Algorithm 1 and adjusts the estimation method. The designed sampling policy is described by

{ ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ )

โŒˆ 2 โ€‹ ๐‘‘ โ€‹ ๐œ‹ ๐‘Ÿ โ€‹ ( ๐’‚ ) ๐œ€ ๐‘Ÿ 2 โ€‹ ๐‘ min 2 โ€‹ log โก ( 2 โ€‹ ๐พ โ€‹ ๐‘Ÿ โ€‹ ( ๐‘Ÿ + 1 ) ๐›ฟ ) โŒ‰

๐‘‡ ๐‘Ÿ

โˆ‘ ๐’‚ โˆˆ ๐’œ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) ๐‘‡ ๐‘Ÿ โ€‹ ( ๐’‚ ) ,

(40)

where ๐‘ min is the known constant controlling the first-order derivative of the inverse link function.

 

Algorithm 6 Subroutine: G-Optimal Sampling with GLM

1: Input: projected active set 𝒜_I(r−1), round r, δ.
2: Find the G-optimal design π_r ∈ 𝒫(𝒜(r−1)) with support size Supp(π_r) ≤ d(d+1)/2 according to equation (13).
3: for all a ∈ 𝒜_I(r−1) do ⊳ Sampling
4:   Sample arm a for T_r(a) times in round r, as specified in equation (40).

In the GLM setting, ordinary least squares is likewise not applicable. Instead, the estimator for each i ∈ 𝒜_I(r−1) is obtained by solving the optimization problem in equation (38), via the estimating equation (39), using the observed data.

6.2 Upper Bound for the GLM-Based LinFACT

Let ๐‘‡ ๐บ GLM denote the number of samples collected under the GLM setting. The following theorem provides an upper bound on the expected sample complexity of LinFACT-GLM.

Theorem 6.1 (Upper Bound, Generalized Linear Model)

For ๐œ‰

min โก ( ๐›ผ ๐œ€ , ๐›ฝ ๐œ€ ) / 16 , there exists an event โ„ฐ such that โ„™ โ€‹ ( โ„ฐ ) โ‰ฅ 1 โˆ’ ๐›ฟ . On this event, the LinFACT-GLM algorithm achieves an expected sample complexity upper bound given by

๐”ผ โ€‹ [ ๐‘‡ GLM โˆฃ โ„ฐ ]

๐’ช โ€‹ ( ๐‘‘ ๐‘ min 2 โ€‹ ๐œ‰ โˆ’ 2 โ€‹ log โก ( ๐พ ๐›ฟ โ€‹ log 2 โก ( ๐œ‰ โˆ’ 2 ) ) + ๐‘‘ 2 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) .

(41)

The upper bound presented in Theorem 6.1, which generalizes the model to the GLM setting, can be viewed as an extension of Theorem 4.1. The detailed proof is provided in Section EC.11 of the online appendix.

7Numerical Experiments

In the numerical experiments, we compare our algorithm, LinFACT, with several baseline methods. These include the Bayesian optimization algorithm based on the knowledge-gradient acquisition function with correlated beliefs for best arm identification (KGCB) proposed by Negoescu et al. (2011); the gap-based algorithm for best arm identification (BayesGap) introduced by Hoffman et al. (2013); the track-and-stop algorithm for threshold bandits (Lazy TTS) developed by Rivera and Tewari (2024); and two gap-based algorithms for top ๐‘š arm identification, LinGIFA and ๐‘š -LinGapE, presented by Rรฉda et al. (2021a), which represent the state-of-the-art for returning multiple candidates.

Identifying all ๐œ€ -best arms is often more challenging than identifying the top ๐‘š arms or arms above a given threshold. To address this, we adopt a random setting where both the number of ๐œ€ -best arms and the ๐œ€ -threshold are randomly sampled. In this setting, top ๐‘š algorithms and threshold bandit algorithms only have access to the expected reward values, ensuring a fair comparison. Since BayesGap and KGCB operate under a fixed-budget setting, we use the average sample complexity of LinFACT as the budget for comparison and evaluate their performance accordingly. For BAI algorithms, we select arms whose empirical means are within ๐œ€ of the empirical best arm once the budget is exhausted.

(a) Synthetic I - Adaptive Setting (b) Synthetic II - Static Setting
Figure 4: Illustration of the Synthetic Experiment Settings

Note. In the adaptive setting (a), we randomly sample the threshold $\tilde{X}$ from a distribution with mean 1, and independently sample the number of $\varepsilon$-best arms $\tilde{m}$ from a distribution with mean $m$. The arms above and below the threshold are then uniformly drawn from the intervals $[\tilde{X}, \tilde{X}+\varepsilon]$ and $[\tilde{X}-\varepsilon, \tilde{X})$, respectively. In the static setting (b), we fix the threshold at $\Delta/2$ and randomly sample $\tilde{m}$ $\varepsilon$-best arms with reward $\Delta$.
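A sketch of how an adaptive instance of this kind could be generated. The specific threshold distribution below is our assumption (the text only fixes its mean at 1); the variance 3.0 for $\tilde{m}$ follows the experiment setup in Section 7.1.

```python
import numpy as np

def adaptive_instance(m_mean, eps, n_good=None, n_disturb=10, rng=None):
    """Sample rewards for the adaptive setting of Figure 4(a): draw a threshold
    X~ with mean 1, a number m~ of eps-best arms, then place good arms uniformly
    in [X~, X~ + eps) and disturbing arms in [X~ - eps, X~)."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.normal(1.0, 0.05)  # threshold with mean 1 (spread is illustrative)
    m = n_good if n_good is not None else max(1, int(round(rng.normal(m_mean, 3.0 ** 0.5))))
    good = rng.uniform(x, x + eps, size=m)             # the eps-best arms
    disturb = rng.uniform(x - eps, x, size=n_disturb)  # slightly worse, disturbing arms
    return x, np.concatenate([good, disturb])
```

Base arms with zero reward would be appended separately; they are omitted here since, as noted below, distinguishing them is not the binding difficulty.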

7.1Synthetic Experiments

Following Soare et al. (2014), Xu et al. (2018), and Azizi et al. (2021b), we categorize synthetic data into two types: adaptive and static settings. Figure 4 illustrates the construction of these synthetic datasets, with detailed configurations provided in Section EC.12. A summary of all settings is presented in Table 1.

In the adaptive setting, arms are divided into three categories: (1) the arms to be selected (i.e., the all ๐œ€ -best arms), (2) disturbing arms that are slightly worse, and (3) base arms with zero rewards. The primary challenge for algorithms is to distinguish between the arms in categories (1) and (2), while the base arms in category (3) can be ignored. Adaptive algorithms that effectively leverage shared information to explore similar arms perform well in this setting.

In the static setting, arms are divided into two categories: the all ๐œ€ -best arms and the base arms with zero rewards. In this case, algorithms must distinguish between all arms. Static algorithms that uniformly explore all arms are well-suited for this setting.

Table 1: Synthetic Experiment Settings

| Setting Index | Setting Category | Setting Details |
| --- | --- | --- |
| 1 | Adaptive | $(d, \mathbb{E}[m]) = (8, 4)$, $\varepsilon = 0.1$ |
| 2 | Adaptive | $(d, \mathbb{E}[m]) = (8, 4)$, $\varepsilon = 0.2$ |
| 3 | Adaptive | $(d, \mathbb{E}[m]) = (8, 4)$, $\varepsilon = 0.3$ |
| 4 | Adaptive | $(d, \mathbb{E}[m]) = (12, 4)$, $\varepsilon = 0.1$ |
| 5 | Adaptive | $(d, \mathbb{E}[m]) = (12, 4)$, $\varepsilon = 0.2$ |
| 6 | Adaptive | $(d, \mathbb{E}[m]) = (12, 4)$, $\varepsilon = 0.3$ |
| 7 | Static | $(d, \Delta) = (8, 1)$ |
| 8 | Static | $(d, \Delta) = (12, 1)$ |
| 9 | Static | $(d, \Delta) = (16, 1)$ |

Remark 7.1

A common misconception is that adaptive algorithms universally outperform static ones. While adaptive algorithms are typically efficient at focusing exploration on promising arms, they can be less effective in static environments. In such cases, adaptive methods may allocate samples inefficiently across candidate and baseline arms alike, leading to redundant exploration. In contrast, static algorithms can achieve the objective more efficiently by uniformly allocating samples across all arms, avoiding bias and over-exploration.

Experiment Setup. We benchmark our algorithms, LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด , against BayesGap, KGCB, LinGIFA, m-LinGapE, and Lazy TTS, focusing on both sample complexity and the ๐น โ€‹ 1 score.

We conduct each experiment using different data types (adaptive or static), arm dimensions ( ๐‘‘ ), and numbers of arms ( ๐พ ). For each configuration, we generate 10 values of ๐‘š from a normal distribution centered at the expected value ๐”ผ โ€‹ [ ๐‘š ] with a variance of 3.0, where ๐‘š is the input for a top- ๐‘š algorithm. For each sampled pair ( ๐‘š ~ , ๐‘‹ ~ ) , where ๐‘š ~ denotes the number of ๐œ€ -best arms and ๐‘‹ ~ represents the value of the best arm minus ๐œ€ , we repeat the experiment 100 times. We then compute the average ๐น โ€‹ 1 score across the 10 ( ๐‘š ~ , ๐‘‹ ~ ) pairs, resulting in 1,000 total executions per algorithm.
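The $F1$ score used throughout this section compares the returned set against the true set of $\varepsilon$-best arms. A minimal reference implementation of the per-trial score (averaging over pairs and repetitions is then straightforward):

```python
def f1_score(selected, good):
    """F1 of a returned arm set against the true set of eps-best arms."""
    if not selected:
        return 0.0
    tp = len(selected & good)          # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(selected)
    recall = tp / len(good)
    return 2 * precision * recall / (precision + recall)
```

Because $F1$ is the harmonic mean of precision and recall, an algorithm cannot score well by either returning everything (low precision) or returning a single safe arm (low recall), which is what makes it a natural metric for the all $\varepsilon$-best arms task.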

In practice, we observe that KGCB, LinGIFA, m-LinGapE, and Lazy TTS are computationally intensive when the sampling budget is high. The time-consuming nature of KGCB has already been noted in the literature (Negoescu et al. 2011). For Lazy TTS, the algorithm requires repeatedly evaluating an objective function within an optimization problem, where each evaluation has a time complexity of $O(Kd^2 + d^3)$. As the optimization process involves a non-negligible number of iterations $n$, the total time complexity becomes $O(n(Kd^2 + d^3))$, making the algorithm inefficient.

For the two top- ๐‘š algorithms, the computational burden arises from performing matrix inversions for all arms, leading to a total time complexity of ๐‘‚ โ€‹ ( ๐พ โ€‹ ๐‘‘ 3 ) . In contrast, LinFACT achieves a significantly lower total time complexity of ๐‘‚ โ€‹ ( ๐พ โ€‹ ๐‘‘ 2 ) , as the optimization problem within our algorithm can be efficiently solved using a fixed-step gradient descent method. Table 2 presents the runtime for synthetic data, demonstrating that our algorithm is at least five times faster than all other methods. Notably, this performance gap is even more pronounced when using real data.
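To illustrate why a first-order method keeps the per-iteration cost at $O(Kd^2)$, here is a hypothetical Frank-Wolfe-style solver for the G-optimal design; this is a sketch under our own parameter choices, not the paper's exact fixed-step gradient routine.

```python
import numpy as np

def g_optimal_design(X, n_iter=200):
    """Frank-Wolfe-style iterations for the G-optimal design: repeatedly shift
    weight toward the arm with the largest predicted variance x^T V(pi)^{-1} x.
    With V^{-1} computed once per iteration, scoring all K arms costs O(K d^2).
    X: (K, d) matrix of arm feature vectors."""
    K, d = X.shape
    pi = np.full(K, 1.0 / K)                    # start from the uniform design
    for t in range(n_iter):
        # V(pi) = sum_a pi(a) a a^T, with a tiny ridge for numerical stability
        V = X.T @ (X * pi[:, None]) + 1e-10 * np.eye(d)
        Vinv = np.linalg.inv(V)
        scores = np.einsum('kd,de,ke->k', X, Vinv, X)  # x^T V^{-1} x per arm
        k = int(np.argmax(scores))              # worst-estimated direction
        gamma = 2.0 / (t + 3)                   # diminishing step size, gamma < 1
        pi = (1 - gamma) * pi
        pi[k] += gamma
    return pi
```

By the Kiefer-Wolfowitz equivalence theorem, at the optimum the maximal score $\max_a \boldsymbol{a}^\top V(\pi)^{-1}\boldsymbol{a}$ equals $d$, which provides a natural stopping criterion in place of a fixed iteration count.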

Table 2: Running Time (seconds) for Different Synthetic Experiments Among Algorithms

| Algorithm | Adaptive $(8,4)$, $\varepsilon=0.1$ | $\varepsilon=0.2$ | $\varepsilon=0.3$ | Adaptive $(12,4)$, $\varepsilon=0.1$ | $\varepsilon=0.2$ | $\varepsilon=0.3$ | Static $\Delta=1$, $d=8$ | $d=12$ | $d=16$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LinFACT-G | **0.028** | **0.030** | **0.028** | **0.078** | **0.078** | **0.076** | **0.005** | **0.007** | **0.009** |
| LinFACT-$\mathcal{XY}$ | *0.037* | *0.038* | *0.038* | *0.163* | *0.155* | *0.145* | *0.015* | *0.028* | *0.049* |
| BayesGap | 0.112 | 0.110 | 0.110 | 0.306 | 0.304 | 0.307 | 0.036 | 0.071 | 0.112 |
| LinGIFA | 0.313 | 0.265 | 0.236 | 1.943 | 1.484 | 1.424 | 0.160 | 0.529 | 1.297 |
| m-LinGapE | 0.195 | 0.166 | 0.157 | 1.362 | 1.053 | 0.984 | 0.116 | 0.352 | 0.932 |
| Lazy TTS | 0.991 | 3.904 | 12.512 | 24.583 | 26.941 | 25.622 | 0.198 | 0.871 | 2.164 |
| KGCB | 1.990 | 1.936 | 1.670 | 6.245 | 6.611 | 6.082 | 0.303 | 0.745 | 1.386 |

Note. Adaptive columns report $(d, \mathbb{E}[m])$; static columns fix $\Delta = 1$. The best result in each column is in bold and the second best is in italics.

Experiment Results. Our experimental results are presented in Figures 5 and 6. In Figure 5, the vertical axis denotes the $F1$ score, with higher values indicating better algorithm performance. The first row of six plots shows the results under adaptive settings. As $\varepsilon$ increases, the non-optimal (disturbing) arms in the adaptive setting move progressively farther from the optimal arms, making them easier to distinguish. Consequently, the $F1$ score increases from left to right. An exception is BayesGap, which performs best when $\varepsilon = 0.2$. This occurs because best-arm identification algorithms, such as BayesGap, struggle to differentiate optimal arms from disturbing ones when they are close ($\varepsilon = 0.1$) and fail to fully explore the optimal arms when they are not closely clustered with the best arm ($\varepsilon = 0.3$).

Our LinFACT algorithms consistently outperform top- ๐‘š algorithms, BayesGap, and KGCB. While our algorithms perform slightly worse than Lazy TTS for some cases, they have much lower sample complexity, as shown in Figure 6, meaning that Lazy TTS requires substantially more samples to achieve these results. When comparing LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด , we observe that in adaptive settings, the ๐น โ€‹ 1 scores are similar, but LinFACT- ๐’ณ โ€‹ ๐’ด achieves lower sample complexity. In static settings, however, LinFACT-G attains a higher ๐น โ€‹ 1 score with a reduced sample complexity. This difference stems from the distinct focus of the two designs: the ๐’ณ โ€‹ ๐’ด -optimal design prioritizes pulling arms to obtain better estimates along the directions representing differences between arms, while the G-optimal design aims to improve estimates along the directions representing all arms.

Figure 5: ๐น โ€‹ 1 Scores for Different Synthetic Experiments Among Algorithms

Note. The y-axis reports the F1 score, which reflects how accurately each algorithm identifies ๐œ€ -best arms. LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด consistently achieve high F1 scores. While Lazy TTS occasionally attains higher scores, we demonstrate in the next figure that it requires significantly more samples to do so. Detailed configurations of each experimental setting are provided in Table 1.

Figure 6:Sample Complexity for Different Synthetic Experiments Among Algorithms

Note. LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด demonstrate sample complexities comparable to LinGIFA and m-LinGapE while achieving higher F1 scores in the adaptive settings (Settings 1 to 6), and consistently outperform all other algorithms in the static settings (Settings 7 to 9). BayesGap and KGCB are excluded from this comparison as they are designed for the fixed-budget setting, and thus their sample complexity is not well-defined.

7.2Experiments with Real Data - Drug Discovery

Experiment Setup. We adopt the Free-Wilson model (Katz et al. 1977) and use real data from a drug discovery task. The Free-Wilson model is a linear framework in which the overall efficacy of a compound is expressed as the sum of the contributions from each substituent on the base molecule, along with the effect of the base molecule itself (Negoescu et al. 2011).

Figure 7: An Example of Molecule and Substituent Locations with Site 1, …, Site 5

Note. We begin with a base molecule containing multiple attachment sites for chemical substituents. By varying these substituents, we generate a diverse set of compounds and aim to identify those with desirable properties.

Each compound is modeled as an arm represented by a binary indicator vector. Suppose there are $N$ modification sites, with site $n \in [N]$ offering $l_n$ alternative substituents. Then each arm $\boldsymbol{a}$ lies in $\mathbb{R}^{1 + \sum_{n \in [N]} l_n}$, where the initial entry corresponds to the base molecule (i.e., the intercept term). For each site, the corresponding segment of the vector has exactly one entry set to 1 (indicating the chosen substituent), with the remaining $l_n - 1$ entries set to 0. This results in a total of $\prod_{n \in [N]} l_n$ unique compound configurations.
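This one-hot encoding can be sketched as follows (the function name is ours); with sites offering 4, 5, 4, 3, and 4 substituents it reproduces the 960 compounds in $\mathbb{R}^{21}$ used in the experiments:

```python
import numpy as np
from itertools import product

def compound_arms(sites):
    """Build the binary Free-Wilson arm vectors: an intercept entry for the
    base molecule plus one one-hot block per site.  `sites` lists the number
    of candidate substituents at each modification site."""
    dim = 1 + sum(sites)
    arms = []
    for choice in product(*[range(l) for l in sites]):
        v = np.zeros(dim)
        v[0] = 1.0                   # base-molecule intercept
        offset = 1
        for l, c in zip(sites, choice):
            v[offset + c] = 1.0      # chosen substituent at this site
            offset += l
        arms.append(v)
    return np.array(arms)
```

Each row sums to $1 + N$ (one intercept plus one chosen substituent per site), and the expected reward of a compound is the inner product of its row with the unknown Free-Wilson contribution vector.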

We conducted our experiments using data as described in Katz et al. (1977), retaining only the non-zero entries. The base molecule, illustrated in Figure 7, contains five sites where substituents can be attached. Each site offers 4, 5, 4, 3, and 4 candidate substituents, respectively, resulting in 960 possible compounds. Each arm ๐’‚ is represented as a vector in โ„ 21 .

We benchmarked our algorithms, LinFACT-G and LinFACT-$\mathcal{XY}$, against LinGIFA and m-LinGapE, evaluating how precision, recall, $F1$ score, and sample complexity vary with the failure probability $\delta$. In this study, the good set was defined to include 20 $\varepsilon$-best arms, which corresponds to $\varepsilon = 4.325$. All methods were tested over 10 trials, with $\delta$ ranging from 0.1 to 0.9 in increments of 0.1.

Experiment results. The experimental results are shown in Figure 8. LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด consistently deliver high precision, recall, and ๐น โ€‹ 1 scores across various failure probabilities ๐›ฟ , as shown in Figures 8aโ€“8c. In particular, their precision remains close to 1.0, indicating that nearly all selected arms are truly ๐œ€ -best. Their recall is also robust across ๐›ฟ , suggesting strong coverage of the good set with minimal omission. This balance between high precision and recall leads to ๐น โ€‹ 1 scores that remain consistently near optimal, even as ๐›ฟ varies from 0.1 to 0.9. In contrast, LinGIFA and m-LinGapE exhibit larger fluctuations and overall lower values in all three metrics, with especially degraded recall and ๐น โ€‹ 1 performance at moderate values of ๐›ฟ .

In addition, as illustrated in Figure 8d, LinFACT- ๐’ณ โ€‹ ๐’ด achieves the lowest sample complexity, followed closely by LinFACT-G, with both outperforming all baseline methods. These performance disparities highlight the reliability and robustness of LinFACT methods. Finally, LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด also demonstrate strong computational efficiency: both complete the task within one minute, whereas LinGIFA and m-LinGapE take about four times longer, and Lazy TTS requires over 30 minutes.

(a)Precision (b)Recall (c) ๐น โ€‹ 1 Score (d)Sample Complexity Figure 8:Precision, Recall, and ๐น โ€‹ 1 Score for Various Failure Probabilities ๐›ฟ

Note. LinFACT-G and LinFACT- ๐’ณ โ€‹ ๐’ด consistently demonstrate the best performance across all metrics. LinFACT- ๐’ณ โ€‹ ๐’ด achieves the lowest sample complexity while maintaining high accuracy. BayesGap and KGCB are excluded as they operate under a fixed-budget setting. Lazy TTS is also omitted due to excessive runtime.

8Conclusion

In this paper, we address the challenge of identifying all ๐œ€ -best arms in linear bandits, motivated by applications such as drug discovery. We establish the first information-theoretic lower bound to characterize the problemโ€™s complexity and derive a matching upper bound. Our LinFACT algorithm achieves instance-optimal performance up to a logarithmic factor under the ๐’ณ โ€‹ ๐’ด -optimal design criterion.

We further extend our analysis to settings with model misspecification and generalized linear models (GLMs), deriving new upper bounds and providing insights into algorithmic behavior under these broader conditions. These results generalize and recover the guarantees from the perfectly linear case as special instances. Our numerical experiments confirm that LinFACT outperforms existing methods in both sample and computational efficiency, while maintaining high accuracy in identifying all ๐œ€ -best arms.

Future Research Directions. First, while our current work focuses on the fixed-confidence setting, many real-world applications operate under a fixed sampling budget, where the objective is to achieve the best outcome within a limited number of trials. Extending the algorithm to this fixed-budget setting and establishing corresponding theoretical guarantees remains an important avenue for exploration. Moreover, deriving fundamental lower bounds in the fixed-budget regime is still an open question. Second, although we have proposed extensions for both misspecified linear bandits and generalized linear models (GLMs), future work could benefit from developing separate algorithms tailored to each setting. Such targeted designs may offer improved performance.

References

- Abbasi-Yadkori Y, Pál D, Szepesvári C (2011) Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems 24.
- Abe N, Long PM (1999) Associative reinforcement learning using linear probabilistic concepts. ICML, 3–11 (Citeseer).
- Abernethy JD, Amin K, Zhu R (2016) Threshold bandits, with and without censored feedback. Advances in Neural Information Processing Systems 29.
- Al Marjani A, Kocak T, Garivier A (2022) On the complexity of all ε-best arms identification. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 317–332 (Springer).
- Alaei S, Makhdoumi A, Malekian A, Pekeč S (2022) Revenue-sharing allocation strategies for two-sided media platforms: Pro-rata vs. user-centric. Management Science 68(12):8699–8721.
- Allen-Zhu Z, Li Y, Singh A, Wang Y (2017) Near-optimal discrete optimization for experimental design: A regret minimization approach.
- Allen-Zhu Z, Li Y, Singh A, Wang Y (2021) Near-optimal discrete optimization for experimental design: A regret minimization approach. Mathematical Programming 186:439–478.
- Azizi MJ, Kveton B, Ghavamzadeh M (2021) Fixed-budget best-arm identification in structured bandits. arXiv preprint arXiv:2106.04763.
- Bechhofer RE (1954) A single-sample multiple decision procedure for ranking means of normal populations with known variances. The Annals of Mathematical Statistics 16–39.
- Boyd EA, Bilegan IC (2003) Revenue management and e-commerce. Management Science 49(10):1363–1386.
- Bubeck S, Cesa-Bianchi N, et al. (2012) Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning 5(1):1–122.
- Bubeck S, Wang T, Viswanathan N (2013) Multiple identifications in multi-armed bandits. International Conference on Machine Learning, 258–265 (PMLR).
- Chen CH, Lin J, Yücesan E, Chick SE (2000) Simulation budget allocation for further enhancing the efficiency of ordinal optimization. Discrete Event Dynamic Systems 10:251–270.
- Das P, Sercu T, Wadhawan K, Padhi I, Gehrmann S, Cipcigan F, Chenthamarakshan V, Strobelt H, Dos Santos C, Chen PY, et al. (2021) Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations. Nature Biomedical Engineering 5(6):613–623.
- Elmaghraby W, Keskinocak P (2003) Dynamic pricing in the presence of inventory considerations: Research overview, current practices, and future directions. Management Science 49(10):1287–1309.
- Even-Dar E, Mannor S, Mansour Y, Mahadevan S (2006) Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research 7(6).
- Fan W, Hong LJ, Nelson BL (2016) Indifference-zone-free selection of the best. Operations Research 64(6):1499–1514.
- Feng Q, Ma T, Zhu R (2025) Satisficing regret minimization in bandits. Proceedings of the 13th International Conference on Learning Representations.
- Feng Y, Caldentey R, Ryan CT (2022) Robust learning of consumer preferences. Operations Research 70(2):918–962.
- Fiez T, Jain L, Jamieson KG, Ratliff L (2019) Sequential experimental design for transductive linear bandits. Advances in Neural Information Processing Systems 32.
- Frazier P, Powell W, Dayanik S (2009) The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing 21(4):599–613.
- Frazier PI, Powell WB, Dayanik S (2008) A knowledge-gradient policy for sequential information collection. SIAM Journal on Control and Optimization 47(5):2410–2439.
- Free SM, Wilson JW (1964) A mathematical contribution to structure-activity studies. Journal of Medicinal Chemistry 7(4):395–399.
- Gabillon V, Ghavamzadeh M, Lazaric A (2012) Best arm identification: A unified approach to fixed budget and fixed confidence. Advances in Neural Information Processing Systems 25.
- Ghosh A, Chowdhury SR, Gopalan A (2017) Misspecified linear bandits. Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
- Godinho de Matos M, Ferreira P, Smith MD (2018) The effect of subscription video-on-demand on piracy: Evidence from a household-level randomized experiment. Management Science 64(12):5610–5630.
- Hoffman MW, Shahriari B, de Freitas N (2013) Exploiting correlation and budget constraints in Bayesian multi-armed bandit optimization. arXiv preprint arXiv:1303.6746.
- Hong LJ, Fan W, Luo J (2021) Review on ranking and selection: A new perspective. Frontiers of Engineering Management 8(3):321–343.
- Huang Z, Zeng DD, Chen H (2007) Analyzing consumer-product graphs: Empirical findings and applications in recommender systems. Management Science 53(7):1146–1164.
- Kalyanakrishnan S, Stone P (2010) Efficient selection of multiple bandit arms: Theory and practice. ICML, volume 10, 511–518.
- Kalyanakrishnan S, Tewari A, Auer P, Stone P (2012) PAC subset selection in stochastic multi-armed bandits. ICML, volume 12, 655–662.
- Katz R, Osborne SF, Ionescu F (1977) Application of the Free-Wilson technique to structurally related series of homologs. Quantitative structure-activity relationship studies of narcotic analgetics. Journal of Medicinal Chemistry 20(11):1413–1419.
- Kaufmann E, Cappé O, Garivier A (2016) On the complexity of best arm identification in multi-armed bandit models. Journal of Machine Learning Research 17:1–42.
- Kaufmann E, Kalyanakrishnan S (2013) Information complexity in bandit subset selection. Conference on Learning Theory, 228–251 (PMLR).
- Kim SH, Nelson BL (2001) A fully sequential procedure for indifference-zone selection in simulation. ACM Transactions on Modeling and Computer Simulation (TOMACS) 11(3):251–273.
- Kim SH, Nelson BL (2006) Selecting the best system. Handbooks in Operations Research and Management Science 13:501–534.
- Koenig LW, Law AM (1985) A procedure for selecting a subset of size m containing the l best of k independent normal populations, with applications to simulation. Communications in Statistics-Simulation and Computation 14(3):719–734.
- Komiyama J, Ariu K, Kato M, Qin C (2023) Rate-optimal Bayesian simple regret in best arm identification. Mathematics of Operations Research.
- Kveton B, Zaheer M, Szepesvari C, Li L, Ghavamzadeh M, Boutilier C (2023) Randomized exploration in generalized linear bandits.
- Lattimore T, Szepesvári C (2020) Bandit Algorithms (Cambridge University Press).
- Lattimore T, Szepesvari C, Weisz G (2020) Learning with good feature representations in bandits and in RL with a generative model.
- Li L, Lu Y, Zhou D (2017) Provably optimal algorithms for generalized linear contextual bandits. International Conference on Machine Learning, 2071–2080 (PMLR).
- Li Z, Fan W, Hong LJ (2024) The (surprising) sample optimality of greedy procedures for large-scale ranking and selection. Management Science.
- Locatelli A, Gutzeit M, Carpentier A (2016) An optimal algorithm for the thresholding bandit problem. International Conference on Machine Learning, 1690–1698 (PMLR).
- Mannor S, Tsitsiklis JN (2004) The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research 5(Jun):623–648.
- Mason B, Jain L, Tripathy A, Nowak R (2020) Finding all ε-good arms in stochastic bandits. Advances in Neural Information Processing Systems 33:20707–20718.
- McCullagh P (2019) Generalized Linear Models (Routledge).
- Negoescu DM, Frazier PI, Powell WB (2011) The knowledge-gradient algorithm for sequencing experiments in drug discovery. INFORMS Journal on Computing 23(3):346–363.
- Peukert C, Sen A, Claussen J (2023) The editor and the algorithm: Recommendation technology in online news. Management Science.
- Pukelsheim F (2006) Optimal Design of Experiments (SIAM).
- Qin C, You W (2025) Dual-directed algorithm design for efficient pure exploration. Operations Research.
- Réda C, Kaufmann E, Delahaye-Duriez A (2021a) Top-m identification for linear bandits. International Conference on Artificial Intelligence and Statistics, 1108–1116 (PMLR).
- Réda C, Tirinzoni A, Degenne R (2021b) Dealing with misspecification in fixed-confidence linear top-m identification. Advances in Neural Information Processing Systems 34:25489–25501.
- Rivera EO, Tewari A (2024) Optimal thresholding linear bandit. arXiv preprint arXiv:2402.09467.
- Russo D (2020) Simple Bayesian algorithms for best-arm identification. Operations Research 68(6):1625–1647.
- Ryzhov IO, Powell WB, Frazier PI (2012) The knowledge gradient algorithm for a general class of online learning problems. Operations Research 60(1):180–195.
- Shen H, Hong LJ, Zhang X (2021) Ranking and selection with covariates for personalized decision making. INFORMS Journal on Computing 33(4):1500–1519.
- Shin D, Broadie M, Zeevi A (2018) Tractable sampling strategies for ordinal optimization. Operations Research 66(6):1693–1712.
- Simchi-Levi D, Wang C, Xu J (2024) On experimentation with heterogeneous subgroups: An asymptotic optimal δ-weighted-PAC design. SSRN Electronic Journal.
- Soare M, Lazaric A, Munos R (2014) Best-arm identification in linear bandits. Advances in Neural Information Processing Systems 27.
- Thompson WR (1933) On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25(3-4):285–294.
- Thornton C, Hutter F, Hoos HH, Leyton-Brown K (2013) Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 847–855.
- Wolke R, Schwetlick H (1988) Iteratively reweighted least squares: Algorithms, convergence analysis, and numerical comparisons. SIAM Journal on Scientific and Statistical Computing 9(5):907–921.
- Xu L, Honda J, Sugiyama M (2018) A fully adaptive algorithm for pure exploration in linear bandits. International Conference on Artificial Intelligence and Statistics, 843–851 (PMLR).
- Yang J, Tan V (2022) Minimax optimal fixed-budget best arm identification in linear bandits. Advances in Neural Information Processing Systems, volume 35, 12253–12266 (Curran Associates, Inc.).
- Zanette A, Lazaric A, Kochenderfer M, Brunskill E (2020) Learning near optimal policies with low inherent Bellman error. International Conference on Machine Learning, 10978–10989 (PMLR).

E-Companion โ€“ Identifying All ๐œ€ -Best Arms In Linear Bandits With Misspecification

Appendix EC.1Additional Literature EC.1.1Misspecified Linear Bandits.

The linear bandit (LB) problem, introduced by Abe and Long (1999), extends the multi-armed bandits (MABs) framework by incorporating structural relationships among different arms. In the context of best arm identification, Garivier and Kaufmann (2016) established a classical lower bound, which was later extended to linear bandits by Fiez et al. (2019) using transportation inequalities.

The foundational study of linear bandits in the pure exploration framework was conducted by Hoffman et al. (2014), who addressed the best arm identification (BAI) problem in a fixed-budget setting while considering correlations among arm distributions. They proposed BayesGap, a Bayesian variant of the gap-based exploration algorithm (Gabillon et al. 2012). Although BayesGap outperformed methods that ignore correlations and structural relationships, its limitation of ceasing to pull arms deemed sub-optimal hindered its effectiveness in linear bandit pure exploration.

A key distinction between stochastic MABs and linear bandits is that, in MABs, once an armโ€™s sub-optimality is confirmed with high probability, it is no longer pulled. In linear bandits, however, even sub-optimal arms can offer valuable information about the parameter vector, improving confidence in estimates and aiding the discrimination of near-optimal arms. This insight has led to the adoption of optimal linear experiment design as a crucial framework for linear bandit pure exploration (Abbasi-Yadkori et al. 2011, Soare et al. 2014, Fiez et al. 2019, Rรฉda et al. 2021, Yang and Tan 2021, Azizi et al. 2021a).

When applying linear models to real data, misspecification inevitably arises in situations where the data deviates from perfect linearity. The concept of misspecified bandit models was introduced in the context of cumulative regret by Ghosh et al. (2017), who demonstrated a significant limitation: any linear bandit algorithm (e.g., OFUL (Abbasi-Yadkori et al. 2011) or LinUCB (Li et al. 2010)), which achieves optimal regret bounds on perfectly linear instances, can suffer linear regret on certain misspecified models. To address this, they proposed a hypothesis-test-based algorithm that avoids linear regret and achieves UCB-type sublinear regret for models with non-sparse deviations from linearity. Lattimore et al. (2020) further analyzed misspecification, showing that elimination-based algorithms with G-optimal design perform well under misspecification but incur an additional linear regret term proportional to the misspecification magnitude over the horizon.

In the pure exploration setting, misspecified linear models were first studied in the context of identifying the top ๐‘š best arms by Rรฉda et al. (2021), who introduced the MisLid algorithm, leveraging orthogonal parameterization to address misspecification. Subsequent research examined misspecification in ordinal optimization (Ahn et al. 2024), proposing prospective sampling methods that reduce the impact of misspecification as the sample size increases. Building on the definition of model misspecification and the optimization approach based on orthogonal parameterization from Rรฉda et al. (2021), we develop new algorithms for identifying all ๐œ€ -best arms in misspecified linear bandits and establish new upper bounds.

EC.1.2Generalized Linear Bandits.

The generalized linear bandit (GLB) model (Filippi et al. 2010, Ahn and Shin 2020, Kveton et al. 2023) extends the multi-armed bandit framework by incorporating generalized linear models (GLMs) (McCullagh 2019) to model expected rewards. Specifically, the expected reward of each arm is given by a known link function applied to the inner product of a feature vector and an unknown parameter vector. Most existing algorithms for generalized linear bandits employ the upper confidence bound (UCB) approach, with randomized GLM algorithms (Chapelle and Li 2011, Russo et al. 2018, Kveton et al. 2023) demonstrating superior performance.

In the context of pure exploration, Azizi et al. (2021b) introduced the first practical algorithm for best arm identification in generalized linear bandits, supported by theoretical analysis. Their work extends the best arm identification problem from linear models to more complex settings where the relationship between features and rewards follows generalized linear models (GLMs). Building on this foundation, we extend the pure exploration setting from best arm identification (BAI) to identifying all $\varepsilon$-best arms, providing analogous analyses and theoretical results for GLMs.

Appendix EC.2 General Pure Exploration Model

In this section, we present a brief discussion about the general pure exploration problem. A more comprehensive explanation can be found in Qin and You (2025).

The decision-maker seeks to answer a query concerning the mean parameters $\boldsymbol{\mu}$ by adaptively allocating the sampling budget across the available arms. This query typically involves identifying a subset of arms that satisfy certain criteria, and the goal is to determine the correct answer with high probability. Let $\mathcal{I}(\boldsymbol{\mu})$ denote the correct answer, which in our setting corresponds to the set of all $\varepsilon$-best arms. Define $\widetilde{M}$ as the set of parameters that yield a unique answer. Let $\Xi$ represent the collection of all possible answers, and for each $\mathcal{I}' \in \Xi$, define $M_{\mathcal{I}'} := \{\vartheta \in \widetilde{M} : \mathcal{I}(\vartheta) = \mathcal{I}'\}$ as the set of parameters for which $\mathcal{I}'$ is the correct answer. The overall parameter space of interest is then given by $M := \bigcup_{\mathcal{I}' \in \Xi} M_{\mathcal{I}'}$.

Recall that an algorithm is defined as a triplet $(A_t, \tau_\delta, \hat{a}_\tau)$, consisting of a sampling rule, a stopping time, and a recommendation rule. The algorithm's sample complexity is quantified by the number of samples, denoted $\tau_\delta$, at the point of termination. The objective is to formulate algorithms that minimize the expected sample complexity $\mathbb{E}_{\boldsymbol{\mu}}[\tau_\delta]$ across the set of algorithms $\mathcal{H}$. As stated in Kaufmann et al. (2016), when $\delta \in (0, 1)$, the non-asymptotic problem complexity of an instance $\boldsymbol{\mu}$ can be defined as

$$\kappa(\boldsymbol{\mu}) := \inf_{\mathrm{Algo} \in \mathcal{H}} \frac{\mathbb{E}_{\boldsymbol{\mu}}[\tau_\delta]}{\log(1/2.4\delta)}. \tag{EC.2.1}$$

This instance-dependent complexity indicates the smallest possible constant such that the expected sample complexity $\mathbb{E}_{\boldsymbol{\mu}}[\tau_\delta]$ scales in alignment with $\log(1/2.4\delta)$. The problem complexity $\kappa(\boldsymbol{\mu})$ is subject to an information-theoretic lower bound. This lower bound can be expressed as the optimal solution of an allocation problem, which we present in Proposition 3.1. To build this framework, we next introduce three important concepts: culprits, alternative sets, and the $C_x$ function.

EC.2.1 Culprits and Alternative Sets

Let ๐’ณ โ€‹ ( ๐ ) denote the set of culprits under the true mean vector ๐ . These culprits are responsible for deviations from the correct answer โ„ โ€‹ ( ๐ ) . The structure of ๐’ณ โ€‹ ( ๐ ) varies depending on the specific exploration task, and identifying these culprits is essential for characterizing the problemโ€™s complexity and guiding the design of effective algorithms.

To identify the correct answer, an algorithm must distinguish among different instances within the parameter space $M$. Accordingly, for any instance $\boldsymbol{\mu} \in M$, the instance-dependent problem complexity $\kappa(\boldsymbol{\mu})$ is determined by the structure of the corresponding alternative set

$$\mathrm{Alt}(\boldsymbol{\mu}) := \{\vartheta \in M : \mathcal{I}(\vartheta) \neq \mathcal{I}(\boldsymbol{\mu})\} = \bigcup_{x \in \mathcal{X}(\boldsymbol{\mu})} \mathrm{Alt}_x(\boldsymbol{\mu}), \tag{EC.2.2}$$

which represents the set of parameters that return a solution different from the correct solution $\mathcal{I}(\boldsymbol{\mu})$.

As an example, consider the task of identifying the single best arm. In this case, the culprit set is $\mathcal{X}(\boldsymbol{\mu}) = [K] \setminus \{I^*(\boldsymbol{\mu})\}$, consisting of all arms except the current best arm. For each culprit $x \in \mathcal{X}(\boldsymbol{\mu})$, if there exists a parameter $\vartheta$ under which arm $x$ has a higher mean than $I^*(\boldsymbol{\mu})$, then $\vartheta$ leads to an incorrect identification caused by $x$. Each such culprit is associated with an alternative set, namely the set of parameters that yield a wrong answer due to $x$, given by $\mathrm{Alt}_x(\boldsymbol{\mu}) = \{\vartheta \in M : \vartheta_x \geq \vartheta_{I^*(\boldsymbol{\mu})}\}$ for $x \in \mathcal{X}(\boldsymbol{\mu})$.

EC.2.2 ๐‘ช ๐’™ unction

The task of identifying the correct answer can be formulated as a sequential hypothesis testing problem, which can be addressed using the Sequential Generalized Likelihood Ratio (SGLR) test (Kaufmann et al. 2016, Kaufmann and Koolen 2021). The SGLR statistic is defined to test a potentially composite null hypothesis $H_0: (\boldsymbol{\mu} \in \Omega_0)$ against a potentially composite alternative hypothesis $H_1: (\boldsymbol{\mu} \in \Omega_1)$, and is given by

$$\mathrm{SGLR}_t = \frac{\sup_{\vartheta \in \Omega_0 \cup \Omega_1} L(X_1, X_2, \ldots, X_t; \vartheta)}{\sup_{\vartheta \in \Omega_0} L(X_1, X_2, \ldots, X_t; \vartheta)}, \tag{EC.2.3}$$

where $X_1, X_2, \ldots, X_t$ are the observed values from arm pulls, and $L(\cdot)$ denotes the likelihood function based on these observations and an unknown parameter $\vartheta$. The set $\Omega_0$ corresponds to the restricted parameter space under the null hypothesis, while $\Omega_0 \cup \Omega_1$ defines the full parameter space under consideration, encompassing both the null and alternative hypotheses. These sets correspond to the alternative regions introduced in the previous section. A large value of $\mathrm{SGLR}_t$ indicates stronger evidence against the null hypothesis and supports rejecting it.

We consider distributions from a single-parameter exponential family parameterized by their means, following the formulation in Garivier and Kaufmann (2016). This family includes the Bernoulli, Poisson, and Gamma distributions with known shape parameters, as well as the Gaussian distribution with known variance. For each culprit $x \in \mathcal{X}(\hat{\boldsymbol{\mu}})$, where $\hat{\boldsymbol{\mu}}$ is the empirical mean based on observed data, we test the hypotheses $H_{0,x}: \boldsymbol{\mu} \in \mathrm{Alt}_x(\hat{\boldsymbol{\mu}})$ versus $H_{1,x}: \boldsymbol{\mu} \notin \mathrm{Alt}_x(\hat{\boldsymbol{\mu}})$. When $\hat{\boldsymbol{\mu}}(t) \in \Omega_0 \cup \Omega_1$, the generalized likelihood ratio statistic in equation (EC.2.3) can be expressed in terms of a self-normalized sum. This leads to a formal expression of the SGLR statistic in Proposition EC.1, derived through maximum likelihood estimation and a reformulation of the KL divergence.

Proposition EC.1 (Kaufmann and Koolen (2021))

The generalized likelihood ratio statistic for each culprit $x \in \mathcal{X}(\boldsymbol{\mu})$ at time step $t$ is defined as

$$\hat{\Lambda}_{t,x} = \ln(\mathrm{SGLR}_t) = \inf_{\vartheta \in \mathrm{Alt}_x(\hat{\boldsymbol{\mu}}(t))} \sum_{i \in [K]} N_i(t)\, \mathrm{KL}(\hat{\mu}_i(t), \vartheta_i), \tag{EC.2.4}$$

where $\mathrm{KL}(\cdot, \cdot)$ represents the KL divergence of the two distributions parameterized by their means, and $N_i(t) = t \cdot p_i$ is the expected number of observations allocated to arm $i \in [K]$ up to time $t$.

This proposition links the SGLR test to information-theoretic methods. To quantify the information and confidence required to assert that the true mean does not lie in $\mathrm{Alt}_x$ for all $x \in \mathcal{X}$, we define the $C_x$ function as the population version of the SGLR statistic, sharing the same form as equation (EC.2.4):

$$C_x(\boldsymbol{p}) = C_x(\boldsymbol{p}; \boldsymbol{\mu}) := \inf_{\vartheta \in \mathrm{Alt}_x} \sum_{i \in [K]} p_i\, \mathrm{KL}(\mu_i, \vartheta_i). \tag{EC.2.5}$$

With the introduction of culprits and the $C_x$ function, we arrive at the optimal allocation problem that defines the lower bound stated in Proposition 3.1. However, computing the lower bound can still be hard, since it requires solving the minimax problem in equation (18). While the KL divergence in equation (EC.2.5) is convex for Gaussians, minimizing the $C_x$ function over the culprit set $\mathcal{X}(\boldsymbol{\mu})$ can be non-convex. To address this, based on Proposition EC.2 below, we can write $\mathrm{Alt}(\boldsymbol{\mu})$ as a union of several convex sets. The following three equivalent expressions represent different ways of describing the lower bound, making the minimax problem in equation (18) tractable for every $x \in \mathcal{X}$.

$$\begin{aligned} \Gamma^*_{\boldsymbol{\mu}} &= \max_{\boldsymbol{p} \in \mathcal{S}_K} \inf_{\vartheta \in \mathrm{Alt}(\boldsymbol{\mu})} \sum_{i \in [K]} p_i\, \mathrm{KL}(\mu_i, \vartheta_i) \\ &= \max_{\boldsymbol{p} \in \mathcal{S}_K} \min_{x \in \mathcal{X}} \inf_{\vartheta \in \mathrm{Alt}_x} \sum_{i \in [K]} p_i\, \mathrm{KL}(\mu_i, \vartheta_i) \qquad \text{(EC.2.6)} \\ &= \max_{\boldsymbol{p} \in \mathcal{S}_K} \min_{x \in \mathcal{X}} \sum_{i \in [K]} p_i\, \mathrm{KL}(\mu_i, \vartheta^x_i) \qquad \text{(EC.2.7)} \\ &= \max_{\boldsymbol{p} \in \mathcal{S}_K} \min_{x \in \mathcal{X}} C_x(\boldsymbol{p}), \qquad \text{(EC.2.8)} \end{aligned}$$

where we utilize the existence of a finite union of convex sets and a unique minimizer $\vartheta^x$ from Proposition EC.2.
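To make the max-min program (EC.2.8) concrete, the sketch below evaluates it for a three-armed Gaussian instance with unit variance, using the closed-form $C_x$ for the best arm identification example of Section EC.2.1 and a naive grid search over the simplex. The instance and the grid solver are illustrative assumptions, not the paper's method:

```python
import itertools
import numpy as np

# For Gaussian arms with unit variance, the infimum in
#   C_x(p) = inf_{theta: theta_x >= theta_best} sum_i p_i (mu_i - theta_i)^2 / 2
# pools the best arm and arm x at their p-weighted average, which yields
# the closed form below (as in the Gaussian BAI analysis of Garivier and
# Kaufmann 2016).
def C_x(p, mu, x, best=0):
    w = p[best] * p[x] / (p[best] + p[x])
    return w * (mu[best] - mu[x]) ** 2 / 2.0

def gamma_star(mu, grid=60):
    """Grid-search approximation of max_p min_x C_x(p) over the simplex."""
    best_val, best_p = -np.inf, None
    for idx in itertools.product(range(1, grid), repeat=len(mu) - 1):
        if sum(idx) >= grid:
            continue
        p = np.array(list(idx) + [grid - sum(idx)]) / grid
        val = min(C_x(p, mu, x) for x in range(1, len(mu)))
        if val > best_val:
            best_val, best_p = val, p
    return best_val, best_p

mu = np.array([1.0, 0.5, 0.3])  # arm 0 is best; instance is made up
val, p_opt = gamma_star(mu)
print(val, p_opt)
```

The reciprocal of the returned value approximates the instance complexity $\kappa(\boldsymbol{\mu})$ of equation (EC.2.1) up to the $\log(1/2.4\delta)$ factor.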

Proposition EC.2

Assume that the distribution of each arm belongs to a canonical single-parameter exponential family, parameterized by its mean. Then, for each culprit $x \in \mathcal{X}(\boldsymbol{\mu})$:

1. (Wang et al. 2021) For each problem instance $\boldsymbol{\mu} \in M$, the alternative set $\mathrm{Alt}(\boldsymbol{\mu})$ is a finite union of convex sets. Namely, there exists a finite collection of convex sets $\{\mathrm{Alt}_x(\boldsymbol{\mu}) : x \in \mathcal{X}(\boldsymbol{\mu})\}$ such that $\mathrm{Alt}(\boldsymbol{\mu}) = \cup_{x \in \mathcal{X}(\boldsymbol{\mu})} \mathrm{Alt}_x(\boldsymbol{\mu})$.

2. Given a specific simplex distribution $\boldsymbol{p}$, there exists a unique $\vartheta^x \in \mathrm{Alt}_x(\boldsymbol{\mu})$ that achieves the infimum in equation (EC.2.5).

Proof EC.3

Proof. The proof proceeds in two parts. For the first part, the alternative set for any given culprit $x$ in our setting is given by

$$\mathrm{Alt}_x(\boldsymbol{\mu}) = \mathrm{Alt}_{i,j}(\boldsymbol{\mu}) \cup \mathrm{Alt}_m(\boldsymbol{\mu}), \tag{EC.2.9}$$

where $\mathrm{Alt}_{i,j}(\boldsymbol{\mu})$ and $\mathrm{Alt}_m(\boldsymbol{\mu})$ are defined in (EC.4.10) and (EC.4.19), respectively. Each of $\mathrm{Alt}_{i,j}(\boldsymbol{\mu})$ and $\mathrm{Alt}_m(\boldsymbol{\mu})$ is defined by linear constraints on $\vartheta$ and is therefore convex; this can be verified by confirming that any convex combination of two points in such a set remains in the set. Hence $\mathrm{Alt}(\boldsymbol{\mu})$ is a finite union of convex sets.

For the second part, when the reward distribution belongs to a single-parameter exponential family, the KL divergence $\mathrm{KL}(\chi, \chi')$ is continuous and strictly convex in $(\chi, \chi')$. This ensures that the infimum in equation (EC.2.5) is achieved uniquely by a single $\vartheta$. $\square$

EC.2.3 Stopping Rule

In this section, we introduce the stopping rule, which determines when the algorithm may stop and return an answer containing all $\varepsilon$-best arms with probability at least $1 - \delta$. This stopping rule is based on deviation inequalities linked to the generalized likelihood ratio test (Kaufmann and Koolen 2021).

For each ๐‘ฅ ๐‘– โˆˆ ๐’ณ โ€‹ ( ๐ ) with ๐‘– โˆˆ [ | ๐’ณ | ] , let ๐‘€ ๐‘– โ€‹ ( ๐ )

Alt ๐‘ฅ ๐‘– โ€‹ ( ๐ ) denote a partition of the realizable parameter space ๐‘€ introduced in Section EC.2.1, where each partition ๐‘€ ๐‘– โ€‹ ( ๐ ) is associated with a distinct culprit in ๐’ณ โ€‹ ( ๐ ) . This implies that for any ๐ โˆˆ ๐‘€ , the parameter space ๐‘€ can be uniquely partitioned to support a hypothesis testing framework. Let ๐‘€ 0 โ€‹ ( ๐ ) denote the subset of parameters where ๐ resides. If ๐ โˆˆ ๐‘€ , define ๐‘– โˆ— โ€‹ ( ๐ ) as the index of the unique element in the partition where the true mean value ๐ belongs, and here ๐‘– โˆ— โ€‹ ( ๐ )

0 . In other words, we have ๐ โˆˆ ๐‘€ 0 and Alt โ€‹ ( ๐ )

๐‘€
๐‘€ 0 . Since the ordering among suboptimal arms is irrelevant, the sets ๐‘€ ๐‘– โ€‹ ( ๐ ) for ๐‘– โˆˆ { 0 , 1 , 2 , โ€ฆ , | ๐’ณ | } form a valid partition of ๐‘€ for each ๐ . Accordingly, the alternative set can be further defined as

$$\mathrm{Alt}(\boldsymbol{\mu}) = \bigcup_{i :\, \boldsymbol{\mu} \notin M_i(\boldsymbol{\mu})} M_i(\boldsymbol{\mu}) = M \setminus M_{i^*(\boldsymbol{\mu})} = M \setminus M_0. \tag{EC.2.10}$$

Given a bandit instance $\boldsymbol{\mu}$, we consider a total of $|\mathcal{X}(\boldsymbol{\mu})| + 1$ hypotheses, defined as

$$H_0: (\boldsymbol{\mu} \in M_0(\boldsymbol{\mu})), \quad H_1: (\boldsymbol{\mu} \in M_1(\boldsymbol{\mu})), \quad \ldots, \quad H_{|\mathcal{X}|}: (\boldsymbol{\mu} \in M_{|\mathcal{X}|}(\boldsymbol{\mu})). \tag{EC.2.11}$$

By substituting the true mean vector $\boldsymbol{\mu}$ with its empirical estimate $\hat{\boldsymbol{\mu}}(t)$, the SGLR test becomes data-dependent, relying on the empirical means at each time step $t$. Consequently, the hypotheses tested at time $t$ are also data-dependent. If $\hat{\boldsymbol{\mu}}(t) \in M$, we define $\hat{i}(t) = i^*(\hat{\boldsymbol{\mu}}(t))$ as the index of the partition cell to which $\hat{\boldsymbol{\mu}}(t)$ belongs, that is, $\hat{\boldsymbol{\mu}}(t) \in M_{\hat{i}(t)}$. If instead $\hat{\boldsymbol{\mu}}(t) \notin M$, we set $\hat{\Lambda}_{t,x} = 0$ for all culprits $x$, meaning no hypothesis test is conducted at that time step, and the process continues. In practice, when $\hat{\boldsymbol{\mu}}(t) \notin M$, the algorithm can revert to uniform exploration. Since the true mean vector $\boldsymbol{\mu} \in M$ and $M$ is assumed to be an open set, the law of large numbers guarantees that $\hat{\boldsymbol{\mu}}(t)$ will eventually re-enter the parameter space, i.e., $\hat{\boldsymbol{\mu}}(t) \in M$ after sufficiently many samples.

We run $|\mathcal{X}|$ time-varying SGLR tests in parallel, each testing $H_0$ against $H_i$ for $i \in [|\mathcal{X}(\hat{\boldsymbol{\mu}}(t))|]$. The procedure stops when any of these tests rejects $H_0$, indicating that the corresponding alternative set is empirically the easiest to reject. At this point, the accepted hypothesis for $\hat{\boldsymbol{\mu}}(t) \in M$ is identified as the most likely to be correct. Given a sequence of exploration rates $(\hat{\beta}_t(\delta))_{t \in \mathbb{N}}$, the SGLR stopping rule in the pure exploration setting is defined as follows:

๐œ ๐›ฟ โ‰” inf { ๐‘ก โˆˆ โ„• : min ๐‘ฅ โˆˆ ๐’ณ โ€‹ ( ๐ ^ โ€‹ ( ๐‘ก ) ) โก ฮ› ^ ๐‘ก , ๐‘ฅ > ๐›ฝ ^ ๐‘ก โ€‹ ( ๐›ฟ ) }

inf { ๐‘ก โˆˆ โ„• : ๐‘ก โ‹… min ๐‘ฅ โˆˆ ๐’ณ โ€‹ ( ๐ ^ โ€‹ ( ๐‘ก ) ) โก ๐ถ ๐‘ฅ โ€‹ ( ๐’‘ ๐‘ก ; ๐ ^ โ€‹ ( ๐‘ก ) )

๐›ฝ ^ ๐‘ก โ€‹ ( ๐›ฟ ) } ,

(EC.2.12)

where the SGLR statistic $\hat{\Lambda}_{t,x}$ is defined in equation (EC.2.4). The testing process closely resembles the classical approach, except that the hypotheses are data-dependent and evolve over time.
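As an illustration of the stopping rule (EC.2.12), the sketch below runs uniform sampling on a two-armed unit-variance Gaussian instance and stops via the Gaussian GLR statistic; the threshold $\hat{\beta}_t(\delta) = \log((1 + \log t)/\delta)$ is a common heuristic choice, not the paper's exact exploration rate:

```python
import math
import random

def sglr_stop(mu, delta=0.05, max_t=100_000, seed=0):
    """Uniform sampling + Gaussian SGLR stopping for two-armed BAI.

    For unit-variance Gaussians, the GLR statistic for 'the empirically best
    arm is best' against the alternative equals
    N1*N2/(N1+N2) * (mu1_hat - mu2_hat)^2 / 2.
    """
    rng = random.Random(seed)
    n, s = [0, 0], [0.0, 0.0]
    for t in range(1, max_t + 1):
        a = t % 2                      # round-robin (uniform) sampling
        x = rng.gauss(mu[a], 1.0)      # unit-variance Gaussian reward
        n[a] += 1
        s[a] += x
        if min(n) == 0:
            continue
        m = [s[0] / n[0], s[1] / n[1]]
        glr = n[0] * n[1] / (n[0] + n[1]) * (m[0] - m[1]) ** 2 / 2.0
        if glr > math.log((1 + math.log(t)) / delta):
            return t, int(m[1] > m[0])  # stopping time, recommended arm
    return max_t, int(s[1] / n[1] > s[0] / n[0])

t_stop, best = sglr_stop([1.0, 0.0])
print(t_stop, best)
```

Larger gaps between the means trigger the threshold crossing, and hence stopping, earlier.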

We also provide insights from the perspective of the confidence region. Specifically, the event $\{\min_{x \in \mathcal{X}(\hat{\boldsymbol{\mu}}(t))} \hat{\Lambda}_{t,x} > \hat{\beta}_t(\delta)\} = \{\mathcal{C}_{t,\delta} \subseteq M_{\hat{i}(t)}\}$, where $\mathcal{C}_{t,\delta}$ denotes the confidence region of the mean vector, given by

$$\mathcal{C}_{t,\delta} := \Big\{\vartheta : \sum_{i=1}^{K} N_i(t)\, \mathrm{KL}(\hat{\mu}_i(t), \vartheta_i) \leq \hat{\beta}_t(\delta)\Big\}. \tag{EC.2.13}$$

Notably, although the confidence region $\mathcal{C}_{t,\delta}$ is defined for the mean vector $\boldsymbol{\mu}$, under the assumption of linear structure it is equivalent to the confidence region for the parameter $\boldsymbol{\theta}$, as discussed in Sections 2.2 and EC.4.1. This equivalence allows the stopping rule to be interpreted as follows: the algorithm halts once the confidence region for the mean vector lies entirely within a single partition region, aligning with the graphical interpretation of optimal allocation given in Section EC.4.2.

Appendix EC.3 Difference between G-Optimal Design and $\mathcal{XY}$-Optimal Design

Figure EC.3.1: Key Distinction Between G-Optimal and $\mathcal{XY}$-Optimal Designs in Terms of Stopping Criteria

Note. The contraction behavior and rate of the confidence region for the parameter $\hat{\boldsymbol{\theta}}_t$ differ: under G-optimal sampling (left), the region shrinks uniformly in all directions, whereas under $\mathcal{XY}$-optimal design (right), it contracts more strategically along directions critical for classification, allowing the confidence region to enter a decision region more rapidly and trigger earlier stopping.

Returning to the intuition and visual explanation in Section EC.4.1 and Figure EC.4.1, we see that G-optimal sampling inevitably leads to inefficient sampling. Figure EC.3.1 illustrates the advantage of adopting the $\mathcal{XY}$-optimal design from the perspective of the stopping rule: the algorithm terminates once the yellow confidence region for $\hat{\boldsymbol{\theta}}$ enters one of the decision regions $M_i$ ($i = 1, \ldots, 7$). Unlike the isotropic shrinkage of the confidence region under G-optimal design, the $\mathcal{XY}$-optimal design guides the region to contract more aggressively in directions critical for distinguishing arms. Rather than uniformly estimating $\boldsymbol{\theta}$, it prioritizes reducing uncertainty along directions that matter most for classification, leading to more efficient exploration.
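The contrast between the two designs can be sketched numerically: the same Frank-Wolfe-style updates minimize $\max_y \|y\|^2_{V(p)^{-1}}$ over either the arm vectors themselves (G-optimal) or the pairwise differences of arms ($\mathcal{XY}$-style). The arm set below is made up, and the update rule is a standard experimental-design heuristic rather than the paper's algorithm:

```python
import numpy as np

def design_criterion(p, arms, dirs):
    """max_y ||y||^2_{V(p)^{-1}} over the given directions."""
    V = sum(pi * np.outer(a, a) for pi, a in zip(p, arms))
    Vinv = np.linalg.inv(V)
    return max(y @ Vinv @ y for y in dirs)

def frank_wolfe_design(arms, dirs, iters=2000):
    """Wynn/Fedorov-style updates minimizing max_y ||y||^2_{V(p)^{-1}}."""
    K = len(arms)
    p = np.full(K, 1.0 / K)
    for t in range(iters):
        V = sum(pi * np.outer(a, a) for pi, a in zip(p, arms))
        Vinv = np.linalg.inv(V)
        y = max(dirs, key=lambda d: d @ Vinv @ d)     # worst direction
        k = int(np.argmax([(a @ Vinv @ y) ** 2 for a in arms]))
        gamma = 1.0 / (t + 2)                         # diminishing step size
        p = (1 - gamma) * p
        p[k] += gamma
    return p

arms = [np.array(a) for a in ([1.0, 0.0], [0.0, 1.0], [0.9, 0.5])]
g_dirs = arms                                      # G-optimal: the arms
xy_dirs = [ai - aj for i, ai in enumerate(arms)    # XY: arm differences
           for j, aj in enumerate(arms) if i != j]
p_g = frank_wolfe_design(arms, g_dirs)
p_xy = frank_wolfe_design(arms, xy_dirs)
print(p_g, design_criterion(p_g, arms, g_dirs))
print(p_xy, design_criterion(p_xy, arms, xy_dirs))
```

By the Kiefer-Wolfowitz theorem the G-optimal value approaches the dimension $d$; the two allocations generally differ, reflecting the directional focus described above.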

Appendix EC.4 Lower Bound for All $\varepsilon$-Best Arms Identification in Linear Bandits

This section provides both geometric insights regarding the stopping condition and formal proofs establishing the lower bound for identifying all $\varepsilon$-best arms in linear bandit settings.

EC.4.1 Visual Illustration of the Stopping Condition

Figure EC.4.1: Visual Illustration of Identifying the Best Arm vs. Identifying All $\varepsilon$-Best Arms.

Note. (a) Stopping occurs when the confidence region $\mathcal{C}_{t,\delta}$ for the estimated parameter $\hat{\boldsymbol{\theta}}_t$ contracts entirely within one of the three decision regions $M_i$ at a certain time step $t$. The boundaries between regions are defined by the hyperplanes $\vartheta^\top(\boldsymbol{a}_i - \boldsymbol{a}_j) = 0$. Each dot represents an arm. (b) In the case of identifying all $\varepsilon$-best arms, the regions overlap. (c) Due to these overlaps, the space is partitioned into seven distinct decision regions, increasing the difficulty of identification.

The stopping condition is formulated as a hypothesis test conducted as data is collected, which can be interpreted as the process of the parameter confidence region contracting into one of the decision regions (i.e., the sets of parameters that yield the same decision). A more detailed version of Figure 1 is shown in Figure EC.4.1, which illustrates the core idea of the stopping condition for identifying all $\varepsilon$-best arms in linear bandits. The key distinction between identifying all $\varepsilon$-best arms and identifying the single best arm lies in how the decision regions are partitioned.

Figure EC.4.1(a) illustrates the best arm identification process in linear bandits, where $\boldsymbol{a}^* = \boldsymbol{a}^*(\boldsymbol{\mu})$ represents the arm with the largest mean value for each bandit instance $\boldsymbol{\mu}$. Let $M_i = \{\vartheta \in \mathbb{R}^d \mid \boldsymbol{a}_i = \boldsymbol{a}^*\}$ be the set of parameters $\boldsymbol{\theta}$ for which $\boldsymbol{a}_i$ ($i = 1, 2, 3$) is the optimal arm. Each $M_i$ forms a cone defined by the intersection of half-spaces.

Figure EC.4.1(b) represents an intermediate step, demonstrating the transition from best arm identification to identifying all $\varepsilon$-best arms. Let $M_{\boldsymbol{a}_i} = \{\vartheta \in \mathbb{R}^d \mid i \in G_\varepsilon(\boldsymbol{\mu})\}$ be the set of parameters $\boldsymbol{\theta}$ that include arm $i$ in the set $G_\varepsilon(\boldsymbol{\mu})$. Each $M_{\boldsymbol{a}_i}$ is similarly defined by the intersection of half-spaces. The overlap of these three regions forms the decision regions $M_i$ ($i = 1, 2, \ldots, 7$) in Figure EC.4.1(c), which correspond to the seven distinct types of $\varepsilon$-best arm sets. Moreover, the BAI process in (a) is a special case of the $\varepsilon$-best arms identification in (c), arising as the gap $\varepsilon$ approaches 0. The following statement explains in detail how the decision regions in Figure EC.4.1(c) are constructed.

In a ๐‘‘ -dimensional Euclidean space โ„ ๐‘‘ , hyperplanes ๐‘™ ๐‘– , ๐‘— can be defined for any pair of arms, partitioning the space into the following half-spaces:

๐ป ๐‘– , ๐‘— +

{ ๐œ— โˆˆ โ„ ๐‘‘ โˆฃ ( ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— ) โŠค โ€‹ ๐œ—

๐œ€ } ,

(EC.4.1)

๐ป ๐‘– , ๐‘— โˆ’

{ ๐œ— โˆˆ โ„ ๐‘‘ โˆฃ ( ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— ) โŠค โ€‹ ๐œ— โ‰ค ๐œ€ } .

(EC.4.2)

The hyperplane $l_{i,j}$, which separates the half-spaces $H_{i,j}^{+}$ and $H_{i,j}^{-}$, is perpendicular to the direction vector $\boldsymbol{a}_i - \boldsymbol{a}_j$. The intersections of these half-spaces, of the form $\bigcap_{i,j \in [K],\, j \neq i} H_{i,j}$, partition the space into distinct regions. Each region corresponds to a solution set, representing all $\varepsilon$-best arms if the true parameter $\boldsymbol{\theta}$ lies within that region. As the gap $\varepsilon$ approaches 0, the hyperplanes $l_{i,j}$ on both sides of the decision boundaries move closer together, causing some decision regions in Figure EC.4.1(c) to shrink until they vanish. The relationship between the true parameter $\boldsymbol{\theta}$ and the half-spaces determines which arms belong to the good set $G_\varepsilon(\boldsymbol{\mu})$.

For the case of three arms, the space is divided into three overlapping regions, as shown in Figure EC.4.1(b). These regions further generate seven ($2^3 - 1 = 7$) decision regions, denoted $M_1$ through $M_7$, which are summarized in Table EC.4.1.

Table EC.4.1: Three Overlapping Regions and Seven Decision Regions in the Case of Three Arms

| Decision Region | Set Expression | $\varepsilon$-Best Arms |
| --- | --- | --- |
| $M_1$ | $M_{a_1} \cap M_{a_2}^c \cap M_{a_3}^c$ | $\{1\}$ |
| $M_2$ | $M_{a_1} \cap M_{a_2}^c \cap M_{a_3}$ | $\{1, 3\}$ |
| $M_3$ | $M_{a_1}^c \cap M_{a_2}^c \cap M_{a_3}$ | $\{3\}$ |
| $M_4$ | $M_{a_1}^c \cap M_{a_2} \cap M_{a_3}$ | $\{2, 3\}$ |
| $M_5$ | $M_{a_1}^c \cap M_{a_2} \cap M_{a_3}^c$ | $\{2\}$ |
| $M_6$ | $M_{a_1} \cap M_{a_2} \cap M_{a_3}^c$ | $\{1, 2\}$ |
| $M_7$ | $M_{a_1} \cap M_{a_2} \cap M_{a_3}$ | $\{1, 2, 3\}$ |

The stopping condition verifies whether the confidence region $\mathcal{C}_{t,\delta}$ is entirely contained within a specific decision region $M_i$. It is important to note that, by the definition of the confidence region and the property that $\hat{\boldsymbol{\theta}}_t \to \boldsymbol{\theta}$ as $t \to \infty$, any algorithm that continually samples all arms will eventually meet the stopping condition.
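The seven regions in Table EC.4.1 can be enumerated directly from the definition of $G_\varepsilon(\boldsymbol{\mu})$: arm $i$ is $\varepsilon$-best when its mean $\langle\boldsymbol{\theta}, \boldsymbol{a}_i\rangle$ is within $\varepsilon$ of the largest mean. A small sketch with made-up arm features (sweeping $\boldsymbol{\theta}$ over directions visits different decision regions):

```python
import numpy as np

def eps_best_set(theta, arms, eps):
    """Return the set of eps-best arm indices (1-based, as in Table EC.4.1)."""
    means = arms @ theta
    return {i + 1 for i, m in enumerate(means) if m >= means.max() - eps}

arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])
for ang in np.linspace(0, np.pi / 2, 5):
    theta = np.array([np.cos(ang), np.sin(ang)])
    print(np.round(theta, 2), eps_best_set(theta, arms, eps=0.1))
```

Each distinct output set corresponds to one decision region $M_1, \ldots, M_7$; enlarging $\varepsilon$ grows the overlap regions, while $\varepsilon \to 0$ recovers the three BAI cones of Figure EC.4.1(a).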

EC.4.2 Optimal Allocation

The goal of the sampling policy is to construct an allocation sequence that drives the confidence set $\mathcal{C}_{t,\delta}$ into the optimal region $M^*$ as efficiently as possible. Geometrically, this entails selecting arms that cause $\mathcal{C}_{t,\delta}$ to contract into the optimal cone $M^*$ with minimal sampling effort. The condition $\mathcal{C}_{t,\delta} \subseteq M^*$ can be expressed as

$$\text{For all } i \in G_\varepsilon(\boldsymbol{\mu}),\ j \neq i,\ m \notin G_\varepsilon(\boldsymbol{\mu}) \text{ and } \forall \vartheta \in \mathcal{C}_{t,\delta}, \text{ we have } \vartheta \in H_{j,i}^{-} \text{ and } \vartheta \in H_{1,m}^{+}. \tag{EC.4.3}$$

In words, every parameter vector $\vartheta$ that remains plausible must preserve all required pairwise orderings: no $\varepsilon$-optimal arm $i$ can be overtaken by any rival $j$, and the best arm 1 must stay ahead of every suboptimal arm $m$. Equivalently, no $\vartheta \in \mathcal{C}_{t,\delta}$ is allowed to flip these comparisons.

The relations $\vartheta \in H_{j,i}^{-}$ and $\vartheta \in H_{1,m}^{+}$ are equivalent to the following inequalities, obtained by adding terms to both sides of inequalities (EC.4.1) and (EC.4.2) and reorganizing:

$$\begin{cases} (\boldsymbol{a}_i - \boldsymbol{a}_j)^\top(\boldsymbol{\theta} - \vartheta) \leq (\boldsymbol{a}_i - \boldsymbol{a}_j)^\top \boldsymbol{\theta} + \varepsilon \\ (\boldsymbol{a}_1 - \boldsymbol{a}_m)^\top(\boldsymbol{\theta} - \vartheta) < (\boldsymbol{a}_1 - \boldsymbol{a}_m)^\top \boldsymbol{\theta} - \varepsilon. \end{cases} \tag{EC.4.4}$$

Now, we focus on the confidence region, which can be constructed using the Cauchy-Schwarz inequality and the definition of the confidence ellipsoid for the parameter $\boldsymbol{\theta}$ in equation (10):

๐’ž ๐‘ก , ๐›ฟ

{ ๐œ— โˆˆ โ„ ๐‘‘ | โˆ€ ๐‘– โˆˆ ๐บ ๐œ€ โ€‹ ( ๐ ) , ๐‘— โ‰  ๐‘– , ๐‘š โˆ‰ ๐บ ๐œ€ โ€‹ ( ๐ ) , { ( ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— ) โŠค โ€‹ ( ๐œฝ โˆ’ ๐œ— ) โ‰ค โ€– ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— โ€– ๐‘ฝ ๐‘ก โˆ’ 1 โ€‹ ๐ต ๐‘ก , ๐›ฟ

( ๐’‚ 1 โˆ’ ๐’‚ ๐‘š ) โŠค โ€‹ ( ๐œฝ โˆ’ ๐œ— ) โ‰ค โ€– ๐’‚ 1 โˆ’ ๐’‚ ๐‘š โ€– ๐‘ฝ ๐‘ก โˆ’ 1 โ€‹ ๐ต ๐‘ก , ๐›ฟ

} ,

(EC.4.5)

where ๐‘ฝ ๐‘ก is the information matrix as defined in equation (9) and the confidence bound for parameter ๐œฝ , i.e., ๐ต ๐‘ก , ๐›ฟ , can either be a fixed confidence bound as shown in Proposition 2.3 or a looser adaptive confidence bound introduced in Abbasi-Yadkori et al. (2011). The stopping condition ๐’ž ๐‘ก , ๐›ฟ โІ ๐‘€ โˆ— can thus be reformulated. For each ๐‘– โˆˆ ๐บ ๐œ€ โ€‹ ( ๐ ) , ๐‘— โ‰  ๐‘– , ๐‘š โˆ‰ ๐บ ๐œ€ โ€‹ ( ๐ ) , we have

{ โ€– ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— โ€– ๐‘ฝ ๐‘ก โˆ’ 1 โ€‹ ๐ต ๐‘ก , ๐›ฟ โ‰ค ( ๐’‚ ๐‘– โˆ’ ๐’‚ ๐‘— ) โŠค โ€‹ ๐œฝ + ๐œ€

โ€– ๐’‚ 1 โˆ’ ๐’‚ ๐‘š โ€– ๐‘ฝ ๐‘ก โˆ’ 1 โ€‹ ๐ต ๐‘ก , ๐›ฟ โ‰ค ( ๐’‚ 1 โˆ’ ๐’‚ ๐‘š ) โŠค โ€‹ ๐œฝ โˆ’ ๐œ€ .

(EC.4.6)

If equation (EC.4.6) holds, then for any $\vartheta \in \mathcal{C}_{t,\delta}$, equation (EC.4.4) also holds, implying that $\mathcal{C}_{t,\delta} \subseteq M^*$. Given that $(\boldsymbol{a}_i - \boldsymbol{a}_j)^\top \boldsymbol{\theta} + \varepsilon > 0$ and $(\boldsymbol{a}_1 - \boldsymbol{a}_m)^\top \boldsymbol{\theta} - \varepsilon > 0$, rearranging (EC.4.6) determines the oracle allocation strategy as follows:

$$\{\boldsymbol{a}_{A_n}\}^* = \mathop{\arg\min}_{\{\boldsymbol{a}_{A_n}\}} \max_{i \in G_\varepsilon(\boldsymbol{\mu}),\, j \neq i,\, m \notin G_\varepsilon(\boldsymbol{\mu})} \max\left\{ \frac{2\|\boldsymbol{a}_i - \boldsymbol{a}_j\|^2_{\boldsymbol{V}_t^{-1}}}{(\boldsymbol{a}_i^\top \boldsymbol{\theta} - \boldsymbol{a}_j^\top \boldsymbol{\theta} + \varepsilon)^2}, \frac{2\|\boldsymbol{a}_1 - \boldsymbol{a}_m\|^2_{\boldsymbol{V}_t^{-1}}}{(\boldsymbol{a}_1^\top \boldsymbol{\theta} - \boldsymbol{a}_m^\top \boldsymbol{\theta} - \varepsilon)^2} \right\}, \tag{EC.4.7}$$

where { ๐’‚ ๐ด ๐‘› }

( ๐’‚ ๐ด 1 , ๐’‚ ๐ด 1 , โ€ฆ , ๐’‚ ๐ด ๐‘› ) โˆˆ ๐’œ ๐‘› is a sequence of sampled arms and { ๐’‚ ๐ด ๐‘› } โˆ— is the oracle allocation strategy5. However, it is more convenient to demonstrate the sample complexity of the problem in terms of the continuous allocation proportion ๐’‘ instead of the discrete allocation sequence { ๐’‚ ๐ด ๐‘› } . Then, we can have the following optimal allocation proportion.

$$\boldsymbol{p}^* = \mathop{\arg\min}_{\boldsymbol{p} \in \mathcal{S}_K} \max_{i \in G_\varepsilon(\boldsymbol{\mu}),\, j \neq i,\, m \notin G_\varepsilon(\boldsymbol{\mu})} \max\left\{ \frac{2\|\boldsymbol{a}_i - \boldsymbol{a}_j\|^2_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}}{(\boldsymbol{a}_i^\top \boldsymbol{\theta} - \boldsymbol{a}_j^\top \boldsymbol{\theta} + \varepsilon)^2}, \frac{2\|\boldsymbol{a}_1 - \boldsymbol{a}_m\|^2_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}}{(\boldsymbol{a}_1^\top \boldsymbol{\theta} - \boldsymbol{a}_m^\top \boldsymbol{\theta} - \varepsilon)^2} \right\}, \tag{EC.4.8}$$

where ๐’ฎ ๐พ denotes the ๐พ -dimensional probability simplex, and ๐‘ฝ ๐’‘

โˆ‘ ๐‘–

1 ๐พ ๐‘ ๐‘– โ€‹ ๐’‚ ๐‘– โ€‹ ๐’‚ ๐‘– โŠค is the weighted information matrix, analogous to the Fisher information matrix (Chaloner and Verdinelli 1995). The intuition behind this optimal allocation strategy is as follows: to satisfy the inequality in (EC.4.6) as quickly as possible, the ratio of the left-hand side to the right-hand side should be minimized, leading to the outer minimization operation over the allocation probability ๐’‘ . The middle maximization operation accounts for the fact that the inequalities in (EC.4.6) must hold for each possible triple ( ๐‘– , ๐‘— , ๐‘š ) , which specifies the culprit set ๐’ณ

๐’ณ โ€‹ ( ๐ ) (see Proposition 3.1 for a formal definition). Furthermore, since both inequalities in (EC.4.6) need to be satisfied, we take inner maximization operations to enforce this requirement.
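The allocation in (EC.4.8) can be approximated numerically for a toy instance. The sketch below evaluates the worst-case ratio for each allocation on a coarse simplex grid; the arms, parameter, and grid solver are illustrative assumptions rather than the paper's implementation:

```python
import itertools
import numpy as np

def alloc_objective(p, arms, theta, eps):
    """Worst-case ratio in (EC.4.8) for allocation p (smaller is better)."""
    V = sum(pi * np.outer(a, a) for pi, a in zip(p, arms))
    Vinv = np.linalg.inv(V)
    means = np.array([a @ theta for a in arms])
    good = {i for i, m in enumerate(means) if m >= means.max() - eps}
    star = int(np.argmax(means))
    worst = 0.0
    for i in good:                       # eps-best arm i vs. any rival j
        for j in range(len(arms)):
            if j == i:
                continue
            y = arms[i] - arms[j]
            worst = max(worst, 2 * (y @ Vinv @ y) / (means[i] - means[j] + eps) ** 2)
    for m in range(len(arms)):           # best arm vs. non-eps-best arm m
        if m in good:
            continue
        y = arms[star] - arms[m]
        worst = max(worst, 2 * (y @ Vinv @ y) / (means[star] - means[m] - eps) ** 2)
    return worst

arms = [np.array(a) for a in ([1.0, 0.0], [0.95, 0.2], [0.0, 1.0])]
theta, eps = np.array([1.0, 0.1]), 0.05
best = min((alloc_objective(np.array(q) / 20, arms, theta, eps), q)
           for q in itertools.product(range(1, 19), repeat=3) if sum(q) == 20)
print(best[0], np.array(best[1]) / 20)
```

The minimizing allocation concentrates samples on directions separating the nearly tied arms, mirroring the $\mathcal{XY}$-design intuition of Appendix EC.3.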

EC.4.3 Proof of the Lower Bound in Theorem 3.2

Proof EC.1

Proof. Deriving the lower bound requires constructing the largest possible alternative set and identifying the most challenging instance for distinguishing the arms. Specifically, to construct an alternative problem instance possessing a set of $\varepsilon$-best arms distinct from that of the original instance $\boldsymbol{\mu}$, we systematically perturb the expected rewards of specific arms. This can be achieved through two constructions, detailed further in Figure EC.4.2:

- Lowering an $\varepsilon$-Best Arm: This approach decreases the expected reward of a currently $\varepsilon$-best arm $i$, while simultaneously increasing the expected reward of another arm $j$.
- Elevating a Non-$\varepsilon$-Best Arm: This approach increases the expected reward of a non-$\varepsilon$-best arm $m$, while concurrently reducing the expected rewards of the $\ell$ top-performing arms.

Figure EC.4.2: Illustration of Constructing the Alternative Set for the All $\varepsilon$-Best Arms Identification Lower Bound

Note. (a) Transforming an arm within $G_\varepsilon(\boldsymbol{\mu})$ into a non-$\varepsilon$-best arm by decreasing the mean of an $\varepsilon$-best arm $i$ while increasing the mean of another arm $j$. (b) Transforming an arm outside $G_\varepsilon(\boldsymbol{\mu})$ into an $\varepsilon$-best arm by increasing the mean of a non-$\varepsilon$-best arm $m$ and simultaneously decreasing the means of higher-performing arms.

For all ๐œ€ -best arms identification in linear bandits, we establish the correct answer, the culprit set, and the alternative set as follows. The general idea of the proof is to builds these alternatives explicitly and shows which one is the hardest to distinguish. Recall that ๐œ‡ ๐‘–

โŸจ ๐›‰ , ๐š ๐‘– โŸฉ . For notational simplicity, we use ๐› to represent the mean vector induced by true parameter ๐›‰ .

- The correct answer: $G_\varepsilon(\boldsymbol{\mu}) = \{i \in [K] : \langle \boldsymbol{\theta}, \boldsymbol{a}_i \rangle \geq \max_j \langle \boldsymbol{\theta}, \boldsymbol{a}_j \rangle - \varepsilon\}$. These are the $\varepsilon$-best arms under the true parameter.
- The culprit set: $\mathcal{X}(\boldsymbol{\mu}) = \{(i, j, m, \ell) : i \in G_\varepsilon(\boldsymbol{\mu}),\ j \neq i,\ m \notin G_\varepsilon(\boldsymbol{\mu}),\ \ell \in \{1, 2, \ldots, m-1\}\}$. The culprit set identifies potential sources of error; intuitively, each tuple corresponds to the two types of mistakes mentioned above that can affect the output.
- The alternative set: $\mathrm{Alt}(\boldsymbol{\mu}) = \cup_{x \in \mathcal{X}(\boldsymbol{\mu})} \mathrm{Alt}_x(\boldsymbol{\mu})$, where the form of $\mathrm{Alt}_x(\boldsymbol{\mu})$ is given by

$$\mathrm{Alt}_x(\boldsymbol{\mu}) = \mathrm{Alt}_{i,j}(\boldsymbol{\mu}) \cup \mathrm{Alt}_{m,\ell}(\boldsymbol{\mu}) \ \text{ for all } x \in \mathcal{X}(\boldsymbol{\mu}). \tag{EC.4.9}$$

So for each culprit $x$, we build two kinds of alternative instances, each capable of flipping the answer. The two parts of the alternative set can be expressed as

$$\mathrm{Alt}_{i,j}(\boldsymbol{\mu}) = \{\vartheta : \langle \vartheta, \boldsymbol{a}_i - \boldsymbol{a}_j \rangle < -\varepsilon\} \ \text{ for all } x \in \mathcal{X}(\boldsymbol{\mu}) \tag{EC.4.10}$$

and

$$\mathrm{Alt}_{m,\ell}(\boldsymbol{\mu}) = \{\vartheta : \vartheta^\top \boldsymbol{a}_1 = \cdots = \vartheta^\top \boldsymbol{a}_\ell = \vartheta^\top \boldsymbol{a}_m + \varepsilon \geq \vartheta^\top \boldsymbol{a}_{\ell+1}\} \ \text{ for all } x \in \mathcal{X}(\boldsymbol{\mu}), \tag{EC.4.11}$$

consistent with the constructions shown in Figure EC.4.2.

The first part $\mathrm{Alt}_{i,j}(\boldsymbol{\mu})$ forces a good arm $i$ to become more than $\varepsilon$ worse than arm $j$, causing $i$ to fall out of the $\varepsilon$-best set. In contrast, the second part ensures that a suboptimal arm is no more than $\varepsilon$ worse than the top $\ell$ arms with identical mean values, thereby including it in the $\varepsilon$-best set.

For a given culprit $x = (i, j, m, \ell)$, the first component of the alternative set can be written as

$$\mathrm{Alt}_{i,j}(\boldsymbol{\mu}) = \left\{\vartheta_{i,j}(\varepsilon, \boldsymbol{p}, \alpha) \,\middle|\, \vartheta_{i,j}(\varepsilon, \boldsymbol{p}, \alpha) = \boldsymbol{\theta} - \frac{\boldsymbol{y}_{i,j}^\top \boldsymbol{\theta} + \varepsilon + \alpha}{\|\boldsymbol{y}_{i,j}\|^2_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}}\, \boldsymbol{V}_{\boldsymbol{p}}^{-1} \boldsymbol{y}_{i,j}\right\}, \tag{EC.4.12}$$

where $\boldsymbol{V}_{\boldsymbol{p}} = \sum_{i=1}^{K} p_i \boldsymbol{a}_i \boldsymbol{a}_i^\top$ is related to the Fisher information matrix (Chaloner and Verdinelli 1995), $\boldsymbol{y}_{i,j} = \boldsymbol{a}_i - \boldsymbol{a}_j$, and $\alpha > 0$ is a perturbation parameter. The form of this alternative set is derived from the solution to the following optimization problem:

$$\mathop{\arg\min}_{\vartheta \in \mathbb{R}^d} \ \|\vartheta - \boldsymbol{\theta}\|^2_{\boldsymbol{V}_{\boldsymbol{p}}} \tag{EC.4.13}$$

$$\text{s.t.} \quad \boldsymbol{y}_{i,j}^\top \vartheta = -\varepsilon - \alpha. \tag{EC.4.14}$$

Here, $\alpha$ is introduced to construct the specific alternative set, providing an explicit expression for the alternative parameter $\vartheta$. By letting $\alpha \to 0$, we realize the infimum in equation (EC.2.5) and equation (EC.2.6). Then, we have

$$\boldsymbol{y}_{i,j}^\top \vartheta_{i,j}(\varepsilon, \boldsymbol{p}, \alpha) = -\varepsilon - \alpha < -\varepsilon, \tag{EC.4.15}$$

which satisfies the condition for being a parameter in an alternative set as defined in equation (EC.4.10). Under the Gaussian distribution assumption, i.e., $\mu_i \sim \mathcal{N}(\boldsymbol{a}_i^\top \boldsymbol{\theta}, 1)$, the KL divergence between the mean value associated with the true parameter $\boldsymbol{\theta}$ and that associated with the alternative parameter $\vartheta_{i,j}$ is

$$\mathrm{KL}\left(\boldsymbol{a}_i^\top \boldsymbol{\theta}, \boldsymbol{a}_i^\top \vartheta_{i,j}\right) = \frac{\left(\boldsymbol{a}_i^\top \left(\boldsymbol{\theta} - \vartheta_{i,j}(\varepsilon, \boldsymbol{p}, \alpha)\right)\right)^2}{2 \cdot 1^2} = \frac{(\boldsymbol{y}_{i,j}^\top \boldsymbol{\theta} + \varepsilon + \alpha)^2}{2\left(\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2\right)^2} \, \boldsymbol{y}_{i,j}^\top \boldsymbol{V}_{\boldsymbol{p}}^{-1} \boldsymbol{a}_i \boldsymbol{a}_i^\top \boldsymbol{V}_{\boldsymbol{p}}^{-1} \boldsymbol{y}_{i,j}. \tag{EC.4.16}$$

The last equation follows from substituting the expression for $\vartheta_{i,j}$ given in (EC.4.12). Then, by Proposition 3.1 and the definition of the $C_x$ function in equation (EC.2.5), the lower bound can be expressed as

$$\begin{aligned} \frac{\mathbb{E}_{\boldsymbol{\mu}}[\tau_\delta]}{\log(1/2.4\delta)} &\ge \min_{\boldsymbol{p} \in S_K} \max_{\vartheta \in \mathrm{Alt}(\boldsymbol{\mu})} \frac{1}{\sum_{n=1}^{K} p_n \, \mathrm{KL}(\boldsymbol{a}_n^\top \boldsymbol{\theta}, \boldsymbol{a}_n^\top \vartheta)} \\ &\ge \min_{\boldsymbol{p} \in S_K} \max_{x \in \mathcal{X}} \sup_{\alpha > 0} \frac{1}{\sum_{n=1}^{K} p_n \, \mathrm{KL}(\boldsymbol{a}_n^\top \boldsymbol{\theta}, \boldsymbol{a}_n^\top \vartheta_{i,j}(\varepsilon, \boldsymbol{p}, \alpha))} \\ &= \min_{\boldsymbol{p} \in S_K} \max_{x \in \mathcal{X}} \frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{i,j}^\top \boldsymbol{\theta} + \varepsilon)^2}. \end{aligned} \tag{EC.4.17}$$

Note that we let $\alpha \to 0$ to establish the result by realizing $\sup_{\alpha > 0}$. From this lower bound, we can also define the $C_x$ function for all $\varepsilon$-best arms identification in linear bandits, which is

$$C_{i,j}(\boldsymbol{p}) = \frac{(\boldsymbol{y}_{i,j}^\top \boldsymbol{\theta} + \varepsilon)^2}{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}. \tag{EC.4.18}$$
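The closed forms above are easy to check numerically. The following sketch builds the projected alternative parameter of (EC.4.12) on a toy 2-dimensional instance and verifies (EC.4.15) and the positivity of $C_{i,j}(\boldsymbol{p})$ in (EC.4.18); the arm features, $\boldsymbol{\theta}$, allocation $\boldsymbol{p}$, and the values of $\varepsilon$ and $\alpha$ are illustrative choices, not values from the paper.

```python
# Numerical sketch of (EC.4.12), (EC.4.15), and (EC.4.18) on a toy instance.

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inv2(M):  # inverse of a 2x2 matrix
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]

arms = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.6)]    # hypothetical a_1, a_2, a_3
theta = (1.0, 0.4)
p = [1/3, 1/3, 1/3]                            # allocation on the simplex S_K
eps, alpha = 0.1, 1e-3

# V_p = sum_i p_i a_i a_i^T and its inverse
V = [[sum(pi*a[r]*a[c] for pi, a in zip(p, arms)) for c in range(2)]
     for r in range(2)]
Vinv = inv2(V)

# Alternative parameter for the culprit pair (i, j) = (arm 1, arm 2)
y = [arms[0][0] - arms[1][0], arms[0][1] - arms[1][1]]   # y_{i,j} = a_i - a_j
Viy = mat_vec(Vinv, y)
norm2 = dot(y, Viy)                                      # ||y||^2_{V_p^{-1}}
scale = (dot(y, theta) + eps + alpha) / norm2
vartheta = [theta[0] - scale*Viy[0], theta[1] - scale*Viy[1]]   # (EC.4.12)

# (EC.4.15): y^T vartheta = -eps - alpha, so vartheta lies in Alt_{i,j}
print(round(dot(y, vartheta), 9))            # -(eps + alpha) = -0.101

# (EC.4.18): the information function for this culprit
C_ij = (dot(y, theta) + eps)**2 / (2*norm2)
print(C_ij > 0)
```

Letting `alpha` shrink toward zero reproduces the limiting argument used to realize the infimum.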

For the second part, i.e., $\mathrm{Alt}_{m,\ell}(\boldsymbol{\mu})$, we assume that the mean value of arm 1 (i.e., the best arm) remains fixed. This assumption partially sacrifices the completeness of the alternative set, but it allows the alternative set to be expressed in an explicit and symmetric form, and the resulting bound remains tight.

Consequently, the culprit set is redefined as $\mathcal{X}(\boldsymbol{\mu}) = \{(i, j, m) : i \in G_\varepsilon(\boldsymbol{\mu}), j \neq i, m \notin G_\varepsilon(\boldsymbol{\mu})\}$, and the alternative set can be decomposed as $\mathrm{Alt}_x(\boldsymbol{\mu}) = \mathrm{Alt}_{i,j}(\boldsymbol{\mu}) \cup \mathrm{Alt}_m(\boldsymbol{\mu})$ for a given culprit $x = (i, j, m)$. Different from equation (EC.4.11), the second part of the alternative set becomes

$$\mathrm{Alt}_m(\boldsymbol{\mu}) = \{\vartheta : \langle \vartheta, \boldsymbol{a}_1 - \boldsymbol{a}_m \rangle < \varepsilon\} \quad \text{for all } x \in \mathcal{X}(\boldsymbol{\mu}). \tag{EC.4.19}$$

Similarly, $\mathrm{Alt}_m(\boldsymbol{\mu})$ can be constructed as the set in equation (EC.4.12), given by

$$\mathrm{Alt}_m(\boldsymbol{\mu}) = \left\{ \vartheta_m(\varepsilon, \boldsymbol{p}, \alpha) \,\middle|\, \vartheta_m(\varepsilon, \boldsymbol{p}, \alpha) = \boldsymbol{\theta} - \frac{\boldsymbol{y}_m^\top \boldsymbol{\theta} - \varepsilon + \alpha}{\|\boldsymbol{y}_m\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2} \boldsymbol{V}_{\boldsymbol{p}}^{-1} \boldsymbol{y}_m \right\}, \tag{EC.4.20}$$

where $\boldsymbol{y}_m = \boldsymbol{a}_1 - \boldsymbol{a}_m$ and $\alpha > 0$. Then we have

$$\boldsymbol{y}_m^\top \vartheta_m(\varepsilon, \boldsymbol{p}, \alpha) = \varepsilon - \alpha < \varepsilon. \tag{EC.4.21}$$

Hence, by following a derivation analogous to that of the first part, we obtain

$$C_m(\boldsymbol{p}) = \frac{(\boldsymbol{y}_m^\top \boldsymbol{\theta} - \varepsilon)^2}{2\|\boldsymbol{y}_m\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}. \tag{EC.4.22}$$

Deriving the tightest lower bound thus amounts to constructing alternative bandit instances that thoroughly challenge each $\varepsilon$-best arm configuration. The minimum of the information function $C_x$ over all culprits $x \in \mathcal{X}(\boldsymbol{\mu})$ is then given by

$$\min\left\{ \min_{x \in \mathcal{X}} C_{i,j}(\boldsymbol{p}), \; \min_{x \in \mathcal{X}} C_m(\boldsymbol{p}) \right\}. \tag{EC.4.23}$$

Then, by Proposition 3.1 and the definition of the $C_x$ function, combining equation (EC.4.18) and equation (EC.4.22), the final lower bound can be expressed as

$$\frac{\mathbb{E}_{\boldsymbol{\mu}}[\tau_\delta]}{\log(1/2.4\delta)} \ge \min_{\boldsymbol{p} \in S_K} \max_{(i,j,m) \in \mathcal{X}} \max\left\{ \frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{i,j}^\top \boldsymbol{\theta} + \varepsilon)^2}, \; \frac{2\|\boldsymbol{y}_m\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_m^\top \boldsymbol{\theta} - \varepsilon)^2} \right\} \tag{EC.4.24}$$

$$= \min_{\boldsymbol{p} \in S_K} \max_{(i,j,m) \in \mathcal{X}} \max\left\{ \frac{2\|\boldsymbol{a}_i - \boldsymbol{a}_j\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{a}_i^\top \boldsymbol{\theta} - \boldsymbol{a}_j^\top \boldsymbol{\theta} + \varepsilon)^2}, \; \frac{2\|\boldsymbol{a}_1 - \boldsymbol{a}_m\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{a}_1^\top \boldsymbol{\theta} - \boldsymbol{a}_m^\top \boldsymbol{\theta} - \varepsilon)^2} \right\}. \tag{EC.4.25}$$

\Halmos

Appendix EC.5 Proof of Theorem 4.1
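The min–max expression above can be evaluated directly on a small instance. The sketch below does a coarse grid search over the allocation simplex for $K = 3$ arms in $d = 2$ and reports the resulting instance-dependent constant; the arms, $\boldsymbol{\theta}$, $\varepsilon$, and the grid resolution are all hypothetical choices for demonstration, not the paper's experimental setup.

```python
# Illustrative evaluation of the lower-bound objective in (EC.4.25).

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def inv2(M):
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]

def quad(Minv, v):  # v^T Minv v
    return (v[0]*(Minv[0][0]*v[0] + Minv[0][1]*v[1])
            + v[1]*(Minv[1][0]*v[0] + Minv[1][1]*v[1]))

arms = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.6)]   # hypothetical arm features
theta = (1.0, 0.2)
mu = [dot(a, theta) for a in arms]            # means (1.0, 0.2, 0.92)
eps = 0.1
good = [i for i in range(3) if mu[i] >= max(mu) - eps]   # eps-best arms
bad = [i for i in range(3) if i not in good]

def objective(p):
    """Inner max over culprits (i, j, m) of the two terms in (EC.4.25)."""
    V = [[sum(pi*a[r]*a[c] for pi, a in zip(p, arms)) for c in range(2)]
         for r in range(2)]
    Vinv = inv2(V)
    worst = 0.0
    for i in good:                      # first kind: push good arm i below j
        for j in range(3):
            if j == i:
                continue
            y = [arms[i][0]-arms[j][0], arms[i][1]-arms[j][1]]
            worst = max(worst, 2*quad(Vinv, y) / (dot(y, theta) + eps)**2)
    for m in bad:                       # second kind: pull bad arm m within eps
        y = [arms[0][0]-arms[m][0], arms[0][1]-arms[m][1]]  # arm 0 is best
        worst = max(worst, 2*quad(Vinv, y) / (dot(y, theta) - eps)**2)
    return worst

# Coarse outer minimization over allocations p on the simplex
grid = [k/10 for k in range(1, 9)]
best = min(objective((p1, p2, 1 - p1 - p2))
           for p1 in grid for p2 in grid if p1 + p2 < 1 - 1e-9)
print(best > 0)   # E[tau_delta] >= best * log(1/(2.4*delta))
```

A finer grid (or a convex solver) would tighten the outer minimization, but the coarse search already exhibits the instance-dependent scaling of the bound.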

The proof of Theorem 4.1 consists of two parts. First, we establish the upper bound given in equation (EC.5.4). Then, we derive the final refined bound in equation (26), removing the summation in our first result.

To establish the result involving the summation, we define $R_{\max} = \min\{r : G_r = G_\varepsilon\}$ as the round in which the last $\varepsilon$-best arm is added to $G_r$. We divide the total number of samples into two phases: samples collected up to round $R_{\max}$, and those collected from round $R_{\max} + 1$ until termination (if the algorithm does not stop at $R_{\max}$). The proof proceeds in eight steps, as outlined below.

EC.5.1 Preliminary: Clean Events $\mathcal{E}_1$ and $\mathcal{E}_2$

We begin by defining two high-probability events, denoted $\mathcal{E}_1$ and $\mathcal{E}_2$, which we refer to as clean events. The first is

$$\mathcal{E}_1 = \left\{ \bigcap_{r \in \mathbb{N}} \bigcap_{i \in \mathcal{A}_I(r-1)} |\hat{\mu}_i(r) - \mu_i| \le C_{\delta/K}(r) \right\}. \tag{EC.5.1}$$

This event captures the condition that, for each round $r$, the estimated means $\hat{\mu}_i(r)$ of all arms $i$ in the active set $\mathcal{A}_I(r-1)$ lie within a confidence bound $C_{\delta/K}(r)$ of their true means $\mu_i$. It ensures uniform control over estimation errors across all arms and rounds with a prescribed confidence level.

Then, we introduce the following lemma, which provides the confidence region for the estimated parameter $\hat{\boldsymbol{\theta}}$.

Lemma EC.1 (Lattimore and Szepesvári (2020))

Let the confidence level $\delta \in (0, 1)$. For each arm $\boldsymbol{a} \in \mathcal{A}$, we have

$$\mathbb{P}\left\{ |\langle \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}, \boldsymbol{a} \rangle| \ge \sqrt{2\|\boldsymbol{a}\|_{\boldsymbol{V}_t^{-1}}^2 \log\left(\frac{2}{\delta}\right)} \right\} \le \delta, \tag{EC.5.2}$$

where $\boldsymbol{V}_t$ represents the information matrix defined in Section 2.2.

Let $\pi_r$ denote the optimal allocation proportion computed for round $r$. The corresponding sampling budget is then determined according to equation (21). Thus, we obtain the information matrix in each round, denoted $\boldsymbol{V}_r$:

$$\boldsymbol{V}_r = \sum_{\boldsymbol{a} \in \mathrm{Supp}(\pi_r)} T_r(\boldsymbol{a}) \, \boldsymbol{a}\boldsymbol{a}^\top \succeq \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) \boldsymbol{V}(\pi). \tag{EC.5.3}$$

Then, by applying Lemma EC.1 with $\delta$ replaced by $\delta/(Kr(r+1))$, we obtain that for any arm $\boldsymbol{a} \in \mathcal{A}(r-1)$, with probability at least $1 - \delta/(Kr(r+1))$,

$$\begin{aligned} |\langle \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta}, \boldsymbol{a} \rangle| &\le \sqrt{2\|\boldsymbol{a}\|_{\boldsymbol{V}_r^{-1}}^2 \log\left(\frac{2Kr(r+1)}{\delta}\right)} = \sqrt{2\,\boldsymbol{a}^\top \boldsymbol{V}_r^{-1} \boldsymbol{a} \log\left(\frac{2Kr(r+1)}{\delta}\right)} \\ &\le \sqrt{2\,\boldsymbol{a}^\top \left(\frac{\varepsilon_r^2}{2d} \cdot \frac{1}{\log\left(\frac{2Kr(r+1)}{\delta}\right)} \boldsymbol{V}(\pi)^{-1}\right) \boldsymbol{a} \log\left(\frac{2Kr(r+1)}{\delta}\right)} \le \varepsilon_r, \end{aligned} \tag{EC.5.4}$$

where the third step follows from the matrix inequality in (EC.5.3), using the auxiliary result in Lemma EC.1, and the final step is derived from Lemma EC.5.

Thus, by applying the standard result of the G-optimal design, we define the confidence radius associated with the clean event $\mathcal{E}_1$ as

$$C_{\delta/K}(r) \coloneqq \varepsilon_r. \tag{EC.5.5}$$

Then, we have

$$\begin{aligned} \mathbb{P}(\mathcal{E}_1^c) &= \mathbb{P}\left\{ \bigcup_{r \in \mathbb{N}} \bigcup_{i \in \mathcal{A}_I(r-1)} |\hat{\mu}_i(r) - \mu_i| > C_{\delta/K}(r) \right\} \le \sum_{r=1}^{\infty} \mathbb{P}\left\{ \bigcup_{i \in \mathcal{A}_I(r-1)} |\hat{\mu}_i(r) - \mu_i| > \varepsilon_r \right\} \\ &\le \sum_{r=1}^{\infty} \sum_{i=1}^{K} \frac{\delta}{Kr(r+1)} = \delta, \end{aligned} \tag{EC.5.6}$$

where the second step follows from the union bound, and the third step combines the union bound with equation (EC.5.4). Therefore, we obtain

$$\mathbb{P}(\mathcal{E}_1) \ge 1 - \delta. \tag{EC.5.7}$$
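The last equality in (EC.5.6) rests on the telescoping identity $\sum_{r \ge 1} \tfrac{1}{r(r+1)} = 1$, which is easy to verify in exact arithmetic. The $K$, $\delta$, and truncation level below are arbitrary illustrative values.

```python
# Check of the union-bound computation in (EC.5.6): the per-round, per-arm
# failure probabilities delta/(K*r*(r+1)), summed over all K arms and all
# rounds r, total exactly delta because sum_{r>=1} 1/(r(r+1)) telescopes to 1.

from fractions import Fraction

K = 5
delta = Fraction(1, 20)
R = 1000   # truncation of the infinite sum over rounds

total = sum(K * delta / (K * r * (r + 1)) for r in range(1, R + 1))

# The partial sum equals delta * R/(R+1), approaching delta as R -> infinity.
print(total == delta * Fraction(R, R + 1))   # True
print(float(delta - total) < 1e-3)           # True: remainder is delta/(R+1)
```

Exact rationals (`fractions.Fraction`) avoid any floating-point ambiguity in the telescoping identity.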

Now consider another event, $\mathcal{E}_2$, which characterizes the gaps between different arms, defined as

$$\mathcal{E}_2 = \left\{ \bigcap_{r \in \mathbb{N}} \bigcap_{i \in G_\varepsilon} \bigcap_{j \in \mathcal{A}_I(r-1)} |(\hat{\mu}_j(r) - \hat{\mu}_i(r)) - (\mu_j - \mu_i)| \le 2\varepsilon_r \right\}. \tag{EC.5.8}$$

This event ensures that the gap between the estimated mean rewards of arms $j$ and $i$ stays uniformly close to the true gap for all rounds $r$, arms $i \in G_\varepsilon$, and arms $j$ in the active set $\mathcal{A}_I(r-1)$.

By (EC.5.1), for $i, j \in \mathcal{A}_I(r-1)$, we have

$$\begin{aligned} \mathbb{P}\left\{ |(\hat{\mu}_j - \hat{\mu}_i) - (\mu_j - \mu_i)| > 2\varepsilon_r \mid \mathcal{E}_1 \right\} &\le \mathbb{P}\left\{ |\hat{\mu}_j - \mu_j| + |\hat{\mu}_i - \mu_i| > 2\varepsilon_r \mid \mathcal{E}_1 \right\} \\ &\le \mathbb{P}\left\{ |\hat{\mu}_j - \mu_j| > \varepsilon_r \mid \mathcal{E}_1 \right\} + \mathbb{P}\left\{ |\hat{\mu}_i - \mu_i| > \varepsilon_r \mid \mathcal{E}_1 \right\} = 0, \end{aligned} \tag{EC.5.9}$$

which means

$$\mathbb{P}(\mathcal{E}_2 \mid \mathcal{E}_1) = 1. \tag{EC.5.10}$$

EC.5.2 Step 1: Correctness

Recall that $G_\varepsilon$ denotes the true set of $\varepsilon$-best arms, and $G_r$ is the empirical good set identified by LinFACT-G in round $r$. Under event $\mathcal{E}_1$, we first show that if there exists a round $r$ such that $G_r \cup B_r = [K]$, then it must be that $G_r = G_\varepsilon$. This implies that under the clean event $\mathcal{E}_1$, the stopping condition of LinFACT-G ensures correct identification of $G_\varepsilon$.

Lemma EC.2

Under event $\mathcal{E}_1$, we have $G_r \subseteq G_\varepsilon$ for all rounds $r \in \mathbb{N}$.

Proof EC.3

Proof. We first show that $1 \in \mathcal{A}_I(r)$ for all $r \in \mathbb{N}$; that is, the best arm is never eliminated from the active set $\mathcal{A}(r-1)$ in any round $r$ on the event $\mathcal{E}_1$. For any arm $i$, we have

$$\hat{\mu}_1(r) + \varepsilon_r \ge \mu_1 \ge \mu_i \ge \hat{\mu}_i(r) - \varepsilon_r > \hat{\mu}_i(r) - \varepsilon_r - \varepsilon, \tag{EC.5.11}$$

which implies that $\hat{\mu}_1(r) + \varepsilon_r > \max_{i \in \mathcal{A}_I(r-1)} \hat{\mu}_i - \varepsilon_r - \varepsilon = L_r$ and $\hat{\mu}_1(r) + \varepsilon_r \ge \max_{i \in \mathcal{A}_I(r-1)} \hat{\mu}_i(r) - \varepsilon_r$. These inequalities confirm that arm 1 will not be removed from the active set $\mathcal{A}_I(r-1)$.

Secondly, we show that at all rounds $r$, $\mu_1 - \varepsilon \in [L_r, U_r]$. Since arm 1 never exits $\mathcal{A}_I(r-1)$,

$$U_r = \max_{i \in \mathcal{A}_I(r-1)} \hat{\mu}_i + \varepsilon_r - \varepsilon \ge \hat{\mu}_1(r) + \varepsilon_r - \varepsilon \ge \mu_1 - \varepsilon, \tag{EC.5.12}$$

and for any arm $i$,

$$\mu_1 - \varepsilon \ge \mu_i - \varepsilon \ge \hat{\mu}_i - \varepsilon_r - \varepsilon. \tag{EC.5.13}$$

Hence, taking the maximum over $i \in \mathcal{A}_I(r-1)$, we obtain

$$\mu_1 - \varepsilon \ge \max_{i \in \mathcal{A}_I(r-1)} \hat{\mu}_i - \varepsilon_r - \varepsilon = L_r. \tag{EC.5.14}$$

Next, we show that $G_r \subseteq G_\varepsilon$ for all $r \ge 1$. By contradiction, if $G_r \not\subseteq G_\varepsilon$, then there exist $r \in \mathbb{N}$ and $i \in G_\varepsilon^c \cap G_r$ such that

$$\mu_i \ge \hat{\mu}_i - \varepsilon_r \ge U_r \ge \mu_1 - \varepsilon > \mu_i, \tag{EC.5.15}$$

which forms a contradiction. \Halmos

Lemma EC.4

Under event $\mathcal{E}_1$, we have $B_r \subseteq G_\varepsilon^c$ for all rounds $r \in \mathbb{N}$.

Proof EC.5

Proof. Similarly, we proceed by contradiction. Consider the case that a good arm from $G_\varepsilon$ is added to $B_r$ for some round $r$. By definition, $B_0 = \emptyset$ and $B_{r-1} \subseteq B_r$ for all $r$. Then there must exist some $r \in \mathbb{N}$ and an $i \in G_\varepsilon$ such that $i \in B_r$ and $i \notin B_{r-1}$. Following line 6 of Algorithm 4.1, this occurs if and only if

$$\max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \ge 2\varepsilon_r + \varepsilon. \tag{EC.5.16}$$

On the clean event $\mathcal{E}_1$, the above implies that there exists $j \in \mathcal{A}_I(r-1)$ such that

$$\mu_j - \mu_i + 2\varepsilon_r \ge \hat{\mu}_j - \hat{\mu}_i \ge 2\varepsilon_r + \varepsilon, \tag{EC.5.17}$$

which yields $\mu_j - \mu_i \ge \varepsilon$, contradicting that $i \in G_\varepsilon$. \Halmos

Lemma EC.2 and Lemma EC.4 together show that under $\mathcal{E}_1$, $G_r \cup B_r = [K]$ implies $G_r = G_\varepsilon$ and $B_r = G_\varepsilon^c$. Since $\mathbb{P}\{\mathcal{E}_1\} \ge 1 - \delta$, if LinFACT terminates, it outputs the correct decision with probability at least $1 - \delta$. This establishes the correctness of the algorithm's stopping rule; the remaining parts focus on bounding the sample complexity.

EC.5.3 Step 2: Total Sample Count

To bound the expected sampling budget, we decompose the total number of samples into two parts: the budget used before the round in which the last arm is added to the good set $G_r$, and the budget used from that round until termination. The total number of samples drawn by the algorithm can thus be represented as

$$T \le \sum_{r=1}^{\infty} \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) = \sum_{r=1}^{\infty} \mathbb{1}\left[G_{r-1} \neq G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \tag{EC.5.18}$$

$$+ \sum_{r=1}^{\infty} \mathbb{1}\left[G_{r-1} = G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}). \tag{EC.5.19}$$

In the following parts, we bound these two terms separately. We begin by analyzing a single arm $i \in G_\varepsilon$, tracking which sets it belongs to as the rounds progress.

For $i \in G_\varepsilon$, let $R_i$ denote the number of rounds in which arm $i$ is sampled before being added to $G_r$ in line 8 of Algorithm 4.1. For $i \in G_\varepsilon^c$, let $R_i$ denote the number of rounds in which arm $i$ is sampled before being removed from $\mathcal{A}_I(r-1)$ and added to $B_r$ in line 6 of Algorithm 4.1. Then, by definition, $R_i$ can be expressed as

$$R_i = \min\left\{ r : \begin{cases} i \in G_r & \text{if } i \in G_\varepsilon \\ i \notin \mathcal{A}_I(r) & \text{if } i \in G_\varepsilon^c \end{cases} \right\} = \min\left\{ r : \begin{cases} \hat{\mu}_i - \varepsilon_r \ge U_r & \text{if } i \in G_\varepsilon \\ \hat{\mu}_i + \varepsilon_r \le L_r & \text{if } i \in G_\varepsilon^c \end{cases} \right\}. \tag{EC.5.20}$$

Bound $R_i$.

We define a helper function $h(x) = \log_2(1/|x|)$ to facilitate the proof. It can be observed that in round $r$, if $r \ge h(x)$, then $\varepsilon_r = 2^{-r} \le |x|$.

Lemma EC.6

For any $i \in G_\varepsilon$, we have $R_i \le \lceil h(0.25(\varepsilon - \Delta_i)) \rceil$, where $\Delta_i = \mu_1 - \mu_i$.

Proof EC.7

Proof. Note that for $i \in G_\varepsilon$, the inequality $4\varepsilon_r < \mu_i - (\mu_1 - \varepsilon)$ holds once $r > h(0.25(\varepsilon - \Delta_i))$. This implies that for all $j \in \mathcal{A}_I(r-1)$,

$$\hat{\mu}_i - \varepsilon_r \ge \mu_i - 2\varepsilon_r > \mu_1 + 2\varepsilon_r - \varepsilon \ge \mu_j + 2\varepsilon_r - \varepsilon \ge \hat{\mu}_j + \varepsilon_r - \varepsilon. \tag{EC.5.21}$$

Thus, in particular, $\hat{\mu}_i - \varepsilon_r > \max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j(r) + \varepsilon_r - \varepsilon = U_r$. Therefore, we conclude that $R_i \le \lceil h(0.25(\varepsilon - \Delta_i)) \rceil$. \Halmos

After determining the latest round in which any arm $i \in G_\varepsilon$ is added to $G_r$, we define $R_{\max}$ as the round by which all good arms have been added to $G_r$, i.e., $R_{\max} \coloneqq \min\{r : G_r = G_\varepsilon\} = \max_{i \in G_\varepsilon} R_i$.

Lemma EC.8

$R_{\max} \le \lceil h(0.25\,\alpha_\varepsilon) \rceil$.

Proof EC.9

Proof. Recall that $\alpha_\varepsilon = \min_{i \in G_\varepsilon} (\mu_i - \mu_1 + \varepsilon) = \min_{i \in G_\varepsilon} (\varepsilon - \Delta_i)$. By Lemma EC.6, $R_i \le \lceil h(0.25(\varepsilon - \Delta_i)) \rceil$ for $i \in G_\varepsilon$. Furthermore, $h(\cdot)$ is monotonically decreasing in its argument on $(0, \infty)$, and $\varepsilon - \Delta_i > 0$ for $i \in G_\varepsilon$. Then

$$R_{\max} = \max_{i \in G_\varepsilon} R_i \le \max_{i \in G_\varepsilon} \lceil h(0.25(\varepsilon - \Delta_i)) \rceil = \left\lceil h\left(0.25 \min_{i \in G_\varepsilon} (\varepsilon - \Delta_i)\right) \right\rceil = \lceil h(0.25\,\alpha_\varepsilon) \rceil.$$

\Halmos

Bound the Total Samples up to $R_{\max}$.

The total number of samples up to round $R_{\max}$ is $\sum_{r=1}^{R_{\max}} \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a})$. By line 4 of Algorithm 4.1, we have

$$T_r \le \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2}. \tag{EC.5.22}$$

Hence,

$$\begin{aligned} \sum_{r=1}^{R_{\max}} T_r &= \sum_{r=1}^{R_{\max}} \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \le \sum_{r=1}^{R_{\max}} \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &\le \sum_{r=1}^{R_{\max}} 2^{2r+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max} \\ &\le c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max}, \end{aligned} \tag{EC.5.23}$$

where $c$ is a universal constant, and recall that $R_{\max} \le \lceil h(0.25\,\alpha_\varepsilon) \rceil$. The second step follows from equation (EC.5.22). The third step enlarges each logarithmic factor to its value at $r = R_{\max}$ and uses $2d/\varepsilon_r^2 = 2^{2r+1} d$. The last step bounds the geometric sum $\sum_{r=1}^{R_{\max}} 2^{2r+1}$ by $c \, 2^{2R_{\max}+1}$ for a finite universal constant $c$.
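The scaling in (EC.5.22)–(EC.5.23) can be sanity-checked numerically: summing the per-round budgets and comparing against the closed-form bound with, say, $c = 2$. The values of $d$, $K$, $\delta$, and $R_{\max}$ below are arbitrary illustrative choices.

```python
# Numerical check of the budget accumulation in (EC.5.22)-(EC.5.23).

import math

d, K, delta = 4, 20, 0.05

def T_r(r):
    """Per-round budget bound from (EC.5.22), with eps_r = 2^{-r}."""
    eps_r = 2.0 ** (-r)
    return (2 * d / eps_r**2) * math.log(2 * K * r * (r + 1) / delta) \
        + d * (d + 1) / 2

for R_max in [3, 6, 10]:
    total = sum(T_r(r) for r in range(1, R_max + 1))
    # Closed-form bound of (EC.5.23) with the illustrative constant c = 2.
    bound = 2 * 2 ** (2 * R_max + 1) * d * math.log(
        2 * K * R_max * (R_max + 1) / delta) + d * (d + 1) / 2 * R_max
    assert total <= bound
    print(R_max, int(total), int(bound))
```

Since $\sum_{r \le R} 2^{2r+1} = \tfrac{2}{3}(4^{R+1} - 4) \le \tfrac{4}{3} \cdot 2^{2R+1}$, any constant $c \ge 4/3$ suffices here.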

Next, we bound the two terms in equation (EC.5.18) and equation (EC.5.19) separately. The first term, which represents the samples taken before round $R_{\max}$, can be bounded as follows.

Bound (EC.5.18).

Recall that $R_{\max} \le \lceil h(0.25\,\alpha_\varepsilon) \rceil$ is the round where $G_{R_{\max}} = G_\varepsilon$. We have

$$\begin{aligned} &\sum_{r=1}^{\infty} \mathbb{1}\left[G_{r-1} \neq G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) = \sum_{r=1}^{R_{\max}} \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\le c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max}, \end{aligned} \tag{EC.5.24}$$

where the second step follows from the definition of $R_{\max}$, since the indicator $\mathbb{1}[G_{r-1} \neq G_\varepsilon]$ vanishes for all $r > R_{\max}$, and the last step follows directly from equation (EC.5.23).

Bound (EC.5.19).

Next, we have

$$\begin{aligned} &\sum_{r=1}^{\infty} \mathbb{1}\left[G_{r-1} = G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) = \sum_{r=R_{\max}+1}^{\infty} \mathbb{1}\left[B_{r-1} \neq G_\varepsilon^c\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\le \sum_{r=R_{\max}+1}^{\infty} |G_\varepsilon^c \setminus B_{r-1}| \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) = \sum_{r=R_{\max}+1}^{\infty} \sum_{i \in G_\varepsilon^c} \mathbb{1}\left[i \notin B_{r-1}\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &= \sum_{i \in G_\varepsilon^c} \sum_{r=R_{\max}+1}^{\infty} \mathbb{1}\left[i \notin B_{r-1}\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \le \sum_{i \in G_\varepsilon^c} \sum_{r=1}^{\infty} \mathbb{1}\left[i \notin B_{r-1}\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\le \sum_{i \in G_\varepsilon^c} \sum_{r=1}^{\infty} \mathbb{1}\left[i \notin B_{r-1}\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right), \end{aligned} \tag{EC.5.25}$$

where the second step follows from the definition of $R_{\max}$, as $G_r = G_\varepsilon$ for all rounds beyond $R_{\max}$. The third step uses the fact that as long as $G_\varepsilon^c \setminus B_{r-1}$ is nonempty, $|G_\varepsilon^c \setminus B_{r-1}| \ge 1$ dominates the corresponding indicator. The fourth step writes the cardinality as a sum of indicators over arms in $G_\varepsilon^c$. The remaining steps exchange the order of the double summation and enlarge the summation range over the rounds $r$.

EC.5.4 Step 3: Bound the Expected Total Samples of LinFACT

We now take expectations over the total number of samples drawn for the given bandit instance $\boldsymbol{\mu}$, conditioned on the high-probability event $\mathcal{E}_1$:

$$\begin{aligned} \mathbb{E}_{\boldsymbol{\mu}}[T_G \mid \mathcal{E}_1] &\le \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[G_r \cup B_r \neq [K]\right] \mid \mathcal{E}_1\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &= \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[G_{r-1} \neq G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \mid \mathcal{E}_1\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\quad + \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[G_{r-1} = G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \mid \mathcal{E}_1\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\overset{\text{(EC.5.24)}}{\le} c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max} \\ &\quad + \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[G_{r-1} = G_\varepsilon\right] \mathbb{1}\left[G_{r-1} \cup B_{r-1} \neq [K]\right] \mid \mathcal{E}_1\right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a}) \\ &\overset{\text{(EC.5.25)}}{\le} c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max} \\ &\quad + \sum_{i \in G_\varepsilon^c} \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right). \end{aligned} \tag{EC.5.26}$$

The first step follows from the stopping condition of LinFACT-G and the additivity of expectation. The second step applies the decomposition established in Step 2. The subsequent steps use the results from equation (EC.5.24) and equation (EC.5.25).

Next, we bound the last term. For a given $i \in G_\varepsilon^c$ and round $r$, we first bound the probability that $i \notin B_r$. By the Borel–Cantelli lemma, this implies that the probability of $i$ never being added to any $B_r$ is zero.

Lemma EC.10

For $i \in G_\varepsilon^c$ and $r \ge \left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}[i \notin B_r] \mid \mathcal{E}_1\right] = 0$.

Proof EC.11

Proof.

First, we have for any $i \in G_\varepsilon^c$,

$$\mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}[i \notin B_r] \mid \mathcal{E}_1\right] = \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \,\middle|\, \mathcal{E}_1, \, i \notin B_m \ (m \in \{1, 2, \ldots, r-1\})\right] \le \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \,\middle|\, \mathcal{E}_1\right], \tag{EC.5.27}$$

where the first step follows from the fact that if $i \notin B_r$, then arm $i \in G_\varepsilon^c$ was never added to $B_m$ in any round $m$ from $1$ to $r-1$, and the second step accounts for the conditional expectation.

If $i \in B_{r-1}$, then $i \in B_r$ by definition. Otherwise, if $i \notin B_{r-1}$, then under event $\mathcal{E}_1$, for $i \in G_\varepsilon^c$ and $r \ge \left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil$, we have

$$\max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \ge \Delta_i - 2^{-r+1} \ge \varepsilon + 2\varepsilon_r, \tag{EC.5.28}$$

which implies that $i \in B_r$ by line 6 of Algorithm 4.1. In other words, we established the correctness of the algorithm when line 6 is triggered in Step 1; we now specify the exact condition under which line 6 occurs. In particular, under event $\mathcal{E}_1$, if $i \notin B_{r-1}$, then for all $i \in G_\varepsilon^c$ and $r \ge \left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil$, we have

$$\mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i > 2\varepsilon_r + \varepsilon\right] \,\middle|\, i \notin B_{r-1}, \mathcal{E}_1\right] = 1. \tag{EC.5.29}$$

Therefore, writing $\max_j$ for $\max_{j \in \mathcal{A}_I(r-1)}$, for all $i \in G_\varepsilon^c$ and $r \ge \left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil$, we have

$$\begin{aligned} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_j \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \,\middle|\, \mathcal{E}_1\right] &= \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_j \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \mathbb{1}\left[i \notin B_{r-1}\right] \,\middle|\, \mathcal{E}_1\right] \\ &\quad + \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_j \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \mathbb{1}\left[i \in B_{r-1}\right] \,\middle|\, \mathcal{E}_1\right] \\ &= \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_j \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \mathbb{1}\left[i \notin B_{r-1}\right] \,\middle|\, \mathcal{E}_1\right] \\ &= \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[\max_j \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon\right] \,\middle|\, i \notin B_{r-1}, \mathcal{E}_1\right] \mathbb{P}\left(i \notin B_{r-1} \mid \mathcal{E}_1\right) \\ &= 0, \end{aligned} \tag{EC.5.30}$$

where the first step follows from the additivity of expectation; the second step uses the deterministic fact that $\mathbb{1}[i \notin B_r]\,\mathbb{1}[i \in B_{r-1}] = 0$; the third step decomposes via the conditional expectation, using that the expectation of the indicator of the conditioning event is its probability; and the last step follows from equation (EC.5.29). The lemma then follows by combining this result with equation (EC.5.27). \Halmos
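The threshold round in Lemma EC.10 is, like Lemma EC.6, a purely arithmetic statement about the halving radii, which the following sketch checks; the gap values are illustrative, not from the paper.

```python
# Sketch of the threshold round in Lemma EC.10: once
# r >= ceil(log2(4/(Delta_i - eps))), the clean-event inequality (EC.5.28),
#     max_j muhat_j - muhat_i >= Delta_i - 2^{-r+1} >= eps + 2*eps_r,
# forces a bad arm i into B_r.

import math

eps = 0.1
for delta_i in [0.15, 0.3, 0.8]:            # gaps of hypothetical bad arms
    r0 = math.ceil(math.log2(4.0 / (delta_i - eps)))
    eps_r = 2.0 ** (-r0)
    # Delta_i - 2^{-r+1} >= eps + 2*eps_r  <=>  Delta_i - eps >= 4 * 2^{-r}
    assert delta_i - 2.0 ** (-r0 + 1) >= eps + 2 * eps_r
    print(delta_i, r0)
```

Arms far below the $\varepsilon$-best threshold (large $\Delta_i - \varepsilon$) are eliminated after only a few rounds, mirroring the $(\Delta_i - \varepsilon)^{-2}$ terms in the complexity bound.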

Lemma EC.12

For $i \in G_\varepsilon^c$, we have

$$\sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \le \frac{d(d+1)}{2} \log_2\left(\frac{8}{\Delta_i - \varepsilon}\right) + \xi \, \frac{256\,d}{(\Delta_i - \varepsilon)^2} \log\left(\frac{2K}{\delta} \log_2 \frac{16}{\Delta_i - \varepsilon}\right). \tag{EC.5.31}$$

Proof EC.13

Proof.

$$\begin{aligned} &\sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &= \sum_{r=1}^{\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &\quad + \sum_{r=\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil + 1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &= \sum_{r=1}^{\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) + 0 \\ &\le \sum_{r=1}^{\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil} \left( d \, 2^{2r+1} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &\le \frac{d(d+1)}{2} \log_2\left(\frac{8}{\Delta_i - \varepsilon}\right) + 2d \log\left(\frac{2K}{\delta}\right) \sum_{r=1}^{\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil} 2^{2r} + 4d \sum_{r=1}^{\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil} 2^{2r} \log(r+1) \\ &\le \frac{d(d+1)}{2} \log_2\left(\frac{8}{\Delta_i - \varepsilon}\right) + \xi \, \frac{256\,d}{(\Delta_i - \varepsilon)^2} \log\left(\frac{2K}{\delta} \log_2 \frac{16}{\Delta_i - \varepsilon}\right), \end{aligned} \tag{EC.5.32}$$

where $\xi$ is a sufficiently large universal constant. The second step decomposes the summation across rounds; the tail sum vanishes by Lemma EC.10. The next step bounds the expectation of the indicator by its maximum value of 1. The final steps replace the round $r$ with its maximum value $\left\lceil \log_2\left(\frac{4}{\Delta_i - \varepsilon}\right) \right\rceil$ and absorb constants into $\xi$. \Halmos

Summarizing the aforementioned results, we have

$$\mathbb{E}_{\boldsymbol{\mu}}[T_G \mid \mathcal{E}_1] \le c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max} + \sum_{i \in G_\varepsilon^c} \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right), \tag{EC.5.33}$$

where

$$R_{\max} \le \lceil h(0.25\,\alpha_\varepsilon) \rceil \le \log_2 \frac{8}{\alpha_\varepsilon}. \tag{EC.5.34}$$

Also, by Lemma EC.12, we have

$$\sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \le \frac{d(d+1)}{2} \log_2\left(\frac{8}{\Delta_i - \varepsilon}\right) + \xi \, \frac{256\,d}{(\Delta_i - \varepsilon)^2} \log\left(\frac{2K}{\delta} \log_2 \frac{16}{\Delta_i - \varepsilon}\right). \tag{EC.5.35}$$

Then, we arrive at the final result as follows:

$$\begin{aligned} \mathbb{E}_{\boldsymbol{\mu}}[T_G \mid \mathcal{E}_1] &\le c \, 2^{2R_{\max}+1} d \log\left(\frac{2KR_{\max}(R_{\max}+1)}{\delta}\right) + \frac{d(d+1)}{2} R_{\max} \\ &\quad + \sum_{i \in G_\varepsilon^c} \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[\mathbb{1}\left[i \notin B_{r-1}\right] \mid \mathcal{E}_1\right] \left( \frac{2d}{\varepsilon_r^2} \log\left(\frac{2Kr(r+1)}{\delta}\right) + \frac{d(d+1)}{2} \right) \\ &\le c \, \frac{256\,d}{\alpha_\varepsilon^2} \log\left(\frac{2K}{\delta} \log_2\left(\frac{16}{\alpha_\varepsilon}\right)\right) + \frac{d(d+1)}{2} \log_2 \frac{8}{\alpha_\varepsilon} \\ &\quad + \sum_{i \in G_\varepsilon^c} \left( \frac{d(d+1)}{2} \log_2\left(\frac{8}{\Delta_i - \varepsilon}\right) + \xi \, \frac{256\,d}{(\Delta_i - \varepsilon)^2} \log\left(\frac{2K}{\delta} \log_2 \frac{16}{\Delta_i - \varepsilon}\right) \right), \end{aligned} \tag{EC.5.36}$$

which can be further expressed as

$$\mathbb{E}[T_G \mid \mathcal{E}_1] = \mathcal{O}\left( d\,\alpha_\varepsilon^{-2} \log\left(\frac{K}{\delta} \log(\alpha_\varepsilon^{-2})\right) + d^2 \log(\alpha_\varepsilon^{-1}) \right) + \sum_{i \in G_\varepsilon^c} \mathcal{O}\left( d^2 \log (\Delta_i - \varepsilon)^{-1} + d\,(\Delta_i - \varepsilon)^{-2} \log\left(\frac{K}{\delta} \log (\Delta_i - \varepsilon)^{-2}\right) \right). \tag{EC.5.37}$$

EC.5.5 A Refined Bound

The result obtained in the previous steps involves a summation over the set $G_\varepsilon$, which can be further improved by eliminating this summation. Rather than focusing solely on the round $R_{\max}$, defined in Lemma EC.10 as the round in which all arms in $G_\varepsilon$ are classified into $G_r$, we now define the round at which all classifications are complete, i.e., $G_r \cup B_r = [K]$.

Lemma EC.14

For $i\in G_\varepsilon$ and $r \ge \left\lceil \log_2\!\left(\frac{4}{\varepsilon-\Delta_i}\right) \right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin G_r]\mid\mathcal{E}_1] = 0$.

Proof EC.15

Proof. First, for any $i\in G_\varepsilon$,

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin G_r]\mid\mathcal{E}_1]
&= \mathbb{E}_{\boldsymbol{\mu}}\!\left[\mathbb{1}\!\left[\max_{j\in\mathcal{A}_I(r-1)}\hat{\mu}_j-\hat{\mu}_i \ge -2\varepsilon_r+\varepsilon\right] \,\Big|\, \mathcal{E}_1,\ i\notin G_m\ (m=1,2,\dots,r-1)\right] \\
&\le \mathbb{E}_{\boldsymbol{\mu}}\!\left[\mathbb{1}\!\left[\max_{j\in\mathcal{A}_I(r-1)}\hat{\mu}_j-\hat{\mu}_i \ge -2\varepsilon_r+\varepsilon\right] \,\Big|\, \mathcal{E}_1\right].
\end{aligned}$$

(EC.5.38)

If ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 , then ๐‘– โˆˆ ๐บ ๐‘Ÿ by definition. Otherwise, if ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , then under event โ„ฐ 1 , for ๐‘– โˆˆ ๐บ ๐œ€ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ๐œ€ โˆ’ ฮ” ๐‘– ) โŒ‰ , we have

max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ค ๐œ‡ arg โก max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ๐‘– + 2 โˆ’ ๐‘Ÿ + 1 โ‰ค ฮ” ๐‘– + 2 โˆ’ ๐‘Ÿ + 1 โ‰ค ๐œ€ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ ,

(EC.5.39)

which implies that ๐‘– โˆˆ ๐บ ๐‘Ÿ by line 8 of the Algorithm 4.1. In other words, we now specify the exact condition under which line 8 will occur. In particular, under event โ„ฐ 1 , if ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , for all ๐‘– โˆˆ ๐บ ๐œ€ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ๐œ€ โˆ’ ฮ” ๐‘– ) โŒ‰ , we have

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ค โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โˆฃ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 ]

1 .

(EC.5.40)

Deterministically, ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 ]

0 . Therefore,

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โˆฃ โ„ฐ 1 ]

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ โ„ฐ 1 ]

+ ๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ โ„ฐ 1 ]

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ โ„ฐ 1 ]

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 ] โ€‹ โ„™ โ€‹ ( ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 โˆฃ โ„ฐ 1 )

+ ๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 ] โ€‹ โ„™ โ€‹ ( ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 โˆฃ โ„ฐ 1 )

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 ] โ€‹ โ„™ โ€‹ ( ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 โˆฃ โ„ฐ 1 )

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โˆฃ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 ] โ€‹ ๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 ] โˆฃ โ„ฐ 1 ]

0 ,

(EC.5.41)

where the second line comes from the additivity of expectation. The fourth line follows the deterministic result that ๐Ÿ™ โ€‹ [ ๐‘– โˆ‰ ๐บ ๐‘Ÿ ] โ€‹ ๐Ÿ™ โ€‹ [ ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 ]

0 . The fifth line applies a decomposition based on the conditional expectation. The eighth line uses the fact that the expectation of an indicator function equals the corresponding probability. The final line follows from the result in equation (EC.5.40). The lemma then follows by combining this result with equation (EC.15). \Halmos

Lemma EC.16

The round at which all classifications are complete and the final answer is returned is $R_{\mathrm{upper}} = \max\left\{\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil, \left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil\right\}$.

Proof EC.17

Proof. Combining the result of Lemma EC.14 with that of Lemma EC.10, we have the following: for $i\in G_\varepsilon^c$ and $r\ge\left\lceil\log_2\!\left(\frac{4}{\Delta_i-\varepsilon}\right)\right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin B_r]\mid\mathcal{E}_1] = 0$; for $i\in G_\varepsilon$ and $r\ge\left\lceil\log_2\!\left(\frac{4}{\varepsilon-\Delta_i}\right)\right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin G_r]\mid\mathcal{E}_1] = 0$. Since $\alpha_\varepsilon = \min_{i\in G_\varepsilon}(\varepsilon-\Delta_i)$ and $\beta_\varepsilon = \min_{i\in G_\varepsilon^c}(\Delta_i-\varepsilon)$, for any round $r\ge R_{\mathrm{upper}}$, all arms have been classified into either $G_r$ or $B_r$, marking the termination of the algorithm. \Halmos

Lemma EC.18

For the expected sample complexity conditioned on the high-probability event $\mathcal{E}_1$, we have

$$\mathbb{E}_{\boldsymbol{\mu}}[T_{\mathcal{XY}}\mid\mathcal{E}_1] \le \tau\max\left\{\frac{256\,d}{\alpha_\varepsilon^2}\log\!\left(\frac{2K}{\delta}\log_2\frac{16}{\alpha_\varepsilon}\right), \frac{256\,d}{\beta_\varepsilon^2}\log\!\left(\frac{2K}{\delta}\log_2\frac{16}{\beta_\varepsilon}\right)\right\} + \frac{d(d+1)}{2}R_{\mathrm{upper}},$$

(EC.5.42)

where $\tau$ is a universal constant and $R_{\mathrm{upper}} = \max\left\{\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil, \left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil\right\}$.

Proof EC.19

Proof. We have

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{\mu}}[T_{\mathcal{XY}}\mid\mathcal{E}_1]
&\le \sum_{r=1}^{\infty}\mathbb{E}_{\boldsymbol{\mu}}\!\left[\mathbb{1}\!\left[G_r\cup B_r \ne [K]\right]\mid\mathcal{E}_1\right]\sum_{\boldsymbol{a}\in\mathcal{A}(r-1)}T_r(\boldsymbol{a}) \\
&\le \sum_{r=1}^{R_{\mathrm{upper}}}\left(d\,2^{2r+1}\log\!\left(\frac{2Kr(r+1)}{\delta}\right)+\frac{d(d+1)}{2}\right) \\
&\le \frac{d(d+1)}{2}R_{\mathrm{upper}} + 2d\log\!\left(\frac{2K}{\delta}\right)\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r} + 4d\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r}\log(r+1) \\
&\le 4\log\!\left[\frac{2K}{\delta}(R_{\mathrm{upper}}+1)\right]\sum_{r=1}^{R_{\mathrm{upper}}}d\,2^{2r} + \frac{d(d+1)}{2}R_{\mathrm{upper}}
\end{aligned}$$

(EC.5.43)

$$\le \tau\max\left\{\frac{256\,d}{\alpha_\varepsilon^2}\log\!\left(\frac{2K}{\delta}\log_2\frac{16}{\alpha_\varepsilon}\right), \frac{256\,d}{\beta_\varepsilon^2}\log\!\left(\frac{2K}{\delta}\log_2\frac{16}{\beta_\varepsilon}\right)\right\} + \frac{d(d+1)}{2}R_{\mathrm{upper}}.$$

(EC.5.44)

The second line follows from Lemma EC.16. Then, we have

$$\mathbb{E}[T_G\mid\mathcal{E}] = \mathcal{O}\!\left(d\,\xi^{-2}\log\!\left(\frac{K}{\delta}\log_2(\xi^{-2})\right) + d^2\log(\xi^{-1})\right),$$

(EC.5.45)

where $\xi = \min(\alpha_\varepsilon,\beta_\varepsilon)/16$ is the (scaled) minimum of the two gap parameters $\alpha_\varepsilon$ and $\beta_\varepsilon$, indicating the difficulty of the problem instance.

\Halmos

Appendix EC.6 Additional Insights into the Algorithm Optimality

In this section, we examine the relationship between the lower bound and the upper bound from another perspective and provide additional insights into the algorithm's optimality. This relationship serves as the basis for the derivation of a near-optimal upper bound in Theorem 4.2.

For โˆ€ ๐‘– โˆˆ ( ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โˆฉ ๐บ ๐œ€ ) and โˆ€ ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) , ๐‘— โ‰  ๐‘– , in round ๐‘Ÿ , we have

๐’š ๐‘— , ๐‘– โŠค โ€‹ ( ๐œฝ ^ ๐‘Ÿ โˆ’ ๐œฝ ) โ‰ค 2 โ€‹ ๐œ€ ๐‘Ÿ

(EC.6.1)

and

๐’š ๐‘— , ๐‘– โŠค โ€‹ ๐œฝ ^ ๐‘Ÿ โˆ’ ๐œ€ โ‰ค ๐’š ๐‘— , ๐‘– โŠค โ€‹ ๐œฝ + 2 โ€‹ ๐œ€ ๐‘Ÿ โˆ’ ๐œ€ .

(EC.6.2)

Lemma EC.1

Define $G_r' \coloneqq \left\{i: \exists j\in\mathcal{A}_I(r-1),\ j\ne i,\ \boldsymbol{y}_{j,i}^\top\boldsymbol{\theta} - \varepsilon > -4\varepsilon_r\right\}$. We always have $(\mathcal{A}_I(r-1)\cap G_\varepsilon\cap G_r^c) \subseteq G_r'$.

Proof EC.2

Proof. For $r=1$, the lemma follows directly from the assumption in Theorem 4.2 that $\max_{i\in[K]}|\mu_1-\varepsilon-\mu_i|\le 2$. For $r\ge 2$, we proceed by contradiction. Suppose $i\in(\mathcal{A}_I(r-1)\cap G_\varepsilon\cap G_r^c)\cap(G_r')^c$; then for every $j\in\mathcal{A}_I(r-1)$ with $j\ne i$, we have

$$\boldsymbol{y}_{j,i}^\top\boldsymbol{\theta} \le -4\varepsilon_r+\varepsilon.$$

(EC.6.3)

Hence, using equation (EC.6.2), for every $j\in\mathcal{A}_I(r-1)$ with $j\ne i$, we have

$$\boldsymbol{y}_{j,i}^\top\hat{\boldsymbol{\theta}}_r - \varepsilon \le -2\varepsilon_r,$$

(EC.6.4)

which is exactly the condition for the algorithm to add arm $i$ into $G_r$ at line 8 of the algorithm; this yields a contradiction and completes the argument. Moreover, note that when $r\ge\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil$, we have $G_r'\cap G_\varepsilon = \emptyset$. Furthermore, considering that $i\in G_\varepsilon$, we have $\boldsymbol{y}_{i,j}^\top\boldsymbol{\theta}+\varepsilon > 0$. \Halmos

There is, however, one exceptional case that invalidates the above derivation. Specifically, when $i$ is the index of the arm with the largest mean value, i.e., $i = \arg\max_{j\in\mathcal{A}_I(r-1)}\mu_j$, the proof no longer holds. For this situation to occur, it must be the case that $\arg\max_{j\in\mathcal{A}_I(r-1)}\mu_j \in G_r^c$, which is equivalent to $\varepsilon \le 2\varepsilon_r$. Since this condition can only hold for a limited number of rounds, its impact is negligible and can therefore be ignored.

On the other hand, for $i\in(\mathcal{A}_I(r-1)\cap G_\varepsilon^c)$, in any round $r$, we have

$$\boldsymbol{y}_{1,i}^\top(\boldsymbol{\theta}-\hat{\boldsymbol{\theta}}_r) \le 2\varepsilon_r$$

(EC.6.5)

and

$$\boldsymbol{y}_{1,i}^\top\hat{\boldsymbol{\theta}}_r - \varepsilon \ge \boldsymbol{y}_{1,i}^\top\boldsymbol{\theta} - 2\varepsilon_r - \varepsilon.$$

(EC.6.6)

Lemma EC.3

Define $B_r' \coloneqq \left\{i: \boldsymbol{y}_{1,i}^\top\boldsymbol{\theta} - \varepsilon < 4\varepsilon_r\right\}$. We always have $(\mathcal{A}_I(r-1)\cap G_\varepsilon^c) \subset B_r'$.

Proof EC.4

Proof. To establish this, first note that when $r=1$, the lemma directly follows from the assumption in Theorem 4.2 that $\max_{i\in[K]}|\mu_1-\varepsilon-\mu_i|\le 2$. For $r\ge 2$, using the same contradiction argument, assume that $i\in(\mathcal{A}_I(r-1)\cap G_\varepsilon^c)\cap(B_r')^c$. Then we must have

$$\boldsymbol{y}_{1,i}^\top\boldsymbol{\theta} \ge 4\varepsilon_r+\varepsilon.$$

(EC.6.7)

Consequently, by equation (EC.6.6), it follows that

$$\boldsymbol{y}_{1,i}^\top\hat{\boldsymbol{\theta}}_r - \varepsilon \ge 2\varepsilon_r.$$

(EC.6.8)

This is precisely the condition for the algorithm to add arm $i$ into $B_r$ and eliminate it from $\mathcal{A}_I(r-1)$, as specified in line 6 of the algorithm. This contradiction leads to the desired result. Moreover, when $r\ge\left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil$, we have $B_r'\cap G_\varepsilon^c = \emptyset$. Finally, for $i\in G_\varepsilon^c$, it follows that $\boldsymbol{y}_{1,i}^\top\boldsymbol{\theta} - \varepsilon > 0$. \Halmos

We now present a critical lemma that establishes the connection between the lower bound in Theorem 3.2 and the upper bound. Define $\mathcal{G}_{\mathcal{Y}}$ as the gauge of $\mathcal{Y}(\mathcal{A})$, where $\mathcal{A}$ denotes the initial set of all arm vectors. The details are provided in Lemma EC.3.

Lemma EC.5

Considering the lower bound $(\Gamma^*)^{-1}$ in Theorem 3.2, we have

$$(\Gamma^*)^{-1} \ge \frac{\mathcal{G}_{\mathcal{Y}}^2 L_2}{4 R_{\mathrm{upper}}\, d\, L_1}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r-3}\min_{\boldsymbol{p}\in S_K}\max_{i,j\in\mathcal{A}_I(r-1)}\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2,$$

(EC.6.9)

where $R_{\mathrm{upper}} = \max\left\{\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil, \left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil\right\}$, and $L_1$ and $L_2$ are constants.

Proof EC.6

Proof. When round $r$ exceeds $R_{\mathrm{upper}}$, we have $G_r'\cap G_\varepsilon = \emptyset$ and $B_r'\cap G_\varepsilon^c = \emptyset$, implying that $G_r\cup B_r = [K]$ and the algorithm terminates. From Theorem 3.2, we obtain

$$\begin{aligned}
(\Gamma^*)^{-1}
&= \min_{\boldsymbol{p}\in S_K}\max_{(i,j,m)\in\mathcal{X}}\max\left\{\frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{i,j}^\top\boldsymbol{\theta}+\varepsilon)^2}, \frac{2\|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{1,m}^\top\boldsymbol{\theta}-\varepsilon)^2}\right\} \\
&= \min_{\boldsymbol{p}\in S_K}\max_{r\le R_{\mathrm{upper}}}\max_{i\in G_r'\cap G_\varepsilon}\max_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\max_{m\in B_r'\cap G_\varepsilon^c}\max\left\{\frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{i,j}^\top\boldsymbol{\theta}+\varepsilon)^2}, \frac{2\|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(\boldsymbol{y}_{1,m}^\top\boldsymbol{\theta}-\varepsilon)^2}\right\} \\
&\ge \min_{\boldsymbol{p}\in S_K}\max_{r\le R_{\mathrm{upper}}}\max_{i\in G_r'\cap G_\varepsilon}\max_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\max_{m\in B_r'\cap G_\varepsilon^c}\max\left\{\frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(4\varepsilon_r)^2}, \frac{2\|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(4\varepsilon_r)^2}\right\} \\
&\overset{\text{(i)}}{\ge} \frac{1}{R_{\mathrm{upper}}}\min_{\boldsymbol{p}\in S_K}\sum_{r=1}^{R_{\mathrm{upper}}}\max_{i\in G_r'\cap G_\varepsilon}\max_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\max_{m\in B_r'\cap G_\varepsilon^c}\max\left\{\frac{2\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(4\varepsilon_r)^2}, \frac{2\|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2}{(4\varepsilon_r)^2}\right\} \\
&\overset{\text{(ii)}}{\ge} \frac{1}{R_{\mathrm{upper}}}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r-3}\min_{\boldsymbol{p}\in S_K}\max_{i\in G_r'\cap G_\varepsilon}\max_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\max_{m\in B_r'\cap G_\varepsilon^c}\max\left\{\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2, \|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2\right\} \\
&\overset{\text{(iii)}}{\ge} \frac{1}{R_{\mathrm{upper}}}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r-3}\min_{\boldsymbol{p}\in S_K}\max_{i\in\mathcal{A}_I(r-1)\cap G_\varepsilon\cap G_r^c}\max_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\max_{m\in\mathcal{A}_I(r-1)\cap G_\varepsilon^c}\max\left\{\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2, \|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2\right\} \\
&\overset{\text{(iv)}}{\ge} \frac{\mathcal{G}_{\mathcal{Y}}^2 L_2}{d\,L_1}\,\frac{1}{R_{\mathrm{upper}}}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r-3}\min_{\boldsymbol{p}\in S_K}\max_{i\in\mathcal{A}_I(r-1)\cap G_\varepsilon\setminus\{1\}}\max_{m\in\mathcal{A}_I(r-1)\cap G_\varepsilon^c}\max\left\{\|\boldsymbol{y}_{1,i}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2, \|\boldsymbol{y}_{1,m}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2\right\} \\
&\overset{\text{(v)}}{\ge} \frac{\mathcal{G}_{\mathcal{Y}}^2 L_2}{4 R_{\mathrm{upper}}\, d\, L_1}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r-3}\min_{\boldsymbol{p}\in S_K}\max_{\substack{i,j\in\mathcal{A}_I(r-1)\\ i\ne j}}\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 \\
&= \frac{\mathcal{G}_{\mathcal{Y}}^2 L_2}{32\, R_{\mathrm{upper}}\, d\, L_1}\sum_{r=1}^{R_{\mathrm{upper}}}2^{2r}\, g_{\mathcal{XY}}(\mathcal{Y}(\mathcal{A}(r-1))),
\end{aligned}$$

(EC.6.10)

where (i) follows from the fact that the maximum of positive numbers is always greater than or equal to their average, and (ii) uses the fact that the minimum of a sum is greater than or equal to the sum of the minimums. (iii) arises from the set inclusion relationships established in Lemma EC.1 and Lemma EC.3. (iv) is a direct consequence of Lemma EC.3 with $j=1$. Finally, for (v), note that for any $i,j\in\mathcal{A}_I(r-1)$, we have $\max_{i,j\in\mathcal{A}_I(r-1),\, i\ne j}\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 \le 4\max_{i\in\mathcal{A}_I(r-1)\setminus\{1\}}\|\boldsymbol{y}_{1,i}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2$. \Halmos

Moreover, to provide additional insights into the G-optimal design, from equation (EC.5.43), we obtain

$$\mathbb{E}_{\boldsymbol{\mu}}[T\mid\mathcal{E}_1] \le 4\log\!\left[\frac{2K}{\delta}(R_{\mathrm{upper}}+1)\right]\sum_{r=1}^{R_{\mathrm{upper}}}d\,2^{2r} + \frac{d(d+1)}{2}R_{\mathrm{upper}}.$$

(EC.6.11)

From inequalities (EC.6.10) and (EC.6.11), it follows that to align the upper and lower bounds, one must establish $d \le c\min_{\boldsymbol{p}\in S_K}\max_{i,j\in\mathcal{A}_I(r-1),\, i\ne j}\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2$ for some universal constant $c$. However, applying the Kiefer–Wolfowitz theorem from Lemma EC.5 yields only the reverse inequality

$$\min_{\boldsymbol{p}\in S_K}\max_{i,j\in\mathcal{A}_I(r-1),\, i\ne j}\|\boldsymbol{y}_{i,j}\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 \le 4\min_{\boldsymbol{p}\in S_K}\max_{i\in\mathcal{A}_I(r-1)}\|\boldsymbol{a}_i\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 = 4d.$$

(EC.6.12)

This result indicates that LinFACT-G cannot achieve the lower bound established in Theorem 3.2.
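The Kiefer–Wolfowitz identity invoked above, $\min_{\boldsymbol{p}}\max_i\|\boldsymbol{a}_i\|_{\boldsymbol{V}_{\boldsymbol{p}}^{-1}}^2 = d$, can be checked numerically. The following Python sketch (ours, not the paper's implementation) runs a standard Frank–Wolfe/Fedorov-type iteration for the G-optimal design and verifies that the minimax norm approaches $d$:

```python
import numpy as np

def g_optimal_design(A: np.ndarray, iters: int = 2000) -> np.ndarray:
    """Frank-Wolfe iteration for the G-optimal design over the arm rows of A.

    By Kiefer-Wolfowitz, at the optimum max_i ||a_i||^2_{V(p)^-1} = d."""
    K, d = A.shape
    p = np.full(K, 1.0 / K)  # start from the uniform design
    for _ in range(iters):
        V = A.T @ (p[:, None] * A)            # information matrix V(p)
        g = np.einsum('ij,jk,ik->i', A, np.linalg.inv(V), A)  # ||a_i||^2_{V^-1}
        i = int(np.argmax(g))
        gamma = (g[i] / d - 1) / (g[i] - 1)   # exact line-search step
        p = (1 - gamma) * p + gamma * np.eye(K)[i]
    return p

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
p = g_optimal_design(A)
V = A.T @ (p[:, None] * A)
gmax = max(a @ np.linalg.solve(V, a) for a in A)
print(round(gmax, 3))  # close to d = 4
```

This is exactly the $4d$ ceiling in (EC.6.12): the design can push the worst-case directional variance down to $d$, but no further.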

Appendix EC.7 Proof of Theorem 4.2

The central idea of this proof is to establish a direct relationship between the lower bound and the key minimax summation terms in the upper bound, thereby enabling the sample complexity to be bounded explicitly in terms of the lower bound term $(\Gamma^*)^{-1}$. We first show that the good event $\mathcal{E}_3$ defined below occurs with probability at least $1-\delta_r$ in each round $r$, where $\delta_r$ denotes the probability that $\mathcal{E}_3$ fails to hold in round $r$. Taking the union bound across rounds then yields Lemma EC.1, which shows that $\mathcal{E}_3$ holds in every round with probability at least $1-\delta$. Consequently, we can sum the per-round sample complexity bounds (conditioned on the good event) to obtain the overall bound on the sample complexity.

First, we define the good event $\mathcal{E}_3$:

$$\mathcal{E}_3 = \bigcap_{i\in\mathcal{A}_I(r-1)}\ \bigcap_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\ \bigcap_{r\in\mathbb{N}}\left\{\left|(\hat{\mu}_j(r)-\hat{\mu}_i(r)) - (\mu_j-\mu_i)\right| \le 2\varepsilon_r\right\}.$$

(EC.7.1)

Since arms are sampled according to a preset allocation (i.e., a fixed design), we introduce the following lemma to provide a confidence region for the estimated parameter $\boldsymbol{\theta}$.

Lemma EC.1

Let $\delta\in(0,1)$. Then, it holds that $\mathbb{P}(\mathcal{E}_3)\ge 1-\delta$.

Proof EC.2

Proof. Since $\hat{\boldsymbol{\theta}}_r$ is an ordinary least squares estimator of $\boldsymbol{\theta}$ and the noise is i.i.d., it follows that $\boldsymbol{y}^\top(\boldsymbol{\theta}-\hat{\boldsymbol{\theta}}_r)$ is $\|\boldsymbol{y}\|_{\boldsymbol{V}_r^{-1}}^2$-sub-Gaussian for all $\boldsymbol{y}\in\mathcal{Y}(\mathcal{A}(r-1))$. Moreover, the guarantees of the rounding procedure ensure that

$$\|\boldsymbol{y}\|_{\boldsymbol{V}_r^{-1}}^2 \le (1+\epsilon)\,g_{\mathcal{XY}}(\mathcal{Y}(\mathcal{A}(r-1)))/T_r \le \frac{2^{-2r-1}}{\log\frac{2K(K-1)r(r+1)}{\delta}}$$

(EC.7.2)

for all $\boldsymbol{y}\in\mathcal{Y}(\mathcal{A}(r-1))$, as ensured by our choice of $T_r$ in equation (22). Since the right-hand side is deterministic and does not depend on the randomness of the arm rewards, we have that for any $\rho>0$ and $\boldsymbol{y}\in\mathcal{Y}(\mathcal{A}(r-1))$,

$$\mathbb{P}\left\{\left|\boldsymbol{y}^\top(\boldsymbol{\theta}-\hat{\boldsymbol{\theta}}_r)\right| > \sqrt{\frac{2^{-2r}\log(2/\rho)}{\log\frac{2K(K-1)r(r+1)}{\delta}}}\right\} \le \rho.$$

(EC.7.3)

Letting $\rho = \frac{\delta}{K(K-1)r(r+1)}$ and applying a union bound over all possible $\boldsymbol{y}\in\mathcal{Y}(\mathcal{A}(r-1))$, where $|\mathcal{Y}(\mathcal{A}(r-1))| \le |\mathcal{Y}(\mathcal{A}(0))| \le K(K-1)$, we obtain the desired probability guarantee:

$$\begin{aligned}
\mathbb{P}(\mathcal{E}_3^c)
&= \mathbb{P}\left\{\bigcup_{i\in\mathcal{A}_I(r-1)}\bigcup_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\bigcup_{r\in\mathbb{N}}\left|(\hat{\mu}_j(r)-\hat{\mu}_i(r))-(\mu_j-\mu_i)\right| > 2\varepsilon_r\right\} \\
&\le \sum_{r=1}^{\infty}\mathbb{P}\left\{\bigcup_{i\in\mathcal{A}_I(r-1)}\bigcup_{\substack{j\in\mathcal{A}_I(r-1)\\ j\ne i}}\left|(\hat{\mu}_j(r)-\hat{\mu}_i(r))-(\mu_j-\mu_i)\right| > \varepsilon_r\right\} \\
&\le \sum_{r=1}^{\infty}\sum_{i=1}^{K}\sum_{\substack{j=1\\ j\ne i}}^{K}\frac{\delta}{K(K-1)r(r+1)} = \delta.
\end{aligned}$$

(EC.7.4)

Taking the union bound over all rounds $r\in\mathbb{N}$ completes the proof. \Halmos
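The telescoping failure budget in (EC.7.4) can be verified numerically: summing $\frac{\delta}{K(K-1)r(r+1)}$ over all arm pairs and the first $R$ rounds gives $\delta\left(1-\frac{1}{R+1}\right)$, which approaches $\delta$. A small Python sketch (ours, using exact rational arithmetic for clarity):

```python
from fractions import Fraction

def union_bound_total(K: int, rounds: int, delta: Fraction) -> Fraction:
    """Partial sum of the failure budget delta/(K(K-1)r(r+1)) over pairs and rounds."""
    total = Fraction(0)
    for r in range(1, rounds + 1):
        # K(K-1) ordered pairs, each with budget delta/(K(K-1) r (r+1))
        total += K * (K - 1) * delta / (K * (K - 1) * r * (r + 1))
    return total  # equals delta * rounds/(rounds+1), -> delta as rounds -> infinity
```

The choice $\frac{1}{r(r+1)} = \frac{1}{r}-\frac{1}{r+1}$ is what makes the anytime union bound sum exactly to $\delta$.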

Thus, by the standard result of the $\mathcal{XY}$-optimal design, we have

$$C_{\delta/K}(r) \coloneqq \varepsilon_r,$$

(EC.7.5)

which matches the expression in equation (EC.5.5) for the G-optimal design.

Lemma EC.3

On the event $\mathcal{E}_3$, the best arm $1\in\mathcal{A}_I(r)$ for all $r\in\mathbb{N}$.

Proof EC.4

Proof. If the event $\mathcal{E}_3$ holds, then for any arm $i\in\mathcal{A}_I(r-1)$, we have

$$\hat{\mu}_i(r) - \hat{\mu}_1(r) \le \mu_i - \mu_1 + 2\varepsilon_r \le 2\varepsilon_r < 2\varepsilon_r + \varepsilon,$$

(EC.7.6)

which implies that $\hat{\mu}_1(r) + \varepsilon_r > \max_{i\in\mathcal{A}_I(r-1)}\hat{\mu}_i - \varepsilon_r - \varepsilon = L_r$ and $\hat{\mu}_1(r) + \varepsilon_r \ge \max_{i\in\mathcal{A}_I(r-1)}\hat{\mu}_i(r) - C_{\delta/K}(r)$.

These inequalities ensure that arm 1 will not be eliminated from $\mathcal{A}_I(r)$ in LinFACT. \Halmos

Lemma EC.5

With probability at least $1-\delta$, and employing an $\epsilon$-efficient rounding procedure, LinFACT-$\mathcal{XY}$ successfully identifies all $\varepsilon$-best arms and achieves instance-optimal sample complexity up to logarithmic factors, as given by

$$T \le c\left[d\,R_{\mathrm{upper}}\log\!\left(\frac{2K(R_{\mathrm{upper}}+1)}{\delta}\right)\right](\Gamma^*)^{-1} + q(\epsilon)\,R_{\mathrm{upper}},$$

(EC.7.7)

where $c$ is a universal constant and $R_{\mathrm{upper}} = \max\left\{\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil, \left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil\right\}$.

Proof EC.6

Proof. Combining the result of Lemma EC.1, we conclude that with probability at least $1-\delta$,

$$\begin{aligned}
T
&\le \sum_{r=1}^{R_{\mathrm{upper}}}\max\left\{\left\lceil\frac{2\,g_{\mathcal{XY}}(\mathcal{Y}(\mathcal{A}(r-1)))(1+\epsilon)}{\varepsilon_r^2}\log\!\left(\frac{2K(K-1)r(r+1)}{\delta}\right)\right\rceil, q(\epsilon)\right\} \\
&\le \sum_{r=1}^{R_{\mathrm{upper}}}2\cdot 2^{2r}\,g_{\mathcal{XY}}(\mathcal{Y}(\mathcal{A}(r-1)))(1+\epsilon)\log\!\left(\frac{2K(K-1)r(r+1)}{\delta}\right) + (1+q(\epsilon))\,R_{\mathrm{upper}} \\
&\le \left[64(1+\epsilon)\log\!\left(\frac{2K(K-1)R_{\mathrm{upper}}(R_{\mathrm{upper}}+1)}{\delta}\right)\frac{R_{\mathrm{upper}}\,d\,L_1}{\mathcal{G}_{\mathcal{Y}}^2 L_2}\right](\Gamma^*)^{-1} + (1+q(\epsilon))\,R_{\mathrm{upper}} \\
&\le \left[128(1+\epsilon)\log\!\left(\frac{2K(R_{\mathrm{upper}}+1)}{\delta}\right)\frac{R_{\mathrm{upper}}\,d\,L_1}{\mathcal{G}_{\mathcal{Y}}^2 L_2}\right](\Gamma^*)^{-1} + (1+q(\epsilon))\,R_{\mathrm{upper}} \\
&\le c\left[d\,R_{\mathrm{upper}}\log\!\left(\frac{2K(R_{\mathrm{upper}}+1)}{\delta}\right)\right](\Gamma^*)^{-1} + q(\epsilon)\,R_{\mathrm{upper}},
\end{aligned}$$

(EC.7.8)

where $c$ is a universal constant, $R_{\mathrm{upper}} = \max\left\{\left\lceil\log_2\frac{4}{\alpha_\varepsilon}\right\rceil, \left\lceil\log_2\frac{4}{\beta_\varepsilon}\right\rceil\right\}$, and the third inequality follows from equation (EC.6.10).

Then, let ๐œ‰

min โก ( ๐›ผ ๐œ€ , ๐›ฝ ๐œ€ ) / 16 be the minimum gap of the problem instance. Considering that the approximation error term ๐‘ž โ€‹ ( ๐œ– ) is in the form of ๐’ช โ€‹ ( ๐‘‘ ๐œ– 2 ) (Allen-Zhu et al. 2021, Fiez et al. 2019), we have

๐”ผ โ€‹ [ ๐‘‡ ๐’ณ โ€‹ ๐’ด โˆฃ โ„ฐ ]

๐’ช โ€‹ ( ๐‘‘ ฮ“ โˆ— โ€‹ ๐œ‰ โˆ’ 1 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) โ€‹ log โก ( ๐พ ๐›ฟ โ€‹ log โก ( ๐œ‰ โˆ’ 2 ) ) + ๐‘‘ ๐œ– 2 โ€‹ log โก ( ๐œ‰ โˆ’ 1 ) ) .

(EC.7.9) \Halmos Appendix EC.8Proof of Theorem 5.1 EC.8.1Step 1: Define the Clean Events

The core of the proof lies in similarly defining the round at which all classifications are completed under the misspecified model. To this end, we reconstruct the anytime confidence radius for each arm in round $r$ and redefine the high-probability event over the entire execution of the algorithm. We denote this clean event as $\mathcal{E}_{1m}$:

$$\mathcal{E}_{1m} = \left\{\bigcap_{i\in\mathcal{A}_I(r-1)}\bigcap_{r\in\mathbb{N}}\left|\hat{\mu}_i(r)-\mu_i\right| \le \varepsilon_r + L_m\sqrt{d}\right\}.$$

(EC.8.1)

Similarly, since arms are sampled according to a preset allocation (i.e., the optimal design criterion), we invoke Lemma EC.1 to derive an adjusted confidence region for the estimated parameter $\hat{\boldsymbol{\theta}}_t$. When following the G-optimal sampling rule, as specified in lines 2 and 4 of the pseudocode, we obtain the following result for each round $r$:

$$\boldsymbol{V}_r = \sum_{\boldsymbol{a}\in\mathrm{Supp}(\pi_r)}T_r(\boldsymbol{a})\,\boldsymbol{a}\boldsymbol{a}^\top \succeq \frac{2d}{\varepsilon_r^2}\log\!\left(\frac{2Kr(r+1)}{\delta}\right)\boldsymbol{V}(\pi).$$

(EC.8.2)
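The relation above pins down the per-round sample budget implied by the G-optimal sampling rule. A Python sketch (ours, with the function name and the ceiling rounding as illustrative assumptions) evaluates this budget with $\varepsilon_r = 2^{-r}$:

```python
import math

def round_budget(d: int, K: int, r: int, delta: float) -> int:
    """Per-round G-optimal sample budget matching (EC.8.2):
    T_r ~ (2d / eps_r^2) * log(2 K r (r+1) / delta), with eps_r = 2**(-r)."""
    eps_r = 2.0 ** (-r)
    return math.ceil(2 * d / eps_r**2 * math.log(2 * K * r * (r + 1) / delta))
```

The budget quadruples each round as the target accuracy $\varepsilon_r$ halves, which is what makes the final round dominate the total sample complexity.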

To give the confidence radius, we have the following decomposition:

$$\begin{aligned}
\left|\langle\hat{\boldsymbol{\theta}}_r-\boldsymbol{\theta},\,\boldsymbol{a}\rangle\right|
&= \left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\Delta^m(\boldsymbol{a}_{A_s})\,\boldsymbol{a}_{A_s} + \boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\eta_s\,\boldsymbol{a}_{A_s}\right| \\
&\le \left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\Delta^m(\boldsymbol{a}_{A_s})\,\boldsymbol{a}_{A_s}\right| + \left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\eta_s\,\boldsymbol{a}_{A_s}\right|,
\end{aligned}$$

(EC.8.3)

where the first term is bounded by

$$\begin{aligned}
\left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\Delta^m(\boldsymbol{a}_{A_s})\,\boldsymbol{a}_{A_s}\right|
&= \left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{\boldsymbol{b}\in\mathcal{A}(r-1)}T_r(\boldsymbol{b})\,\Delta^m(\boldsymbol{b})\,\boldsymbol{b}\right| \\
&\le L_m\sum_{\boldsymbol{b}\in\mathcal{A}(r-1)}T_r(\boldsymbol{b})\left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\boldsymbol{b}\right| \\
&\le L_m\sqrt{\left(\sum_{\boldsymbol{b}\in\mathcal{A}(r-1)}T_r(\boldsymbol{b})\right)\boldsymbol{a}^\top\left(\sum_{\boldsymbol{b}\in\mathcal{A}(r-1)}T_r(\boldsymbol{b})\,\boldsymbol{V}_r^{-1}\boldsymbol{b}\boldsymbol{b}^\top\boldsymbol{V}_r^{-1}\right)\boldsymbol{a}} \\
&= L_m\sqrt{\sum_{\boldsymbol{b}\in\mathcal{A}(r-1)}T_r(\boldsymbol{b})\,\|\boldsymbol{a}\|_{\boldsymbol{V}_r^{-1}}^2} \\
&\le L_m\sqrt{d},
\end{aligned}$$

(EC.8.4)

where the first inequality follows from Hölder's inequality, the second from Jensen's inequality, and the last from the guarantee of the G-optimal exploration policy, which ensures that $\|\boldsymbol{a}\|_{\boldsymbol{V}_r^{-1}}^2 \le d/T_r$.

The second term is also bounded using Lemma EC.1 and the result in Lemma EC.1. For any arm $\boldsymbol{a}\in\mathcal{A}(r-1)$, with probability at least $1-\frac{\delta}{Kr(r+1)}$, we have

$$\begin{aligned}
\left|\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\sum_{s=1}^{T_r}\eta_s\,\boldsymbol{a}_{A_s}\right|
&\le \sqrt{2\,\|\boldsymbol{a}\|_{\boldsymbol{V}_r^{-1}}^2\log\!\left(\frac{2Kr(r+1)}{\delta}\right)} \\
&= \sqrt{2\,\boldsymbol{a}^\top\boldsymbol{V}_r^{-1}\boldsymbol{a}\,\log\!\left(\frac{2Kr(r+1)}{\delta}\right)} \\
&\le \sqrt{2\,\boldsymbol{a}^\top\left(\frac{\varepsilon_r^2}{2d}\,\frac{1}{\log\!\left(\frac{2Kr(r+1)}{\delta}\right)}\,\boldsymbol{V}(\pi)^{-1}\right)\boldsymbol{a}\,\log\!\left(\frac{2Kr(r+1)}{\delta}\right)} \\
&\le \varepsilon_r.
\end{aligned}$$

(EC.8.5)

Thus, with the standard result of the G-optimal design, we also have

$$C_{\delta/K}(r) \coloneqq \varepsilon_r.$$

(EC.8.6)

To establish the probability guarantee for event $\mathcal{E}_{1m}$, we combine the results from equations (EC.8.4) and (EC.8.5), yielding

$$\begin{aligned}
\mathbb{P}(\mathcal{E}_{1m}^c)
&= \mathbb{P}\left\{\bigcup_{i\in\mathcal{A}_I(r-1)}\bigcup_{r\in\mathbb{N}}\left|\hat{\mu}_i(r)-\mu_i\right| > \varepsilon_r + L_m\sqrt{d}\right\} \\
&\le \sum_{r=1}^{\infty}\mathbb{P}\left\{\bigcup_{i\in\mathcal{A}_I(r-1)}\left|\hat{\mu}_i(r)-\mu_i\right| > \varepsilon_r + L_m\sqrt{d}\right\} \\
&\le \sum_{r=1}^{\infty}\sum_{i=1}^{K}\frac{\delta}{Kr(r+1)} = \delta.
\end{aligned}$$

(EC.8.7)

Therefore, taking the union bound over rounds $r\in\mathbb{N}$, we have

$$\mathbb{P}(\mathcal{E}_{1m}) \ge 1-\delta.$$

(EC.8.8)

We also consider an additional event that characterizes the gaps between different arms, defined as follows:

$$\mathcal{E}_{2m} = \bigcap_{i\in G_\varepsilon}\bigcap_{j\in\mathcal{A}_I(r-1)}\bigcap_{r\in\mathbb{N}}\left\{\left|(\hat{\mu}_j(r)-\hat{\mu}_i(r))-(\mu_j-\mu_i)\right| \le 2\varepsilon_r + 2L_m\sqrt{d}\right\}.$$

(EC.8.9)

By equation (EC.8.1), for $i,j\in\mathcal{A}_I(r-1)$, we have

$$\begin{aligned}
&\mathbb{P}\left\{\left|(\hat{\mu}_j-\hat{\mu}_i)-(\mu_j-\mu_i)\right| > 2\varepsilon_r + 2L_m\sqrt{d} \,\Big|\, \mathcal{E}_{1m}\right\} \\
&\le \mathbb{P}\left\{\left|\hat{\mu}_j-\mu_j\right| + \left|\hat{\mu}_i-\mu_i\right| > 2\varepsilon_r + 2L_m\sqrt{d} \,\Big|\, \mathcal{E}_{1m}\right\} \\
&\le \mathbb{P}\left\{\left|\hat{\mu}_j-\mu_j\right| > \varepsilon_r + L_m\sqrt{d} \,\Big|\, \mathcal{E}_{1m}\right\} + \mathbb{P}\left\{\left|\hat{\mu}_i-\mu_i\right| > \varepsilon_r + L_m\sqrt{d} \,\Big|\, \mathcal{E}_{1m}\right\} \\
&= 0,
\end{aligned}$$

(EC.8.10)

which implies

$$\mathbb{P}(\mathcal{E}_{2m}\mid\mathcal{E}_{1m}) = 1.$$

(EC.8.11)

EC.8.2 Step 2: Bound the Expected Sample Complexity

To bound the expected sample complexity under model misspecification, we aim to identify the round in which all arms $i\in G_\varepsilon$ have been included in $G_r$, and the round in which all arms $i\in G_\varepsilon^c$ have been added to $B_r$.

Lemma EC.1

For $i\in G_\varepsilon$ and $L_m < \frac{\alpha_\varepsilon}{2\sqrt{d}}$, if $r \ge \left\lceil\log_2\!\left(\frac{4}{\varepsilon-\Delta_i-2L_m\sqrt{d}}\right)\right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin G_r]\mid\mathcal{E}_{1m}] = 0$.

Proof EC.2

Proof. First, for any $i\in G_\varepsilon$,

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{\mu}}[\mathbb{1}[i\notin G_r]\mid\mathcal{E}_{1m}]
&= \mathbb{E}_{\boldsymbol{\mu}}\!\left[\mathbb{1}\!\left[\max_{j\in\mathcal{A}_I(r-1)}\hat{\mu}_j-\hat{\mu}_i \ge -2\varepsilon_r+\varepsilon\right] \,\Big|\, \mathcal{E}_{1m},\ i\notin G_m\ (m=1,2,\dots,r-1)\right] \\
&\le \mathbb{E}_{\boldsymbol{\mu}}\!\left[\mathbb{1}\!\left[\max_{j\in\mathcal{A}_I(r-1)}\hat{\mu}_j-\hat{\mu}_i \ge -2\varepsilon_r+\varepsilon\right] \,\Big|\, \mathcal{E}_{1m}\right].
\end{aligned}$$

(EC.8.12)

If ๐‘– โˆˆ ๐บ ๐‘Ÿ โˆ’ 1 , then ๐‘– โˆˆ ๐บ ๐‘Ÿ by definition. Otherwise, if ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , then under event โ„ฐ 1 โ€‹ ๐‘š , for ๐‘– โˆˆ ๐บ ๐œ€ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ๐œ€ โˆ’ ฮ” ๐‘– โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ ) โŒ‰ , we have

max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ค ๐œ‡ arg โก max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ๐‘– + 2 โˆ’ ๐‘Ÿ + 1 + 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ โ‰ค ฮ” ๐‘– + 2 โˆ’ ๐‘Ÿ + 1 + 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ โ‰ค ๐œ€ โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ ,

(EC.8.13)

which implies that ๐‘– โˆˆ ๐บ ๐‘Ÿ by line 8 of the algorithm. In particular, under event โ„ฐ 1 โ€‹ ๐‘š , if ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , for all ๐‘– โˆˆ ๐บ ๐œ€ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ๐œ€ โˆ’ ฮ” ๐‘– โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ ) โŒ‰ , we have

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ค โˆ’ 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โˆฃ ๐‘– โˆ‰ ๐บ ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 โ€‹ ๐‘š ]

1 .

(EC.8.14)

Consequently, $\mathbb{1}[i \notin G_r]\, \mathbb{1}[i \in G_{r-1}] = 0$. Therefore, writing $M_r := \mathbb{1}\!\left[ \max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \ge -2\varepsilon_r + \varepsilon \right]$,

$\mathbb{E}_{\boldsymbol{\mu}}\left[ M_r \mid \mathcal{E}_{1m} \right] = \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \notin G_{r-1}] \mid \mathcal{E}_{1m} \right] + \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \in G_{r-1}] \mid \mathcal{E}_{1m} \right]$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \notin G_{r-1}] \mid \mathcal{E}_{1m} \right]$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \notin G_{r-1}] \mid i \notin G_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \notin G_{r-1} \mid \mathcal{E}_{1m} \right) + \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \notin G_{r-1}] \mid i \in G_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \in G_{r-1} \mid \mathcal{E}_{1m} \right)$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r\, \mathbb{1}[i \notin G_{r-1}] \mid i \notin G_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \notin G_{r-1} \mid \mathcal{E}_{1m} \right)$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ M_r \mid i \notin G_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin G_{r-1}] \mid \mathcal{E}_{1m} \right] = 0,$ (EC.8.15)

where the first equality follows from the additivity of expectation; the second from the deterministic fact that $\mathbb{1}[i \notin G_r]\, \mathbb{1}[i \in G_{r-1}] = 0$; the third from the law of total expectation; and the fourth because $\mathbb{1}[i \notin G_{r-1}] = 0$ under the conditioning $i \in G_{r-1}$. The fifth equality uses the fact that the expectation of an indicator function equals the corresponding probability, and the final equality follows from equation (EC.8.14). The lemma then follows together with equation (EC.8.12).

In the perfectly linear model, $\varepsilon - \Delta_i > 0$ always holds for any $i \in G_\varepsilon$. However, under model misspecification, the sign of this term within the logarithm must be verified. To ensure positivity for all $i \in G_\varepsilon$, it is necessary that $\alpha_\varepsilon > 2L_m\sqrt{d}$. As the misspecification magnitude $L_m$ approaches $\alpha_\varepsilon / (2\sqrt{d})$, the upper bound on the expected sample complexity increases sharply, since the misspecification significantly impairs the identification of $\varepsilon$-best arms. Moreover, when $L_m \ge \alpha_\varepsilon / (2\sqrt{d})$, the sample complexity can no longer be bounded in this form, which is intuitive and consistent with the general insights discussed earlier in Section 5.3. \Halmos

Lemma EC.3

For $i \in G_\varepsilon^c$ and $L_m < \frac{\beta_\varepsilon}{2\sqrt{d}}$, if $r \ge \left\lceil \log_2\!\left( \frac{4}{\Delta_i - \varepsilon - 2L_m\sqrt{d}} \right) \right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_r] \mid \mathcal{E}_{1m} \right] = 0$.

Proof EC.4

Proof. First, for any $i \in G_\varepsilon^c$,

$\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_r] \mid \mathcal{E}_{1m} \right] = \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}\!\left[ \max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon \right] \;\middle|\; \mathcal{E}_{1m},\, i \notin B_m \ (m \in \{1, 2, \ldots, r-1\}) \right] \le \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}\!\left[ \max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon \right] \;\middle|\; \mathcal{E}_{1m} \right].$ (EC.8.16)

If ๐‘– โˆˆ ๐ต ๐‘Ÿ โˆ’ 1 , then ๐‘– โˆˆ ๐ต ๐‘Ÿ by definition. Otherwise, if ๐‘– โˆ‰ ๐ต ๐‘Ÿ โˆ’ 1 , then under event โ„ฐ 1 โ€‹ ๐‘š , for ๐‘– โˆˆ ๐บ ๐œ€ ๐‘ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ฮ” ๐‘– โˆ’ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ ) โŒ‰ , we have

max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– โ‰ฅ ฮ” ๐‘– โˆ’ 2 โˆ’ ๐‘Ÿ + 1 โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ โ‰ฅ ๐œ€ + 2 โ€‹ ๐œ€ ๐‘Ÿ ,

(EC.8.17)

which implies that ๐‘– โˆˆ ๐ต ๐‘Ÿ by line 6 of the algorithm. In particular, under event โ„ฐ 1 โ€‹ ๐‘š , if ๐‘– โˆ‰ ๐ต ๐‘Ÿ โˆ’ 1 , for all ๐‘– โˆˆ ๐บ ๐œ€ ๐‘ and ๐‘Ÿ โ‰ฅ โŒˆ log 2 โก ( 4 ฮ” ๐‘– โˆ’ ๐œ€ โˆ’ 2 โ€‹ ๐ฟ ๐‘š โ€‹ ๐‘‘ ) โŒ‰ , we have

๐”ผ ๐ โ€‹ [ ๐Ÿ™ โ€‹ [ max ๐‘— โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) โก ๐œ‡ ^ ๐‘— โˆ’ ๐œ‡ ^ ๐‘– > 2 โ€‹ ๐œ€ ๐‘Ÿ + ๐œ€ ] โˆฃ ๐‘– โˆ‰ ๐ต ๐‘Ÿ โˆ’ 1 , โ„ฐ 1 โ€‹ ๐‘š ]

1 .

(EC.8.18)

Deterministically, $\mathbb{1}[i \notin B_r]\, \mathbb{1}[i \in B_{r-1}] = 0$. Therefore, writing $N_r := \mathbb{1}\!\left[ \max_{j \in \mathcal{A}_I(r-1)} \hat{\mu}_j - \hat{\mu}_i \le 2\varepsilon_r + \varepsilon \right]$,

$\mathbb{E}_{\boldsymbol{\mu}}\left[ N_r \mid \mathcal{E}_{1m} \right] = \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \notin B_{r-1}] \mid \mathcal{E}_{1m} \right] + \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \in B_{r-1}] \mid \mathcal{E}_{1m} \right]$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \notin B_{r-1}] \mid \mathcal{E}_{1m} \right]$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \notin B_{r-1}] \mid i \notin B_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \notin B_{r-1} \mid \mathcal{E}_{1m} \right) + \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \notin B_{r-1}] \mid i \in B_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \in B_{r-1} \mid \mathcal{E}_{1m} \right)$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r\, \mathbb{1}[i \notin B_{r-1}] \mid i \notin B_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{P}\left( i \notin B_{r-1} \mid \mathcal{E}_{1m} \right)$
$= \mathbb{E}_{\boldsymbol{\mu}}\left[ N_r \mid i \notin B_{r-1},\, \mathcal{E}_{1m} \right] \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_{r-1}] \mid \mathcal{E}_{1m} \right] = 0.$ (EC.8.19)

The first equality follows from the linearity of expectation; the second from the deterministic fact that $\mathbb{1}[i \notin B_r]\, \mathbb{1}[i \in B_{r-1}] = 0$; the third from the law of total expectation; and the fourth because $\mathbb{1}[i \notin B_{r-1}] = 0$ under the conditioning $i \in B_{r-1}$. The fifth equality uses the identity that the expectation of an indicator function equals the corresponding probability, and the final equality follows from equation (EC.8.18). The lemma then follows by combining this result with equation (EC.8.16).

Similarly, we must ensure the positivity of the term inside the logarithm. To guarantee that $\Delta_i - \varepsilon - 2L_m\sqrt{d} > 0$ for every $i \in G_\varepsilon^c$, it is necessary that $\beta_\varepsilon > 2L_m\sqrt{d}$. \Halmos

Lemma EC.5

Suppose $L_m < \min\left\{ \frac{\alpha_\varepsilon}{2\sqrt{d}}, \frac{\beta_\varepsilon}{2\sqrt{d}} \right\}$. Then, under model misspecification, the round by which all arms are classified is $R'_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon - 2L_m\sqrt{d}} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon - 2L_m\sqrt{d}} \right\rceil \right\}$.

Proof EC.6

Proof. We combine the results of Lemmas EC.1 and EC.3. Specifically, for $i \in G_\varepsilon^c$ and $r \ge \left\lceil \log_2\!\left( \frac{4}{\Delta_i - \varepsilon - 2L_m\sqrt{d}} \right) \right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_r] \mid \mathcal{E}_{1m} \right] = 0$. Similarly, for $i \in G_\varepsilon$ and $r \ge \left\lceil \log_2\!\left( \frac{4}{\varepsilon - \Delta_i - 2L_m\sqrt{d}} \right) \right\rceil$, we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin G_r] \mid \mathcal{E}_{1m} \right] = 0$.

Noting that $\alpha_\varepsilon = \min_{i \in G_\varepsilon} (\varepsilon - \Delta_i)$ and $\beta_\varepsilon = \min_{i \in G_\varepsilon^c} (\Delta_i - \varepsilon)$, it follows that for any round $r \ge R'_{\text{upper}}$, all arms have been included in either $G_r$ or $B_r$, marking the termination of the algorithm.

\Halmos

Lemma EC.7

Under model misspecification, for the expected sample complexity conditioned on the high-probability event $\mathcal{E}_{1m}$, we have

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{mis}} \mid \mathcal{E}_{1m} \right] \le c \max\left\{ \frac{256 d}{(\alpha_\varepsilon - 2L_m\sqrt{d})^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\alpha_\varepsilon - 2L_m\sqrt{d}} \right), \frac{256 d}{(\beta_\varepsilon - 2L_m\sqrt{d})^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\beta_\varepsilon - 2L_m\sqrt{d}} \right) \right\} + \frac{d(d+1)}{2} R'_{\text{upper}},$ (EC.8.20)

where $c$ is a universal constant and $R'_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon - 2L_m\sqrt{d}} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon - 2L_m\sqrt{d}} \right\rceil \right\}$.

Proof EC.8

Proof. We can also decompose $T$ as in equation (EC.5.26), where all expectations are conditioned on the high-probability event $\mathcal{E}_{1m}$, as given by

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{mis}} \mid \mathcal{E}_{1m} \right] \le \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}\left[ G_r \cup B_r \ne [K] \right] \mid \mathcal{E}_{1m} \right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a})$
$\le \sum_{r=1}^{R'_{\text{upper}}} \left( d\, 2^{2r+1} \log\!\left( \frac{2Kr(r+1)}{\delta} \right) + \frac{d(d+1)}{2} \right)$
$\le \frac{d(d+1)}{2} R'_{\text{upper}} + 2d \log\!\left( \frac{2K}{\delta} \right) \sum_{r=1}^{R'_{\text{upper}}} 2^{2r} + 4d \sum_{r=1}^{R'_{\text{upper}}} 2^{2r} \log(r+1)$
$\le 4 \log\!\left[ \frac{2K}{\delta} \left( R'_{\text{upper}} + 1 \right) \right] \sum_{r=1}^{R'_{\text{upper}}} d\, 2^{2r} + \frac{d(d+1)}{2} R'_{\text{upper}}$
$\le c \max\left\{ \frac{256 d}{(\alpha_\varepsilon - 2L_m\sqrt{d})^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\alpha_\varepsilon - 2L_m\sqrt{d}} \right), \frac{256 d}{(\beta_\varepsilon - 2L_m\sqrt{d})^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\beta_\varepsilon - 2L_m\sqrt{d}} \right) \right\} + \frac{d(d+1)}{2} R'_{\text{upper}}.$ (EC.8.21)

Then, letting $\xi = \min\left( \alpha_\varepsilon - 2L_m\sqrt{d},\, \beta_\varepsilon - 2L_m\sqrt{d} \right) / 16$, we have

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{mis}} \mid \mathcal{E}_{1m} \right] = \mathcal{O}\left( d\, \xi^{-2} \log\!\left( \frac{K}{\delta} \log\left( \xi^{-2} \right) \right) + d^2 \log\left( \xi^{-1} \right) \right).$ (EC.8.22)

\Halmos

Appendix EC.9 Proof of Theorem 5.2

EC.9.1 Step 1: Confidence Radius

In equation (EC.8.1), the first term, arising from the unknown model misspecification, is unavoidable without prior knowledge. However, rather than focusing on the distance between the true parameter $\boldsymbol{\theta}$ and its estimator $\hat{\boldsymbol{\theta}}_t$, we instead compare the orthogonal projection $\boldsymbol{\theta}_t$ with $\hat{\boldsymbol{\theta}}_t$ in the direction of $\boldsymbol{x} \in \mathbb{R}^d$.

Building on the definition of the empirically optimal vector $\hat{\boldsymbol{\mu}}_o(r)$, with $(\hat{\boldsymbol{\theta}}_o(r), \hat{\boldsymbol{\Delta}}_{mo}(r))$ as its associated solution, and the orthogonal parameterization $(\boldsymbol{\theta}_r, \boldsymbol{\Delta}_m(r))$, we derive the confidence radius for each arm via the following decomposition. For any $i \in \mathcal{A}_I(r-1)$, let $\hat{\mu}_i(r) = \hat{\mu}_{o,i}(r)$ denote the value of the optimal estimator on arm $i$ in round $r$; then we have

$|\hat{\mu}_i(r) - \mu_i| = \left| \langle \hat{\boldsymbol{\theta}}_o(r) - \boldsymbol{\theta}, \boldsymbol{a}_i \rangle + \hat{\Delta}_{mo,i}(r) - \Delta_{mi} \right| \le \left| \langle \hat{\boldsymbol{\theta}}_o(r) - \boldsymbol{\theta}_r, \boldsymbol{a}_i \rangle \right| + \left| \langle \boldsymbol{\theta}_r - \boldsymbol{\theta}, \boldsymbol{a}_i \rangle \right| + \left| \hat{\Delta}_{mo,i}(r) - \Delta_{mi} \right|,$ (EC.9.1)

where the third term is bounded by definition as $|\hat{\Delta}_{mo,i}(r) - \Delta_{mi}| \le 2L_m$, while the first two terms can be bounded using the auxiliary lemmas provided below.

Lemma EC.1

Let $r$ be any round such that $\boldsymbol{V}_r$ is invertible. Consider the orthogonal parameterization $(\boldsymbol{\theta}_r, \boldsymbol{\Delta}_m(r))$ for $\boldsymbol{\mu} = \boldsymbol{\Psi} \boldsymbol{\theta} + \boldsymbol{\Delta}_m$ with $\|\boldsymbol{\Delta}_m\|_\infty \le L_m$. Then

$\|\boldsymbol{\theta}_r - \boldsymbol{\theta}\|_{\boldsymbol{V}_r} \le L_m \sqrt{T_r},$ (EC.9.2)

where $T_r$ is defined in equation (33).

Proof EC.2

Proof. We use the expression $\boldsymbol{\theta}_r = \boldsymbol{\theta} + \boldsymbol{V}_r^{-1} \boldsymbol{\Psi}^\top \boldsymbol{D}_{\boldsymbol{N}_r} \boldsymbol{\Delta}_m$ derived above. Letting $\boldsymbol{P}_{\boldsymbol{N}_r} = \boldsymbol{\Psi}_{\boldsymbol{N}_r} \left( \boldsymbol{\Psi}_{\boldsymbol{N}_r}^\top \boldsymbol{\Psi}_{\boldsymbol{N}_r} \right)^\dagger \boldsymbol{\Psi}_{\boldsymbol{N}_r}^\top$ be a projection, we have

$\|\boldsymbol{\theta}_r - \boldsymbol{\theta}\|_{\boldsymbol{V}_r} = \left\| \boldsymbol{V}_r^{-1} \boldsymbol{\Psi}^\top \boldsymbol{D}_{\boldsymbol{N}_r} \boldsymbol{\Delta}_m \right\|_{\boldsymbol{V}_r} = \sqrt{ \boldsymbol{\Delta}_m^\top \boldsymbol{D}_{\boldsymbol{N}_r} \boldsymbol{\Psi} \boldsymbol{V}_r^{-1} \boldsymbol{\Psi}^\top \boldsymbol{D}_{\boldsymbol{N}_r} \boldsymbol{\Delta}_m } = \left\| \boldsymbol{D}_{\boldsymbol{N}_r}^{1/2} \boldsymbol{\Delta}_m \right\|_{\boldsymbol{P}_{\boldsymbol{N}_r}} \le \left\| \boldsymbol{D}_{\boldsymbol{N}_r}^{1/2} \boldsymbol{\Delta}_m \right\| = \|\boldsymbol{\Delta}_m\|_{\boldsymbol{D}_{\boldsymbol{N}_r}} \le L_m \sqrt{T_r}.$ (EC.9.3)

\Halmos

Lemma EC.3

(Réda et al. (2021), Lemma 10). Let $\hat{\boldsymbol{\mu}}_o(r) = \boldsymbol{\Psi} \hat{\boldsymbol{\theta}}_o(r) + \hat{\boldsymbol{\Delta}}_{mo}(r)$ in round $r$, where $(\hat{\boldsymbol{\theta}}_o(r), \hat{\boldsymbol{\Delta}}_{mo}(r))$ is the solution of (30). Then the following relationship holds:

$\left\| \hat{\boldsymbol{\theta}}_o(r) - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r}^2 \le \left\| \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r}^2.$ (EC.9.4)

Lemma EC.4

(Lattimore and Szepesvári (2020), Section 20). Let $\delta \in (0,1)$. Then, with probability at least $1 - \delta$, it holds for all $t \in \mathbb{N}$ that

$\left\| \hat{\boldsymbol{\theta}}_t - \boldsymbol{\theta} \right\|_{\boldsymbol{V}_t} < 2\sqrt{2\left( d \log(6) + \log\!\left( \frac{1}{\delta} \right) \right)}.$ (EC.9.5)

Equation (EC.9.5) is not directly applicable in our setting due to the deviation term introduced by model misspecification, as shown in equation (EC.8.1). However, by leveraging the orthogonal parameterization, we obtain

$\left\| \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r}^2 = \left\| \boldsymbol{V}_r^{-1} \boldsymbol{\Psi}^\top S_r \right\|_{\boldsymbol{V}_r}^2 = \left\| \boldsymbol{\Psi}^\top S_r \right\|_{\boldsymbol{V}_r^{-1}}^2,$ (EC.9.6)

where $S_r = \sum_{s=1}^{T_r} \left( X_{A_s} - \mu_{A_s} \right)$ is the standard self-normalized quantity in the linear bandit literature, allowing existing techniques, such as various concentration inequalities, to be applied directly in the presence of model misspecification. This observation leads to the conclusion that, under misspecification, for any round $r \in \mathbb{N}$, it holds with probability at least $1 - \delta$ that

$\left\| \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r} < 2\sqrt{2\left( d \log(6) + \log\!\left( \frac{1}{\delta} \right) \right)}.$ (EC.9.7)

This result serves as the basis for designing the sampling budget and conducting theoretical analyses.
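As a numerical sanity check of Lemma EC.1's projection bound, the following sketch builds a random instance (hypothetical $\boldsymbol{\Psi}$, $\boldsymbol{\theta}$, $\boldsymbol{\Delta}_m$, and per-arm pull counts, none from the paper) and verifies $\|\boldsymbol{\theta}_r - \boldsymbol{\theta}\|_{\boldsymbol{V}_r} \le L_m\sqrt{T_r}$ with $\boldsymbol{\theta}_r = \boldsymbol{\theta} + \boldsymbol{V}_r^{-1}\boldsymbol{\Psi}^\top \boldsymbol{D}_{\boldsymbol{N}_r}\boldsymbol{\Delta}_m$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, L_m = 8, 3, 0.05
Psi = rng.normal(size=(K, d))              # arm feature matrix (hypothetical)
theta = rng.normal(size=d)                 # true linear parameter
Delta_m = rng.uniform(-L_m, L_m, size=K)   # misspecification, ||Delta_m||_inf <= L_m
counts = rng.integers(1, 20, size=K)       # pulls per arm (the diagonal of D_{N_r})
T_r = counts.sum()
D = np.diag(counts.astype(float))
V_r = Psi.T @ D @ Psi                      # weighted design matrix
# Orthogonal parameterization: theta_r = theta + V_r^{-1} Psi^T D Delta_m
theta_r = theta + np.linalg.solve(V_r, Psi.T @ D @ Delta_m)
dist = np.sqrt((theta_r - theta) @ V_r @ (theta_r - theta))
print(dist <= L_m * np.sqrt(T_r))          # Lemma EC.1 holds on this instance
```

The inequality holds because $\boldsymbol{D}^{1/2}\boldsymbol{\Psi}\boldsymbol{V}_r^{-1}\boldsymbol{\Psi}^\top\boldsymbol{D}^{1/2}$ is an orthogonal projection, mirroring the chain in (EC.9.3).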

Together with Lemmas EC.1, EC.3, and EC.4, the distance $|\tilde{\mu}_i(r) - \mu_i|$ can be bounded as follows: with probability at least $1 - \delta / (Kr(r+1))$, we have

$|\tilde{\mu}_i(r) - \mu_i| \le \left| \langle \hat{\boldsymbol{\theta}}_o(r) - \boldsymbol{\theta}_r, \boldsymbol{a}_i \rangle \right| + \left| \langle \boldsymbol{\theta}_r - \boldsymbol{\theta}, \boldsymbol{a}_i \rangle \right| + \left| \hat{\Delta}_{mo,i}(r) - \Delta_{mi} \right|$
$\le \left\| \hat{\boldsymbol{\theta}}_o(r) - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r} \|\boldsymbol{a}_i\|_{\boldsymbol{V}_r^{-1}} + \|\boldsymbol{\theta}_r - \boldsymbol{\theta}\|_{\boldsymbol{V}_r} \|\boldsymbol{a}_i\|_{\boldsymbol{V}_r^{-1}} + 2L_m$
$\le \left\| \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta}_r \right\|_{\boldsymbol{V}_r} \|\boldsymbol{a}_i\|_{\boldsymbol{V}_r^{-1}} + \|\boldsymbol{\theta}_r - \boldsymbol{\theta}\|_{\boldsymbol{V}_r} \|\boldsymbol{a}_i\|_{\boldsymbol{V}_r^{-1}} + 2L_m$
$\le \left( 2\sqrt{2\left( d \log(6) + \log\!\left( \frac{Kr(r+1)}{\delta} \right) \right)} + L_m \sqrt{T_r} \right) \sqrt{\frac{d}{T_r}} + 2L_m$
$\le \varepsilon_r + (\sqrt{d} + 2) L_m.$ (EC.9.8)

Then, we define a new clean event

$\mathcal{E}'_{1m} = \left\{ \bigcap_{r \in \mathbb{N}} \bigcap_{i \in \mathcal{A}_I(r-1)} |\hat{\mu}_i(r) - \mu_i| \le \varepsilon_r + (\sqrt{d} + 2) L_m \right\}.$ (EC.9.9)

This mirrors the event defined in equation (EC.8.1), allowing us to derive all corresponding results in Section EC.8.
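For comparison, the two clean-event radii can be evaluated side by side; a minimal sketch assuming the schedule $\varepsilon_r = 2^{-r}$ and hypothetical values of $r$, $L_m$, $d$ (none from the paper):

```python
import math

def radius_mis(r, L_m, d):
    # Clean-event radius in Section EC.8: eps_r + L_m * sqrt(d)
    return 2.0 ** -r + L_m * math.sqrt(d)

def radius_op(r, L_m, d):
    # Clean-event radius (EC.9.9) under orthogonal parameterization:
    # eps_r + (sqrt(d) + 2) * L_m
    return 2.0 ** -r + (math.sqrt(d) + 2) * L_m

r, L_m, d = 3, 0.05, 9
print(radius_mis(r, L_m, d), radius_op(r, L_m, d))
```

The orthogonal-parameterization radius is wider by the additive $2L_m$ term inherited from the misspecification offset in (EC.9.8).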

EC.9.2 Step 2: Bound the Expected Sample Complexity

Lemma EC.5

For $i \in G_\varepsilon$ and $L_m < \frac{\alpha_\varepsilon}{2(\sqrt{d} + 2)}$, if $r \ge \left\lceil \log_2\!\left( \frac{4}{\varepsilon - \Delta_i - 2L_m(\sqrt{d} + 2)} \right) \right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin G_r] \mid \mathcal{E}'_{1m} \right] = 0$.

Lemma EC.6

For $i \in G_\varepsilon^c$ and $L_m < \frac{\beta_\varepsilon}{2(\sqrt{d} + 2)}$, if $r \ge \left\lceil \log_2\!\left( \frac{4}{\Delta_i - \varepsilon - 2L_m(\sqrt{d} + 2)} \right) \right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_r] \mid \mathcal{E}'_{1m} \right] = 0$.

Lemma EC.7

For misspecification magnitude $L_m < \min\left\{ \frac{\alpha_\varepsilon}{2(\sqrt{d} + 2)}, \frac{\beta_\varepsilon}{2(\sqrt{d} + 2)} \right\}$, the round $R''_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon - 2L_m(\sqrt{d} + 2)} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon - 2L_m(\sqrt{d} + 2)} \right\rceil \right\}$ marks the point at which all classifications are completed and the algorithm terminates under model misspecification when using an estimation procedure based on orthogonal parameterization.

The proofs of Lemmas EC.5, EC.6, and EC.7 follow similar arguments to those presented in Section EC.8 and are therefore omitted for brevity.

Lemma EC.8

For the expected sample complexity given the high-probability event $\mathcal{E}'_{1m}$, we have

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{op}} \mid \mathcal{E}'_{1m} \right] \le c \max\left\{ \frac{256 d}{(\alpha_\varepsilon - 2L_m(\sqrt{d} + 2))^2} \log\!\left( \frac{K 6^d}{\delta} \log_2 \frac{8}{\alpha_\varepsilon - 2L_m(\sqrt{d} + 2)} \right), \frac{256 d}{(\beta_\varepsilon - 2L_m(\sqrt{d} + 2))^2} \log\!\left( \frac{K 6^d}{\delta} \log_2 \frac{8}{\beta_\varepsilon - 2L_m(\sqrt{d} + 2)} \right) \right\} + \frac{d(d+1)}{2} R''_{\text{upper}},$ (EC.9.10)

where $c$ is a universal constant and $R''_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon - 2L_m(\sqrt{d} + 2)} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon - 2L_m(\sqrt{d} + 2)} \right\rceil \right\}$ denotes the round in which all classifications are completed and the algorithm terminates under model misspecification with orthogonal parameterization.

Proof EC.9

Proof. The decomposition of $T$ in equation (EC.5.26) can be reformulated, where the expectations are conditioned on the high-probability event $\mathcal{E}'_{1m}$, given by

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{op}} \mid \mathcal{E}'_{1m} \right] \le \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}\left[ G_r \cup B_r \ne [K] \right] \mid \mathcal{E}'_{1m} \right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a})$
$\le \sum_{r=1}^{R''_{\text{upper}}} \left( d\, 2^{2r+3} \left( d \log(6) + \log\!\left( \frac{Kr(r+1)}{\delta} \right) \right) + \frac{d(d+1)}{2} \right)$
$\le \frac{d(d+1)}{2} R''_{\text{upper}} + 8d \log\!\left( \frac{K 6^d}{\delta} \right) \sum_{r=1}^{R''_{\text{upper}}} 2^{2r} + 16d \sum_{r=1}^{R''_{\text{upper}}} 2^{2r} \log(r+1)$
$\le 16 \log\!\left[ \frac{K 6^d}{\delta} \left( R''_{\text{upper}} + 1 \right) \right] \sum_{r=1}^{R''_{\text{upper}}} d\, 2^{2r} + \frac{d(d+1)}{2} R''_{\text{upper}}$ (EC.9.11)
$\le c \max\left\{ \frac{256 d}{(\alpha_\varepsilon - 2L_m(\sqrt{d} + 2))^2} \log\!\left( \frac{K 6^d}{\delta} \log_2 \frac{16}{\alpha_\varepsilon - 2L_m(\sqrt{d} + 2)} \right), \frac{256 d}{(\beta_\varepsilon - 2L_m(\sqrt{d} + 2))^2} \log\!\left( \frac{K 6^d}{\delta} \log_2 \frac{16}{\beta_\varepsilon - 2L_m(\sqrt{d} + 2)} \right) \right\} + \frac{d(d+1)}{2} R''_{\text{upper}},$ (EC.9.12)

where $c$ is a universal constant. Then, letting $\xi = \min\left( \alpha_\varepsilon - 2L_m(\sqrt{d} + 2),\, \beta_\varepsilon - 2L_m(\sqrt{d} + 2) \right) / 16$, we have

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_G^{\text{op}} \mid \mathcal{E}'_{1m} \right] = \mathcal{O}\left( d\, \xi^{-2} \log\!\left( \frac{K 6^d}{\delta} \log\left( \xi^{-1} \right) \right) + d^2 \log\left( \xi^{-1} \right) \right).$ (EC.9.13)

\Halmos

Appendix EC.10 Proof of Proposition 5.4

The proof of this proposition closely follows that of Theorem 5.1 in Section EC.8, with the only modification being the definition of $C_{\delta/K}(r)$, which is now set to $\varepsilon_r + L_m\sqrt{d}$ given that $L_m$ is known in advance. The conclusions in Section EC.8.1 remain valid. Additionally, the marginal round in Lemma EC.1 is updated from $\left\lceil \log_2\!\left( \frac{4}{\varepsilon - \Delta_i - 2L_m\sqrt{d}} \right) \right\rceil$ to $\left\lceil \log_2\!\left( \frac{4}{\varepsilon - \Delta_i} \right) \right\rceil$. The remainder of the proof follows directly by applying the same reasoning steps.

Appendix EC.11 Proof of Theorem 6.1

EC.11.1 Step 1: Rearrange the Clean Event

Following the derivation approach in Section EC.5.5, the core of the proof is to similarly identify the round in which all classifications are completed under the GLM setting. To this end, we reconstruct the anytime confidence radius for the arms in each round $r$ and define a high-probability event that holds throughout the execution of the algorithm.

Let $\check{\boldsymbol{V}}_r = \sum_{s=1}^{T_r} \dot{\mu}_{\text{link}}\!\left( \boldsymbol{a}_s^\top \check{\boldsymbol{\theta}}_r \right) \boldsymbol{a}_s \boldsymbol{a}_s^\top$, where $\check{\boldsymbol{\theta}}_r$ is some convex combination of the true parameter $\boldsymbol{\theta}$ and the parameter $\hat{\boldsymbol{\theta}}_r$ based on maximum likelihood estimation (MLE). It can be checked that the unweighted matrix $\boldsymbol{V}_r = \sum_{s=1}^{T_r} \boldsymbol{a}_s \boldsymbol{a}_s^\top$ in the standard linear model is the special case of this newly defined matrix $\check{\boldsymbol{V}}_r$ when the inverse link function is $\mu_{\text{link}}(x) = x$.

For each arm $i$, we define the following auxiliary vector

$W_i = (W_{i,1}, W_{i,2}, \ldots, W_{i,T_r}) = \boldsymbol{a}_i^\top \check{\boldsymbol{V}}_r^{-1} \left( \boldsymbol{a}_{A_1}, \boldsymbol{a}_{A_2}, \ldots, \boldsymbol{a}_{A_{T_r}} \right) \in \mathbb{R}^{T_r},$ (EC.11.1)

and thus we have

$\|W_i\|_2^2 = W_i W_i^\top = \boldsymbol{a}_i^\top \check{\boldsymbol{V}}_r^{-1} \boldsymbol{V}_r \check{\boldsymbol{V}}_r^{-1} \boldsymbol{a}_i.$ (EC.11.2)

To give the confidence radius under the GLM, we have the following statement for any arm $i \in \mathcal{A}_I(r-1)$ in round $r$:

$|\hat{\mu}_i(r) - \mu_i| = \left| \boldsymbol{a}_i^\top \left( \hat{\boldsymbol{\theta}}_r - \boldsymbol{\theta} \right) \right| = \left| \boldsymbol{a}_i^\top \check{\boldsymbol{V}}_r^{-1} \sum_{s=1}^{T_r} \boldsymbol{a}_{A_s} \eta_s \right| = \left| \sum_{s=1}^{T_r} W_{i,s}\, \eta_{A_s} \right|,$ (EC.11.3)

where the second equality is established with Lemma 1 of Kveton et al. (2023), and $T_r$, which is defined in equation (40), is the adjusted sampling budget in each round $r$ for the GLM. Since $(\eta_{A_s})_{s \in [T_r]}$ are independent, mean-zero, 1-sub-Gaussian random variables, $\sum_{s=1}^{T_r} W_{i,s}\, \eta_{A_s}$ is a $\|W_i\|_2$-sub-Gaussian variable for each arm $i$, and thus we have

โ„™ โ€‹ ( | ๐œ‡ ^ ๐‘– โ€‹ ( ๐‘Ÿ ) โˆ’ ๐œ‡ ๐‘– |

๐œ€ ๐‘Ÿ )

โ‰ค 2 โ€‹ exp โ€‹ ( โˆ’ ๐œ€ ๐‘Ÿ 2 2 โ€‹ โ€– ๐‘Š ๐‘– โ€– 2 2 ) .

(EC.11.4)

Since ๐œฝ ห‡ ๐‘Ÿ is not known in the process, we need to find another way to represent this term. By assumption, we know ๐œ‡ ห™ link โ‰ฅ ๐‘ min for some ๐‘ min โˆˆ โ„ + and for all ๐‘– โˆˆ ๐’œ ๐ผ โ€‹ ( ๐‘Ÿ โˆ’ 1 ) . Therefore ๐‘ min โˆ’ 1 โ€‹ ๐‘ฝ ๐‘Ÿ โˆ’ 1 โชฐ ๐‘ฝ ห‡ ๐‘Ÿ โˆ’ 1 by definition of ๐‘ฝ ห‡ ๐‘Ÿ , and then we have

โ€– ๐‘Š ๐‘– โ€– 2 2

๐’‚ ๐‘– โŠค โ€‹ ๐‘ฝ ห‡ ๐‘Ÿ โˆ’ 1 โ€‹ ๐‘ฝ ๐‘Ÿ โ€‹ ๐‘ฝ ห‡ ๐‘Ÿ โˆ’ 1 โ€‹ ๐’‚ ๐‘–

โ‰ค ๐’‚ ๐‘– โŠค โ€‹ ๐‘ min โˆ’ 1 โ€‹ ๐‘ฝ ๐‘Ÿ โˆ’ 1 โ€‹ ๐‘ฝ ๐‘Ÿ โ€‹ ๐‘ min โˆ’ 1 โ€‹ ๐‘ฝ ๐‘Ÿ โˆ’ 1 โ€‹ ๐’‚ ๐‘–

๐‘ min โˆ’ 2 โ€‹ โ€– ๐’‚ ๐‘– โ€– ๐‘ฝ ๐‘Ÿ โˆ’ 1 2 .

(EC.11.5)

Furthermore, if G-optimal design is considered, we have โ€– ๐’‚ ๐‘– โ€– ๐‘ฝ ๐‘Ÿ โˆ’ 1 2 โ‰ค ๐‘‘ ๐‘‡ ๐‘Ÿ . Together with equation (EC.11.4), we have

โ„™ โ€‹ ( | ๐œ‡ ^ ๐‘– โ€‹ ( ๐‘Ÿ ) โˆ’ ๐œ‡ ๐‘– |

๐œ€ ๐‘Ÿ )

โ‰ค 2 โ€‹ exp โ€‹ ( โˆ’ ๐œ€ ๐‘Ÿ 2 2 โ€‹ ๐‘ min โˆ’ 2 โ€‹ โ€– ๐’‚ ๐‘– โ€– ๐‘ฝ ๐‘Ÿ โˆ’ 1 2 )

โ‰ค 2 โ€‹ exp โ€‹ ( โˆ’ ๐œ€ ๐‘Ÿ 2 โ€‹ ๐‘ min 2 2 โ€‹ ๐‘‘ โ€‹ ๐‘‡ ๐‘Ÿ ) .

(EC.11.6)
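The identity (EC.11.2) and the bound (EC.11.5) can be verified numerically; a minimal sketch with randomly drawn features and link-derivative weights bounded below by a hypothetical $c_{\min}$ (the $d/T_r$ refinement in (EC.11.6) additionally requires the G-optimal design, which is not simulated here):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, c_min = 3, 50, 0.2
A = rng.normal(size=(T, d))                # features of the pulled arms a_{A_s}
w = rng.uniform(c_min, 1.0, size=T)        # mu_dot_link values, all >= c_min
V_check = A.T @ (w[:, None] * A)           # weighted matrix V_check_r
V = A.T @ A                                # unweighted matrix V_r
a_i = rng.normal(size=d)
W_i = np.linalg.solve(V_check, a_i) @ A.T  # W_{i,s} = a_i^T V_check^{-1} a_{A_s}
lhs = W_i @ W_i                            # ||W_i||_2^2
# Identity (EC.11.2) and bound (EC.11.5):
rhs = a_i @ np.linalg.solve(V_check, V @ np.linalg.solve(V_check, a_i))
bound = a_i @ np.linalg.solve(V, a_i) / c_min**2
print(np.isclose(lhs, rhs), lhs <= bound)
```

Because $\check{\boldsymbol{V}}_r \succeq c_{\min}\boldsymbol{V}_r$, the Loewner reversal of Lemma EC.1 in Appendix EC.13 gives exactly the $c_{\min}^{-2}$ inflation seen in the bound.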

Finally, considering the definition of $T_r$ in equation (40), with probability at least $1 - \delta / (Kr(r+1))$, we have

$|\hat{\mu}_i(r) - \mu_i| \le \varepsilon_r.$ (EC.11.7)

Thus, with the standard result of the G-optimal design, we still have

$C_{\delta/K}(r) := \varepsilon_r,$ (EC.11.8)

with which the events $\mathcal{E}_1$ and $\mathcal{E}_2$ in Section EC.5.1 hold with probability at least $1 - \delta$.

EC.11.2 Step 2: Bound the Expected Sample Complexity

Lemma EC.1

For $i \in G_\varepsilon$, if $r \ge \left\lceil \log_2\!\left( \frac{4}{\varepsilon - \Delta_i} \right) \right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin G_r] \mid \mathcal{E}_1 \right] = 0$.

Lemma EC.2

For $i \in G_\varepsilon^c$, if $r \ge \left\lceil \log_2\!\left( \frac{4}{\Delta_i - \varepsilon} \right) \right\rceil$, then we have $\mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}[i \notin B_r] \mid \mathcal{E}_1 \right] = 0$.

Lemma EC.3

$R_{\text{GLM}} = R_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon} \right\rceil \right\}$ is the round where all the classifications have been finished and the answer is returned under the GLM.

The proofs of Lemmas EC.1, EC.2, and EC.3 closely follow those in Section EC.5.5.

Lemma EC.4

For the expected sample complexity under the high-probability event $\mathcal{E}_1$, we have

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_{\text{GLM}} \mid \mathcal{E}_1 \right] \le c \max\left\{ \frac{256 d}{\alpha_\varepsilon^2 c_{\min}^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\alpha_\varepsilon} \right), \frac{256 d}{\beta_\varepsilon^2 c_{\min}^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\beta_\varepsilon} \right) \right\} + \frac{d(d+1)}{2} R_{\text{GLM}},$ (EC.11.9)

where $c$ is a universal constant, $c_{\min}$ is the known constant lower-bounding the first-order derivative of the inverse link function, and $R_{\text{GLM}} = R_{\text{upper}} = \max\left\{ \left\lceil \log_2 \frac{4}{\alpha_\varepsilon} \right\rceil, \left\lceil \log_2 \frac{4}{\beta_\varepsilon} \right\rceil \right\}$ is the round where all the classifications have been finished and the answer is returned under the GLM.

Proof EC.5

Proof. We also consider the decomposition of $T$ in equation (EC.5.26), where all expectations are conditioned on the high-probability event $\mathcal{E}_1$, given by

$\mathbb{E}_{\boldsymbol{\mu}}\left[ T_{\text{GLM}} \mid \mathcal{E}_1 \right] \le \sum_{r=1}^{\infty} \mathbb{E}_{\boldsymbol{\mu}}\left[ \mathbb{1}\left[ G_r \cup B_r \ne [K] \right] \mid \mathcal{E}_1 \right] \sum_{\boldsymbol{a} \in \mathcal{A}(r-1)} T_r(\boldsymbol{a})$
$\le \sum_{r=1}^{R_{\text{GLM}}} \left( \frac{d\, 2^{2r+1}}{c_{\min}^2} \log\!\left( \frac{2Kr(r+1)}{\delta} \right) + \frac{d(d+1)}{2} \right)$
$\le \frac{d(d+1)}{2} R_{\text{GLM}} + 2 c_{\min}^{-2} d \log\!\left( \frac{2K}{\delta} \right) \sum_{r=1}^{R_{\text{GLM}}} 2^{2r} + 4 c_{\min}^{-2} d \sum_{r=1}^{R_{\text{GLM}}} 2^{2r} \log(r+1)$
$\le 4 c_{\min}^{-2} \log\!\left[ \frac{2K}{\delta} \left( R_{\text{GLM}} + 1 \right) \right] \sum_{r=1}^{R_{\text{GLM}}} d\, 2^{2r} + \frac{d(d+1)}{2} R_{\text{GLM}}$ (EC.11.10)
$\le c \max\left\{ \frac{256 d}{\alpha_\varepsilon^2 c_{\min}^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\alpha_\varepsilon} \right), \frac{256 d}{\beta_\varepsilon^2 c_{\min}^2} \log\!\left( \frac{2K}{\delta} \log_2 \frac{16}{\beta_\varepsilon} \right) \right\} + \frac{d(d+1)}{2} R_{\text{GLM}},$ (EC.11.11)

where $c$ is a universal constant. Then, letting $\xi = \min\left( \alpha_\varepsilon, \beta_\varepsilon \right) / 16$ denote the minimum gap of the problem instance, we have

$\mathbb{E}\left[ T_{\text{GLM}} \mid \mathcal{E}_1 \right] = \mathcal{O}\left( \frac{d}{c_{\min}^2}\, \xi^{-2} \log\!\left( \frac{K}{\delta} \log_2\left( \xi^{-2} \right) \right) + d^2 \log\left( \xi^{-1} \right) \right).$ (EC.11.12)

\Halmos

Appendix EC.12 Detailed Settings for Synthetic Experiments

We recap the figure here for better clarity.

Figure EC.12.1: Illustration of Synthetic Settings. (a) Synthetic I - Adaptive Setting; (b) Synthetic II - Static Setting.

EC.12.1 Synthetic I - Adaptive Setting.

First, we randomly sample $\tilde{m}$, representing the number of $\varepsilon$-best arms, from a distribution with expected value $m$ (used as input for a top-$m$ algorithm). Next, we randomly sample $\tilde{X}$, representing the best-arm reward minus $\varepsilon$, from a distribution with expected value $X$ (used as input for a threshold bandit algorithm). We then assign $\tilde{m}$ $\varepsilon$-best arms with expected rewards uniformly distributed between $\tilde{X} + \varepsilon$ and $\tilde{X}$. Additionally, we assign $(1.5m - \tilde{m})$ arms that are not $\varepsilon$-best with expected rewards uniformly distributed between $\tilde{X}$ and $\tilde{X} - \varepsilon$, as illustrated in Figure EC.12.1.

Based on these designed arm rewards, we define the linear model parameter as

$\boldsymbol{\theta} = \left( \tilde{X} + \varepsilon,\ \tilde{X} + (\tilde{m} - 1)\varepsilon/\tilde{m},\ \ldots,\ \tilde{X} + \varepsilon/\tilde{m},\ 0,\ \ldots,\ 0 \right)^\top.$

Arms are the $d$-dimensional canonical basis vectors $e_1, e_2, \ldots, e_d$ together with $(1.5m - \tilde{m})$ additional disturbing arms

$\boldsymbol{x}_i = \left( \frac{\tilde{X} - (1.5m - \tilde{m} - i)\varepsilon/(1.5m - \tilde{m})}{\tilde{X} + \varepsilon},\ 0,\ \cdots,\ 0,\ \sqrt{1 - \left( \frac{\tilde{X} - (1.5m - \tilde{m} - i)\varepsilon/(1.5m - \tilde{m})}{\tilde{X} + \varepsilon} \right)^2} \right)^\top$

with $i \in [1.5m - \tilde{m}]$.

In the adaptive setting, pulling one arm can provide information about the distributions of other arms. The optimal policy in this setting should adaptively refine its sampling and stopping strategy based on historical data. This allows the algorithm to focus more on the disturbing arms, making adaptive strategies particularly effective as the algorithm progresses. In our experiments, we set $m = 4$, $X = 1$, with $d = 10$ and $\varepsilon \in \{0.1, 0.2, 0.3\}$. A total of six different problem instances are evaluated to compare the performance of the algorithms.

EC.12.2 Synthetic II - Static Setting.

We consider a static synthetic setting, similar to the one proposed by Xu et al. (2018), where arms are represented as $d$-dimensional canonical basis vectors $e_1, e_2, \ldots, e_d$. We set the parameter vector $\boldsymbol{\theta} = (\Delta, \ldots, \Delta, 0, \ldots, 0)^\top$, where $\tilde{m}$ elements are $\Delta$ and $d - \tilde{m}$ elements are $0$. In this setting, $\mathbb{E}[\tilde{m}] = m$, and only the value $m$ is provided as input to top-$m$ algorithms. Consequently, the true mean values consist of some $\Delta$'s and some $0$'s.

If we set $\varepsilon = \Delta/2$, then as $\Delta$ approaches $0$, it becomes difficult to distinguish between the $\varepsilon$-best arms and the arms that are not $\varepsilon$-best. In the static setting, knowledge of the rewards does not alter the sampling strategy, as all arms must be estimated with equal accuracy to effectively differentiate between them. Therefore, a static policy is optimal in this case, and the goal of this setting is to assess the ability of our algorithm to adapt to such static conditions. In our experiment, we set $m = 4$ with $d \in \{8, 12, 16\}$ and $\Delta = 1$. A total of three different problem instances are evaluated to compare the algorithms.
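A short sketch of the static instance, recovering the $\varepsilon$-best set for one configuration (here $\tilde{m} = m$ for simplicity, a hypothetical realization):

```python
import numpy as np

m, d, Delta = 4, 8, 1.0
theta = np.concatenate([np.full(m, Delta), np.zeros(d - m)])
eps = Delta / 2
rewards = theta                      # arms are the canonical basis e_1..e_d
best = rewards.max()
eps_best = np.flatnonzero(rewards >= best - eps)
print(list(eps_best))                # exactly the m arms with mean Delta
```

Since every arm's mean is either $\Delta$ or $0$, the $\varepsilon$-best set is determined by a single gap, and no arm is more informative than another.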

Appendix EC.13Auxiliary Results

The following lemma shows that matrix inversion reverses the order relation.

Lemma EC.1

(Inversion Reverses Loewner Orders) Let A, B ∈ ℝ^{d×d} be symmetric matrices. Suppose that A ⪰ B and B is positive definite (hence A is as well). Then

A^{-1} ⪯ B^{-1}.   (EC.13.1)

Proof EC.2

Proof. By definition, to show that B^{-1} − A^{-1} is a positive semi-definite matrix, it suffices to show that ‖x‖_{B^{-1}}^2 − ‖x‖_{A^{-1}}^2 = ‖x‖_{B^{-1} − A^{-1}}^2 ≥ 0 for any x ∈ ℝ^d. By the Cauchy–Schwarz inequality,

‖x‖_{A^{-1}}^2 = ⟨x, A^{-1}x⟩ ≤ ‖x‖_{B^{-1}} ‖A^{-1}x‖_B ≤ ‖x‖_{B^{-1}} ‖A^{-1}x‖_A = ‖x‖_{B^{-1}} ‖x‖_{A^{-1}},   (EC.13.2)

where the second inequality uses B ⪯ A. Hence ‖x‖_{A^{-1}} ≤ ‖x‖_{B^{-1}} for all x, which completes the lemma. \Halmos
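Lemma EC.1 is easy to sanity-check numerically. The snippet below is an illustrative sketch (the dimension and the construction of A and B are arbitrary choices): it builds B ≻ 0 and A = B + (PSD term), so A ⪰ B, and verifies that B^{-1} − A^{-1} has no negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# B positive definite; A = B + (PSD term), so A >= B in Loewner order.
M = rng.standard_normal((d, d))
B = M @ M.T + np.eye(d)
P = rng.standard_normal((d, d))
A = B + P @ P.T

# Lemma EC.1 predicts B^{-1} - A^{-1} is positive semi-definite.
diff = np.linalg.inv(B) - np.linalg.inv(A)
min_eig = np.linalg.eigvalsh(diff).min()
print(min_eig >= -1e-10)  # True
```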

The following lemma establishes an upper bound on the ratio between two optimization problems that incorporate instance-specific information from the bandit setting.

Lemma EC.3

We always have 𝒜_I = [K], i.e., the entire set of arms is under consideration. For any arm i ∈ 𝒜_I ∩ G_ε ∖ {1}, we have

min_{p ∈ S_K} max_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2 / min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2 ≤ d L_1 / (𝒢_𝒴^2 L_2).   (EC.13.3)

Proof EC.4

Proof. For any arm i ∈ [K], from a geometric perspective, let conv(𝒜 ∪ −𝒜) denote the convex hull of the symmetric set 𝒜 ∪ −𝒜. Then, for any set 𝒴 ⊂ ℝ^d, define the gauge of 𝒴 as

𝒢_𝒴 = max{ c ≥ 0 : c𝒴 ⊆ conv(𝒜 ∪ −𝒜) }.   (EC.13.4)

We then provide a natural upper bound for min_{p ∈ S_K} max_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2, given by

min_{p ∈ S_K} max_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2
  ≤ min_{p ∈ S_K} max_{y ∈ 𝒴(𝒜_I)} ‖y‖_{V_p^{-1}}^2
  = (1/𝒢_𝒴^2) min_{p ∈ S_K} max_{y ∈ 𝒴(𝒜_I)} ‖𝒢_𝒴 y‖_{V_p^{-1}}^2
  ≤ (1/𝒢_𝒴^2) min_{p ∈ S_K} max_{a ∈ conv(𝒜 ∪ −𝒜)} ‖a‖_{V_p^{-1}}^2
  = (1/𝒢_𝒴^2) min_{p ∈ S_K} max_{i ∈ 𝒜_I} ‖a_i‖_{V_p^{-1}}^2
  ≤ d/𝒢_𝒴^2,   (EC.13.5)

where the second equality follows from the fact that the maximum value of a convex function on a convex set must occur at a vertex, and the last inequality follows from the Kiefer–Wolfowitz theorem for the G-optimal design (Lemma EC.5).

Furthermore, for any arm i ∈ 𝒜_I ∩ G_ε ∖ {1}, we have

min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2
  ≥ min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} eig_min(V_p^{-1}) ‖y_{1,i}‖_2^2
  = min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} (1/eig_max(V_p)) ‖y_{1,i}‖_2^2
  ≥ (1/max_{i ∈ 𝒜_I} ‖a_i‖_2^2) min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_2^2,   (EC.13.6)

where eig_max(·) and eig_min(·) are, respectively, the largest and smallest eigenvalues of a matrix. The first inequality follows from the Rayleigh quotient and the Rayleigh theorem. The last line is derived from the relationship eig_max(V_p) ≤ max_{i ∈ 𝒜_I} ‖a_i‖_2^2. Recalling the assumption in Theorem 4.2 that min_{i ∈ G_ε ∖ {1}} ‖a_1 − a_i‖_2^2 ≥ L_2 and the assumption in Section 2 that ‖a_i‖_2^2 ≤ L_1 for all i ∈ [K], we have

min_{p ∈ S_K} min_{i ∈ 𝒜_I ∩ G_ε ∖ {1}} ‖y_{1,i}‖_{V_p^{-1}}^2 ≥ L_2/L_1.   (EC.13.7)

Finally, combining inequalities (EC.13.5) and (EC.13.7) completes the lemma. \Halmos
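The eigenvalue relationship used in the last step, eig_max(V_p) ≤ max_{i ∈ 𝒜_I} ‖a_i‖_2^2, can be checked numerically. The snippet below is a sketch with an arbitrary random arm set and Dirichlet-sampled design (both illustrative assumptions, not from the paper's experiments).

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 20, 6
arms = rng.standard_normal((K, d))   # rows are the arm vectors a_i
p = rng.dirichlet(np.ones(K))        # a design p in the simplex S_K

V_p = (arms.T * p) @ arms            # V_p = sum_i p_i a_i a_i^T
lam_max = np.linalg.eigvalsh(V_p).max()
bound = (np.linalg.norm(arms, axis=1) ** 2).max()

print(lam_max <= bound + 1e-12)  # True
```

The bound holds because eig_max(V_p) ≤ Σ_i p_i ‖a_i‖_2^2 ≤ max_i ‖a_i‖_2^2 for any design p in the simplex.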

Lemma EC.5 (Kiefer and Wolfowitz (1960))

If the arm vectors a ∈ 𝒜 span ℝ^d, then for a probability distribution π* ∈ 𝒫(𝒜), the following statements are equivalent:

1. π* minimizes the function g(π) = max_{a ∈ 𝒜} ‖a‖_{V(π)^{-1}}^2.

2. π* maximizes the function f(π) = log det V(π).

3. g(π*) = d.

Additionally, there exists a minimizer π* of g(π) such that the size of its support, |Supp(π*)|, is at most d(d+1)/2.
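In the special case where 𝒜 consists of the canonical basis vectors (as in the Synthetic II setting above), the uniform design is G-optimal by symmetry, and item 3 can be verified directly; the snippet below is a minimal numerical sketch of that check.

```python
import numpy as np

d = 6
arms = np.eye(d)          # canonical basis arms span R^d
pi = np.ones(d) / d       # uniform design; G-optimal here by symmetry

V = (arms.T * pi) @ arms  # V(pi) = sum_a pi(a) a a^T = I/d
V_inv = np.linalg.inv(V)
g = max(a @ V_inv @ a for a in arms)

print(round(g, 6))  # 6.0, i.e., g(pi*) = d, matching item 3
```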

Appendix References

Abbasi-Yadkori Y, Pál D, Szepesvári C (2011) Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems 24.

Abe N, Long PM (1999) Associative reinforcement learning using linear probabilistic concepts. ICML, 3–11 (Citeseer).

Ahn D, Shin D (2020) Ordinal optimization with generalized linear model. 2020 Winter Simulation Conference (WSC), 3008–3019 (IEEE).

Ahn D, Shin D, Zeevi A (2024) Feature misspecification in sequential learning problems. Management Science.

Azizi MJ, Kveton B, Ghavamzadeh M (2021a) Fixed-budget best-arm identification in contextual bandits: A static-adaptive algorithm. CoRR abs/2106.04763.

Azizi MJ, Kveton B, Ghavamzadeh M (2021b) Fixed-budget best-arm identification in structured bandits. arXiv preprint arXiv:2106.04763.

Chaloner K, Verdinelli I (1995) Bayesian experimental design: A review. Statistical Science 273–304.

Chapelle O, Li L (2011) An empirical evaluation of Thompson sampling. Advances in Neural Information Processing Systems 24.

Fiez T, Jain L, Jamieson KG, Ratliff L (2019) Sequential experimental design for transductive linear bandits. Advances in Neural Information Processing Systems 32.

Filippi S, Cappe O, Garivier A, Szepesvári C (2010) Parametric bandits: The generalized linear case. Advances in Neural Information Processing Systems 23.

Gabillon V, Ghavamzadeh M, Lazaric A (2012) Best arm identification: A unified approach to fixed budget and fixed confidence. Advances in Neural Information Processing Systems 25.

Garivier A, Kaufmann E (2016) Optimal best arm identification with fixed confidence. Conference on Learning Theory, 998–1027 (PMLR).

Ghosh A, Chowdhury SR, Gopalan A (2017) Misspecified linear bandits. Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.

Hoffman M, Shahriari B, Freitas N (2014) On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. Artificial Intelligence and Statistics, 365–374 (PMLR).

Kaufmann E, Cappé O, Garivier A (2016) On the complexity of best arm identification in multi-armed bandit models. Journal of Machine Learning Research 17:1–42.

Kaufmann E, Koolen WM (2021) Mixture martingales revisited with applications to sequential tests and confidence intervals. Journal of Machine Learning Research 22(1):11140–11183.

Kiefer J, Wolfowitz J (1960) The equivalence of two extremum problems. Canadian Journal of Mathematics 12:363–366.

Kveton B, Zaheer M, Szepesvari C, Li L, Ghavamzadeh M, Boutilier C (2023) Randomized exploration in generalized linear bandits.

Lattimore T, Szepesvári C (2020) Bandit Algorithms (Cambridge University Press).

Lattimore T, Szepesvari C, Weisz G (2020) Learning with good feature representations in bandits and in RL with a generative model.

Li L, Chu W, Langford J, Schapire RE (2010) A contextual-bandit approach to personalized news article recommendation. Proceedings of the 19th International Conference on World Wide Web, 661–670, WWW '10 (New York, NY, USA: Association for Computing Machinery).

McCullagh P (2019) Generalized Linear Models (Routledge).

Qin C, You W (2025) Dual-directed algorithm design for efficient pure exploration. Operations Research.

Réda C, Tirinzoni A, Degenne R (2021) Dealing with misspecification in fixed-confidence linear top-m identification. Advances in Neural Information Processing Systems 34:25489–25501.

Russo DJ, Van Roy B, Kazerouni A, Osband I, Wen Z, et al. (2018) A tutorial on Thompson sampling. Foundations and Trends in Machine Learning 11(1):1–96.

Soare M, Lazaric A, Munos R (2014) Best-arm identification in linear bandits. Advances in Neural Information Processing Systems 27.

Wang PA, Tzeng RC, Proutiere A (2021) Fast pure exploration via Frank-Wolfe. Advances in Neural Information Processing Systems 34:5810–5821.

Xu L, Honda J, Sugiyama M (2018) A fully adaptive algorithm for pure exploration in linear bandits. International Conference on Artificial Intelligence and Statistics, 843–851 (PMLR).

Yang J, Tan VY (2021) Minimax optimal fixed-budget best arm identification in linear bandits. arXiv preprint arXiv:2105.13017.
