Achieving Minimax Rates in Pool-Based Batch Active Learning
Claudio Gentile$^{1}$ Zhilei Wang$^{2}$ Tong Zhang$^{1\,3}$
Abstract
We consider a batch active learning scenario where the learner adaptively issues batches of points to a labeling oracle. Sampling labels in batches is highly desirable in practice due to the smaller number of interactive rounds with the labeling oracle (often human beings). However, batch active learning typically pays the price of a reduced adaptivity, leading to suboptimal results. In this paper we propose a solution which requires a careful trade-off between the informativeness of the queried points and their diversity. We theoretically investigate batch active learning in the practically relevant scenario where the unlabeled pool of data is available beforehand (pool-based active learning). We analyze a novel stage-wise greedy algorithm and show that, as a function of the label complexity, the excess risk of this algorithm matches the known minimax rates in standard statistical learning settings. Our results also exhibit a mild dependence on the batch size. These are the first theoretical results that employ careful trade-offs between informativeness and diversity to rigorously quantify the statistical performance of batch active learning in the pool-based scenario.
1. Introduction
The aim of Active Learning is to reduce the data requirement of training processes through the careful selection of informative subsets of the data across several interactive rounds. This increased interactive power enables the adaptation of the sampling process to the actual state of the learning algorithm at hand, yet this benefit comes at the price of frequent re-training of the model and increased interactions with the labeling oracle (which is often just a pool of human labelers).
$^1$Google Research, New York $^2$Citadel Securities, New York $^3$The Hong Kong University of Science and Technology, Hong Kong. Correspondence to: Zhilei Wang zhilei-wang92@gmail.com.
Proceedings of the $39^{th}$ International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
The batch mode of active learning is one where labels are queried in batches of suitable size, and the models are re-trained/updated either after each batch or even less frequently. This sampling mode often corresponds to the way labels are gathered in practical large-scale processing pipelines.
Batch active learning tries to strike a reasonable balance between the benefits of adaptivity and the costs associated with interaction and re-training. Yet, since the sampling is split into batches, and model updates can only be performed at the end of each batch, a batch active learning algorithm has to prevent, to the extent possible, the sampling of redundant points. The standard trade-off that arises is then to ensure that the sampled points are informative enough for the model, if taken in isolation, while at the same time being diverse enough so as to avoid sampling redundant labels.
We study batch active learning in the pool-based model, where an unlabeled pool of data is made available to the algorithm beforehand, and the goal is to single out a subset of the data so as to achieve the same statistical performance as if training were carried out on the entire pool. In this setting, we describe and analyze novel algorithms that obtain minimax rates of convergence of their excess risk as a function of the number of requested labels. Interestingly enough, these optimal rates are retained even if we allow the batch size to grow with the pool size, the actual trade-off being ruled by the amount of noise in the data. Another appealing aspect is that our algorithms guarantee a number of re-training rounds which is at worst logarithmic, while being able to automatically adapt to the level of noise.
We operate in specific realizable settings, starting with linear or generalized linear models, and then extending our results to the more general non-linear setting. Unlike what is traditionally done by many algorithmic solutions to active learning available in the literature (e.g., (Balcan et al., 2007; Balcan & Long, 2013; Zhang & Li, 2021)), we do not formulate strong assumptions on the input distribution. We establish careful trade-offs between the informativeness and the diversity of the queried labels, and rigorously quantify the statistical performance of batch active learning in a noisy pool-based setting. To our knowledge, these are the first guarantees of this kind that apply to a noisy (hence
realistic) batch pool-based active learning scenario. See also the related work contained in Section 3.
1.1. Content and contributions
Our contributions can be described as follows.
- We present an efficient algorithm for pool-based batch active learning for noisy linear models (Algorithm 1). This algorithm generates pseudo-labels by computing sequences of linear classifiers that restrict their attention to exponentially small regions of the margin space, and then trains a single model based on the pseudo-labels only. The design inspiring the sampling within each stage is a G-optimal design, computed through a greedy strategy. We show (Theorem 4.1) that under the standard i.i.d. assumption of the (input, label) pairs, the model so trained enjoys an excess risk bound with respect to the Bayes optimal predictor which is best possible, when expressed in terms of the total number of requested labels. The number of re-training stages (that is, the number of linear classifiers computed to generate pseudo-labels) is at most logarithmic in the pool size, and automatically adapts to the noise level without knowing it in advance.
- Since the above algorithm does not operate on a constant batch size $B$ , we show in Section 4.2 an easy adaptation to the constant batch size case, and make the observation that $B$ therein may also scale as $T^{\beta}$ , for some exponent $\beta < 1$ that depends on the amount of noise (see comments surrounding Corollary 4.2), still retaining the above-mentioned optimal rates.
- We extend in Section 5 our results to the generalized linear case (specifically, the logistic case), and point out that restricting to exponentially small regions of the margin space is also beneficial for obtaining bounds with a milder dependence on the loss curvature.
- Last but not least, although we work out the details only for (generalized) linear models, our algorithmic technique can be seen as a skeleton technique that can be applied to more general situations, provided the estimators employed at each stage and the diversity measure guiding the design have matching properties, as briefly discussed in Section 6.
2. Preliminaries and Notation
We denote by $\mathcal{X}$ the input space (e.g., $\mathcal{X} = \mathbb{R}^d$ ), by $\mathcal{Y}$ the output space, and by $\mathcal{D}$ an unknown distribution over $\mathcal{X} \times \mathcal{Y}$ . The corresponding random variables will be denoted by $\mathbf{x}$ and $y$ . We also denote by $\mathcal{D}_{\mathcal{X}}$ the marginal distribution of $\mathcal{D}$ over $\mathcal{X}$ . Given a function $h$ (also called a hypothesis or a
model) mapping $\mathcal{X}$ to $\mathcal{Y}$ , the population loss (often referred to as risk) of $h$ is denoted by $\mathcal{L}(h)$ , and defined as $\mathcal{L}(h) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}[loss(h(\mathbf{x}),y)]$ , where $loss(\cdot ,\cdot):\mathcal{Y}\times \mathcal{Y}\to [0,1]$ is a given loss function. For simplicity of presentation, we restrict ourselves to a binary classification setting with 0-1 loss, so that $\mathcal{Y} = \{-1, + 1\}$ , and $loss(\hat{y},y) = \mathbb{1}\{\hat{y}\neq y\} \in \{0,1\}$ , where $\mathbb{1}\{\cdot \}$ is the indicator function of the predicate at argument. When clear from the surrounding context, we will omit subscripts like “ $(\mathbf{x},y)\sim \mathcal{D}$ ” from probabilities and expectations.
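As a quick numerical illustration of these definitions, the risk $\mathcal{L}(h)$ under the 0-1 loss can be estimated by Monte-Carlo sampling; the distribution and classifier below are toy choices of ours, not objects from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_one_loss(y_hat, y):
    """loss(y_hat, y) = 1{y_hat != y}, for labels in {-1, +1}."""
    return (y_hat != y).astype(float)

# Toy D: x ~ N(0, 1) and P(y = +1 | x) = sigmoid(3x). For this distribution
# h(x) = sgn(x) is the Bayes optimal predictor h*.
x = rng.normal(size=100_000)
p_pos = 1.0 / (1.0 + np.exp(-3.0 * x))
y = np.where(rng.uniform(size=x.size) < p_pos, 1, -1)
h = np.where(x >= 0, 1, -1)

risk = zero_one_loss(h, y).mean()   # Monte-Carlo estimate of L(h)
print(risk)
```

Here the estimate approximates the (non-zero) Bayes risk, since labels are noisy near the decision boundary.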
We are given a class of models $\mathcal{F} = \{f:\mathcal{X}\to [0,1]\}$ and the Bayes optimal predictor $h^* (x) = \operatorname {sgn}(f^* (x) - 1 / 2)$ , where the conditional label probability $f^{*}(\mathbf{x}) = \mathbb{P}(y = 1\mid \mathbf{x})$ is assumed to belong to class $\mathcal{F}$ (the so-called realizability assumption). This assumption is reasonable whenever the model class $\mathcal{F}$ we operate on is wide enough. For instance, a realizability (or quasi-realizability) assumption seems natural in overparameterized settings implemented by nowadays' Deep Neural Networks.
As a simple example, we consider a generalized linear model $f^{*}(\mathbf{x}) = \sigma (\langle \mathbf{w}^{*},\mathbf{x}\rangle)$ , where $\sigma : \mathbb{R} \to [0,1]$ is a suitable sigmoidal function, e.g., $\sigma(z) = \frac{e^z}{1 + e^z}$ , $\mathbf{w}^*$ is an unknown vector in $\mathbb{R}^d$ , with bounded (Euclidean) norm $||\mathbf{w}^*|| \leq R$ for some $R \geq 1$ , and $\langle \cdot, \cdot \rangle$ denotes the usual inner product in $\mathbb{R}^d$ .
Throughout this paper, we adopt the commonly used low-noise condition on the marginal distribution $\mathcal{D}_{\mathcal{X}}$ of Mammen & Tsybakov (1999): there are constants $c > 0$ , $\epsilon_0 \in (0,1]$ and exponent $\alpha \geq 0$ such that for all $\epsilon \in (0,\epsilon_0]$ we have
$$\mathbb{P}\big(|f^{*}(\mathbf{x}) - 1/2| < \epsilon\big) \leq c\,\epsilon^{\alpha}. \qquad (2)$$
Notice, in particular, that $\alpha \to \infty$ gives the so-called hard margin condition $\mathbb{P}\big(|f^{*}(\mathbf{x}) - 1 / 2| < \epsilon \big) = 0$ , while, at the opposite end of the spectrum, exponent $\alpha = 0$ (and $c = 1$ ) corresponds to making no assumptions whatsoever on $\mathcal{D}_{\mathcal{X}}$ . For simplicity, we shall assume throughout that the above low-noise condition holds for $c = 1$ . The noise exponent $\alpha$ and range constant $\epsilon_0$ are typically unknown, and our algorithms will not rely on the prior knowledge of them.
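A quick numerical sanity check of the low-noise condition: below we draw margins $|f^{*}(\mathbf{x}) - 1/2|$ from an artificial distribution of our choosing for which $\mathbb{P}(|f^{*}(\mathbf{x}) - 1/2| < \epsilon) = \epsilon^{\alpha}$ holds with equality (i.e., $c = 1$), and compare empirical frequencies against $\epsilon^{\alpha}$; the exponent value is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0   # illustrative noise exponent (assumption, not from the paper)

# Margins with CDF F(eps) = eps^alpha: if U ~ Uniform[0,1], then U^{1/alpha}
# satisfies P(U^{1/alpha} < eps) = eps^alpha.
margins = rng.uniform(size=200_000) ** (1.0 / alpha)

for eps in (0.1, 0.3, 0.5):
    emp = np.mean(margins < eps)
    print(f"eps={eps}: empirical {emp:.4f} vs eps^alpha {eps**alpha:.4f}")
```

Larger $\alpha$ concentrates the margin distribution away from $1/2$, i.e., fewer hard-to-label points near the decision boundary.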
We are given a class of models $\mathcal{F}$ , and a pool $\mathcal{P}$ of $T$ unlabeled instances $\mathbf{x}_1, \ldots, \mathbf{x}_T \in \mathcal{X}$ , drawn i.i.d. according to a marginal distribution $\mathcal{D}_{\mathcal{X}}$ obeying condition (2) (with $c = 1$ ). The associated labels $y_1, \ldots, y_T \in \mathcal{Y}$ are such that the pairs $(\mathbf{x}_t, y_t)$ , $t = 1, \ldots, T$ , are drawn i.i.d. according to $\mathcal{D}$ , the labels being generated according to the conditional distribution determined by some $f^* \in \mathcal{F}$ . The labels are not initially revealed to us, and the goal of the active learning algorithm is to come up at the end of training with a model
$\widehat{h}:\mathcal{X}\to \mathcal{Y}$ whose excess risk $\mathcal{L}(\widehat{h}) - \mathcal{L}(h^{*})$ is as small as possible, while querying as few labels as possible in $\mathcal{P}$ .
The way labels are queried follows the standard batch active learning protocol. We are given a batch size $B \geq 1$ . Label acquisition and learning proceed in a sequence of stages, $\ell = 1,2,\ldots$ . At each stage $\ell$ , the algorithm is allowed to query $B$ -many labels by only relying on labels acquired in the past $\ell - 1$ stages. Notice that each point $\mathbf{x}_t$ in pool $\mathcal{P}$ can only be queried once, which is somehow equivalent to assuming that the noise in the corresponding label $y_t$ is persistent. We shall henceforth denote by $N_T(\mathcal{P})$ the total number of labels (sometimes referred to as label complexity) queried by the algorithm at hand on pool $\mathcal{P}$ , and by $N_{T,B}(\mathcal{P})$ the same quantity if we want to emphasize the dependence on $B$ .
The analysis of our algorithms hinges upon a suitable measure of diversity, $D(\mathbf{x}, S)$ , that quantifies how far off a data point $\mathbf{x} \in \mathcal{X}$ is from a finite set of points $S \subseteq \mathcal{X}$ . Though many diversity measures may be adopted for practical purposes (e.g., (Wei et al., 2015; Sener & Savarese, 2018; Kirsch et al., 2019; Ash et al., 2020; Killamsetty et al., 2020; Kirsch et al., 2021; Citovsky et al., 2021)), the one enabling tight theoretical analyses for our algorithms is a spectral-like diversity measure defined in the finite dimensional case $\mathcal{X} = \mathbb{R}^d$ as $D(\mathbf{x}, S) = \langle \mathbf{x}, \mathbf{x} \rangle_{A_S^{-1}}^{\frac{1}{2}} = \|\mathbf{x}\|_{A_S^{-1}} = \sqrt{\mathbf{x}^\top A_S^{-1}\mathbf{x}}$ , that is, the Mahalanobis norm of $\mathbf{x}$ w.r.t. the positive definite matrix $A_S^{-1}$ , where $A_S = I + \sum_{\mathbf{z} \in S} \mathbf{z}\mathbf{z}^\top$ , and $I$ is the $d \times d$ identity matrix. Notice that $D(\mathbf{x}, S)$ is large when $\mathbf{x}$ is aligned with eigenvectors of $A_S$ associated with small eigenvalues, while it is small if $\mathbf{x}$ is aligned with eigenvectors associated with large eigenvalues. In particular, $D(\mathbf{x}, S)$ achieves its maximal value $\|\mathbf{x}\|$ when $\mathbf{x}$ is orthogonal to the space spanned by $S$ . Hence, $\mathbf{x}$ is "very different" from $S$ as measured by $D(\mathbf{x}, S)$ if $\mathbf{x}$ contributes a direction of the input space which is not already spanned by $S$ . We denote by $|A_S|$ the determinant of matrix $A_S$ .
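The diversity measure $D(\mathbf{x}, S)$ is a few lines of linear algebra; here is a direct implementation (the example vectors are ours, chosen to exhibit the two extreme behaviors just described):

```python
import numpy as np

def diversity(x, S):
    """Mahalanobis diversity D(x, S) = ||x||_{A_S^{-1}},
    with A_S = I + sum_{z in S} z z^T."""
    A = np.eye(len(x))
    for z in S:
        A += np.outer(z, z)
    return float(np.sqrt(x @ np.linalg.solve(A, x)))

S = [np.array([1.0, 0.0, 0.0])] * 5        # five copies of e_1
x_new = np.array([0.0, 1.0, 0.0])          # orthogonal to span(S)
x_old = np.array([1.0, 0.0, 0.0])          # direction already covered by S

print(diversity(x_new, S))   # = ||x_new|| = 1.0 (maximal value)
print(diversity(x_old, S))   # = 1/sqrt(6), heavily discounted
```

A point orthogonal to the span of $S$ keeps its full norm, while a point along an already well-sampled direction is shrunk by the accumulated eigenvalue of $A_S$.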
At an intuitive level, since the label requests are batched, and model updates are typically performed only at the end of each stage, a batch active learning algorithm is compelled to operate within each stage by trading off the (predicted) informativeness of the selected labels against the diversity of the data points whose labels are requested. Moreover, the larger the batch size $B$ the less adaptive the algorithm is forced to be, hence we expect $B$ to somehow play a role in the performance of the algorithm.
From a practical standpoint, there are indeed two separate notions of adaptivity to consider. One is the number of interactive rounds with the labeling oracle, the other is the number of times we re-train (or update) a model based on the labels gathered during the interactive rounds. The two notions need not coincide. While the former essentially accounts for the cost of interacting with human labelers, the
latter is more related to the cost of re-training/updating a (potentially very complex) learning system.
3. Related Work
While experimental studies on batch active learning have been reported since the early 2000s (see, e.g., (Hoi et al., 2006)), it is only with the deployment at scale of Deep Neural Networks that we have seen a general resurgence of interest in active learning, and batch active learning in particular. The batch pool-based model studied here is the one that has spurred the widest attention, as it corresponds to the way labels are gathered in practice in large-scale processing pipelines. This interest has generated a flurry of recent investigations, mainly of experimental nature, yet containing a lot of interesting and diverse approaches to batch active learning. Among these are (Gu et al., 2012; 2014; Sener & Savarese, 2018; Kirsch et al., 2019; Zhdanov, 2019; Shui et al., 2020; Ash et al., 2020; Kim et al., 2020; Killamsetty et al., 2020; Kirsch et al., 2021; Ghorbani et al., 2021; Citovsky et al., 2021; Kothawade et al., 2021).
On the theoretical side, active learning is a well-studied subfield of statistical learning. General references in pool-based active learning include (Dasgupta, 2004; 2005; Hanneke, 2014; Nowak, 2011; Tosh & Dasgupta, 2017), and specific algorithms for half-spaces under classes of input distributions are contained, e.g., in (Balcan et al., 2007; Balcan & Long, 2013; Zhang & Li, 2021). However, none of these papers tackle the practically relevant scenario of batch active learning. In fact, restricting to theoretical aspects of batch active learning makes the research landscape far less populated. Below we briefly summarize what we think are among the most relevant papers to our work, as directly related to batch active learning, and then mention recent efforts in contiguous fields, like adaptive sampling and subset selection, which may serve as a general reference and inspiration.
Batch active learning in the pool-based scenario is one of the motivating applications in (Chen & Krause, 2013), where the main concern is to investigate general conditions under which a batch greedy policy achieves similar performance as the optimal policy that operates with the same batch size. Yet, the authors consider simple noise-free scenarios, while the important observation (Theorem 2 therein) that a batch greedy algorithm is also competitive with respect to an optimal fully sequential policy (batch size one) does not apply to active learning. Chen et al. (2015; 2017) are along similar lines, with the addition of persistent noise, but do not tackle batch active learning problems.
A paper with a similar aim as ours, yet operating in the streaming setting of active learning, is (Amin et al., 2020). The authors show that some classes of fully sequential active learning algorithms can be turned into algorithms that query labels in batches and suffer only an additive (times log factors) overhead in the label complexity. This transformation is essentially obtained by freezing the state of the fully sequential algorithm, but it is unclear whether any notion of diversity over the batch is enforced by the resulting batch algorithms.
Very recent stream-based active learning papers that are worth mentioning are (Katz-Samuels et al., 2021; Camilleri et al., 2021b). These papers share similar methods and modeling assumptions as ours in leveraging optimal design, but they do not deal with batch active learning. The main concern there is essentially to improve the performance of adaptive sampling by reducing the variance of the estimators.
A learning problem similar to pool-based batch active learning is training subset selection (sometimes called dataset summarization), whose goal is to come up with a compressed version of a (big) dataset that offers to a given learning algorithm the same inference capabilities as if applied to the original dataset. The problem can be organized in rounds (as in batch active learning) and bridging one to the other can in practice be done by label hallucination/pseudo-labeling. Representative works include (Wei et al., 2015; Killamsetty et al., 2020; Borsos et al., 2021).
4. The Linear Case
We start off by considering a simple linear model of the form $f^{*}(\mathbf{x}) = \frac{1 + \langle\mathbf{w}^{*},\mathbf{x}\rangle}{2}$ , where both $\mathbf{w}^*$ and $\mathbf{x}$ lie in the $d$ -dimensional Euclidean unit ball (so that $\langle \mathbf{w}^*,\mathbf{x}\rangle \in [-1,1]$ and $f^{*}(\mathbf{x})\in [0,1]$ ). Algorithm 1 contains in a nutshell the main ideas behind our algorithmic solutions, which is to greedily approximate a G-optimal design in the selection of points at each stage. The way it is formulated, Algorithm 1 does not operate with a constant batch size $B$ per stage. We will reduce to the constant batch size case in Section 4.2.
The algorithm takes as input a finite pool of points $\mathcal{P}$ of size $T$ and proceeds across stages $\ell = 1,2,\ldots$ by generating at each stage $\ell$ a (linear-threshold) predictor $\mathrm{sgn}(\langle \mathbf{w}_{\ell},\mathbf{x}\rangle)$ , where $\mathbf{w}_{\ell}$ is a ridge regression estimator computed only on the labeled pairs $(\mathbf{x}_{\ell ,1},y_{\ell ,1}),\dots ,(\mathbf{x}_{\ell ,T_{\ell}},y_{\ell ,T_{\ell}})$ collected during that stage. These predictors are used to trim the current pool $\mathcal{P}_{\ell -1}$ by eliminating both the points on which $\mathbf{w}_{\ell}$ is itself confident (set $\mathcal{C}_{\ell}$ ) and those whose labels have just been queried (set $\mathcal{Q}_{\ell}$ ). At each stage $\ell$ , the points $\mathbf{x}_{\ell ,t}$ to query are selected in a greedy fashion by maximizing $D(\mathbf{x},\mathcal{Q}_{\ell}) = \|\mathbf{x}\|_{A_{\ell ,t - 1}^{-1}}$ over the current pool $\mathcal{P}_{\ell -1}$ (excluding the already selected points $\mathcal{Q}_{\ell}$ , which are contained in $A_{\ell ,t - 1}$ ), so as to make $\mathbf{x}_{\ell ,t}$ maximally different from $\mathcal{Q}_{\ell}$ .
Algorithm 1: Pool-based batch active learning algorithm for linear models.
1 Input: Confidence level $\delta \in (0,1]$ , pool of instances $\mathcal{P} \subseteq \mathbb{R}^d$ of size $|\mathcal{P}| = T$
2 Initialize: $\mathcal{P}_0 = \mathcal{P}$
3 for $\ell = 1,2,\ldots$
4 Initialize within stage $\ell$ : $t = 0$ , $\mathcal{Q}_{\ell} = \emptyset$ , $\mathcal{C}_{\ell} = \emptyset$ , $A_{\ell,0} = I$ , and set the (exponentially decaying) threshold $\epsilon_{\ell}$
while $\mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell} \neq \emptyset$ and $\max_{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}}\|\mathbf{x}\|_{A_{\ell ,t}^{-1}} > \epsilon_{\ell}$
- Set $t \leftarrow t + 1$ and greedily pick $\mathbf{x}_{\ell,t} = \operatorname{argmax}_{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}}\|\mathbf{x}\|_{A_{\ell ,t-1}^{-1}}$
- Set $\mathcal{Q}_{\ell} \leftarrow \mathcal{Q}_{\ell}\cup \{\mathbf{x}_{\ell,t}\}$ and $A_{\ell,t} = A_{\ell,t-1} + \mathbf{x}_{\ell,t}\mathbf{x}_{\ell,t}^{\top}$
Set $T_{\ell} = t$ , the number of queries made in stage $\ell$
if $\mathcal{Q}_{\ell}\neq \emptyset$
- Query the labels $y_{\ell,1}, \ldots, y_{\ell,T_{\ell}}$ associated with the unlabeled data in $\mathcal{Q}_{\ell}$ , and compute the ridge regression estimator $\mathbf{w}_{\ell} = A_{\ell,T_{\ell}}^{-1}\sum_{t=1}^{T_{\ell}} y_{\ell,t}\,\mathbf{x}_{\ell,t}$
- Set $\mathcal{C}_{\ell} = \{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}:|\langle \mathbf{w}_{\ell},\mathbf{x}\rangle | > 2^{-\ell}\}$
- Compute pseudo-labels on each $\mathbf{x} \in \mathcal{C}_{\ell}$ as $\hat{y} = \operatorname{sgn}\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle$
Set $\mathcal{P}_{\ell} = \mathcal{P}_{\ell -1}\backslash (\mathcal{C}_{\ell}\cup \mathcal{Q}_{\ell})$
if $d / 2^{-\ell +1} > 2^{-\ell +1}|\mathcal{P}_{\ell}|$
- Exit the for-loop ( $L$ is the total no. of stages)
5 Predict labels in pool $\mathcal{P}$ :
- Train an SVM classifier $\widehat{\mathbf{w}}$ on $\cup_{\ell=1}^{L} \mathcal{C}_{\ell}$ via the generated pseudo-labels $\hat{y}$
- Predict on each $\mathbf{x} \in (\cup_{\ell=1}^{L} \mathcal{Q}_{\ell}) \cup \mathcal{P}_{L}$ through $\operatorname{sgn}(\langle \widehat{\mathbf{w}}, \mathbf{x} \rangle)$
When stage $\ell$ terminates, we are guaranteed that we have collected a set of points $\mathcal{Q}_{\ell}$ such that all remaining points $\mathbf{x}$ in the pool satisfy $D(\mathbf{x},\mathcal{Q}_{\ell})\leq \epsilon_{\ell}$ . Threshold $\epsilon_{\ell}$ , defined at the beginning of the stage, is exponentially decaying with $\ell$ . It is this threshold that determines the actual length of the stage, and rules the elimination of unqueried points from the pool, along with the corresponding generation of pseudo-labels during the stage.
Algorithm 1 stops generating new stages when the size $|\mathcal{P}_{\ell}|$ of pool $\mathcal{P}_{\ell}$ triggers the condition $d / 2^{-\ell +1} > 2^{-\ell +1}|\mathcal{P}_{\ell}|$ (which is satisfied, in particular, when $\mathcal{P}_{\ell}$ becomes empty). In that case, the current stage $\ell$ becomes the final stage $L$ .
Finally, the algorithm uses the subset of points $\cup_{\ell=1}^{L}\mathcal{C}_{\ell}$ and the associated pseudo-labels $\hat{y}$ generated during the $L$ stages to train a linear classifier $\widehat{\mathbf{w}}$ (e.g., an SVM) to zero empirical error on that subset. Our analysis (see Appendix A) shows that with high probability such a consistent linear classifier exists. Each point $\mathbf{x}$ that remains in the pool, that is, each $\mathbf{x} \in (\cup_{\ell=1}^{L}\mathcal{Q}_{\ell}) \cup \mathcal{P}_{L}$ , is assigned label $\mathrm{sgn}(\langle \widehat{\mathbf{w}},\mathbf{x}\rangle)$ . Notice, in particular, that $\widehat{\mathbf{w}}$ is not trying to fit the queried labels of $\cup_{\ell=1}^{L}\mathcal{Q}_{\ell}$ , but only the pseudo-labels of $\cup_{\ell=1}^{L}\mathcal{C}_{\ell}$ .
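To make the stage-wise mechanics concrete, here is a compact Python sketch of this scheme. It is a hedged illustration, not the paper's exact procedure: the threshold schedule $\epsilon_\ell = 2^{-\ell}$, the plain ridge estimator, and the final least-squares fit (standing in for the SVM) are illustrative choices of ours, and the stopping rule is simplified to a fixed stage budget:

```python
import numpy as np

def batch_active_learn(pool, query_label, max_stages=4):
    """Stage-wise greedy (approximate G-optimal design) active learner."""
    d = pool.shape[1]
    remaining = list(range(len(pool)))
    pseudo_X, pseudo_y = [], []
    for ell in range(1, max_stages + 1):
        eps = 2.0 ** (-ell)                 # illustrative threshold schedule
        A, Q = np.eye(d), []
        # Greedy design: repeatedly select the point most "diverse"
        # w.r.t. those already chosen in this stage.
        while True:
            cand = [i for i in remaining if i not in Q]
            if not cand:
                break
            Ainv = np.linalg.inv(A)
            scores = [np.sqrt(pool[i] @ Ainv @ pool[i]) for i in cand]
            if max(scores) <= eps:
                break
            j = cand[int(np.argmax(scores))]
            Q.append(j)
            A += np.outer(pool[j], pool[j])
        if not Q:
            break
        ys = [query_label(i) for i in Q]    # one batch of label queries
        w = np.linalg.solve(A, sum(y * pool[i] for i, y in zip(Q, ys)))
        # Confident set C_ell: pseudo-label and remove from the pool.
        conf = [i for i in remaining
                if i not in Q and abs(w @ pool[i]) > 2.0 ** (-ell)]
        for i in conf:
            pseudo_X.append(pool[i])
            pseudo_y.append(np.sign(w @ pool[i]))
        remaining = [i for i in remaining if i not in Q and i not in conf]
    # Final linear classifier fit to the pseudo-labels only.
    w_hat, *_ = np.linalg.lstsq(np.array(pseudo_X), np.array(pseudo_y),
                                rcond=None)
    return w_hat

rng = np.random.default_rng(1)
w_star = np.array([1.0, 0.0])
pool = rng.normal(size=(300, 2))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)   # unit-norm inputs
w_hat = batch_active_learn(pool, lambda i: float(np.sign(pool[i] @ w_star)))
```

With noiseless labels from a ground-truth `w_star`, the returned `w_hat` points in roughly the same direction as `w_star`, despite being trained on pseudo-labels only.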
The fact that the algorithm only uses pseudo-labels to train its final predictor may look counter-intuitive at first, but this is due to our proof technique that derives an excess risk bound out of weighted empirical risk bounds — see, e.g., the proof sketch of Theorem 4.1. Algorithmically, the queried labels can be noisy and, in general, we do not know whether they are consistent with the Bayes optimal predictor $\mathbf{w}^*$ . In this sense, the process of generating pseudo-labels can be seen as a label denoising process. This is made possible by our algorithm, which guarantees (with high probability) that for the selected data points, pseudo-labels generated by the model are consistent with those of the Bayes optimal predictor, while labels of other data (including those in the training data) may not.
Further, notice that the final predictor $\hat{\mathbf{w}}$ need not be an SVM. Any training algorithm that returns a linear classifier which is consistent with the pseudo-labels will suffice. From our analysis we know that such a linear classifier has to exist (with high probability). Incidentally, this is the main reason why relying on (denoised) pseudo-labels facilitates our statistical analysis, beyond the algorithms involved.
It is also worth observing how Algorithm 1 resolves the trade-off between informativeness and diversity we alluded to in previous sections. Once we reach stage $\ell$ , what remains in the pool are only the points $\mathbf{x}$ such that $|\langle \mathbf{w}_{\ell - 1}, \mathbf{x} \rangle| \leq 2^{-\ell + 1}$ (this is because we have eliminated in stage $\ell - 1$ all the points in $\mathcal{C}_{\ell - 1}$ ). Hence, the remaining points which the approximate G-optimal design operates with in stage $\ell$ are those which the previous model $\mathbf{w}_{\ell - 1}$ is not sufficiently confident on. The algorithm then puts all
these low-confident points on the same footing (that is, they are considered equally informative if taken in isolation), and then relies on the approximate G-optimal design scheme to maximize diversity among them. The set-wise diversity measure we end up maximizing is indeed a determinant-like diversity measure. This is easily seen from the fact that $\sum_{t=1}^{T_{\ell}}\|\mathbf{x}_{\ell,t}\|_{A_{\ell,t-1}^{-1}}^2 \approx \log |A_{\ell,T_{\ell}}|$ .
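The approximation above rests on the matrix determinant lemma, $|A + \mathbf{x}\mathbf{x}^\top| = |A|\,(1 + \mathbf{x}^\top A^{-1}\mathbf{x})$, which gives the exact identity $\log|A_{\ell,T_\ell}| = \sum_t \log\big(1 + \|\mathbf{x}_{\ell,t}\|^2_{A_{\ell,t-1}^{-1}}\big)$; since $\log(1+u) \approx u$ for small $u$, the sum of squared diversities tracks the log-determinant. A quick numerical check (dimensions and data are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 40
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm points

A = np.eye(d)
sum_sq, sum_log = 0.0, 0.0
for x in X:
    s2 = x @ np.linalg.solve(A, x)   # ||x||^2_{A^{-1}} before adding x
    sum_sq += s2
    sum_log += np.log1p(s2)          # exact increment of log|A| (det lemma)
    A += np.outer(x, x)

logdet = np.linalg.slogdet(A)[1]
print(sum_log, logdet)   # identical up to floating-point rounding
print(sum_sq)            # upper-bounds log|A|; close when each s2 is small
```

Greedily maximizing $\|\mathbf{x}\|_{A^{-1}}$ at each step is thus a greedy surrogate for maximizing the determinant of the accumulated design matrix.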
On one hand, this careful selection of points contributes to keeping the variance of estimator $\mathbf{w}_{\ell}$ under control. On the other hand, the fact that we stop accumulating labels when $\max_{\mathbf{x} \in \mathcal{P}_{\ell-1} \setminus \mathcal{Q}_{\ell}} \|\mathbf{x}\|_{A_{\ell, T_{\ell}}^{-1}} \leq \epsilon_{\ell}$ essentially implies that $\operatorname{sgn}(\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle) = \operatorname{sgn}(\langle \mathbf{w}^{*}, \mathbf{x} \rangle)$ on all points $\mathbf{x}$ we generate pseudo-labels for, which in turn ensures that these pseudo-labels are consistent with $\mathbf{w}^{*}$ .
Sequential experimental design has become popular, e.g., in the (contextual) bandits literature, see Ch. 22 in (Lattimore & Szepesvari, 2020), and is explicitly contained in recent works on best arm identification (e.g., (Fiez et al., 2019; Camilleri et al., 2021a)). Notice that in those works a design is a distribution over the set of actions (which would correspond to pool $\mathcal{P}$ in our case), and the algorithm is allowed to sample a given action $\mathbf{x}_t$ multiple times, obtaining each time a fresh reward value $y_{t}$ such that $\mathbb{E}[y_t\mid \mathbf{x}_t] = \langle \mathbf{w}^*,\mathbf{x}_t\rangle$ . This is not conceivable in a pool-based active learning scenario where label noise is persistent, and each "action" $\mathbf{x}_t$ can only be played once. This explains why the design we rely upon here is necessarily more restrained than in those papers.
4.1. Analysis
The following is the main result of this section.
Theorem 4.1. Let $T \geq d$ and assume that $| \mathbf{x} |_2 \leq 1$ for all $\mathbf{x} \in \mathcal{P}$ . Then with probability at least $1 - \delta$ over the random draw of $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_T, y_T) \sim \mathcal{D}$ the excess risk $\mathcal{L}(\widehat{\mathbf{w}}) - \mathcal{L}(\mathbf{w}^*)$ , the label complexity $N_T(\mathcal{P})$ , and the number of stages $L$ generated by Algorithm 1 are simultaneously upper bounded as follows:
for an absolute constant $\bar{C}$ and
Proof sketch. We first derive a high-probability bound on the weighted empirical risk
and then turn it into an excess risk bound through a uniform convergence argument. In order to bound $R_{T}(\mathcal{P})$ , we partition the points in $\mathcal{P}$ into the three subsets $\cup_{\ell=1}^{L}\mathcal{C}_{\ell}$ , $\cup_{\ell=1}^{L}\mathcal{Q}_{\ell}$ , and $\mathcal{P}_{L}$ , and consider the contribution to $R_{T}(\mathcal{P})$ of each subset separately.
When $\mathbf{x} \in \cup_{\ell=1}^{L} \mathcal{C}_{\ell}$ , we show that the pseudo-labels $\hat{y}$ generated by the algorithm are with high probability consistent with those generated by $\mathbf{w}^*$ , that is, $\operatorname{sgn}\langle \widehat{\mathbf{w}}, \mathbf{x} \rangle = \operatorname{sgn}\langle \mathbf{w}^*, \mathbf{x} \rangle$ , hence those $\mathbf{x}$ do not contribute to the weighted empirical risk.
Any $\mathbf{x} \in \mathcal{Q}_{\ell}$ is shown to contribute to $R_{T}(\mathcal{P})$ by at most $2^{-\ell}$ , thus the overall contribution of $\cup_{\ell=1}^{L} \mathcal{Q}_{\ell}$ can be bounded by $\sum_{\ell=1}^{L} T_{\ell} / 2^{\ell}$ . In turn, by the way points are picked, $T_{\ell}$ is roughly bounded by $d / \epsilon_{\ell}^{2}$ , allowing us to conclude that the total contribution of $\cup_{\ell=1}^{L} \mathcal{Q}_{\ell}$ is bounded by a quantity of the form $d / \epsilon_{L}$ , up to logarithmic factors.
Next, for $\mathbf{x} \in \mathcal{P}_L$ , we show that (with high probability) we must have $|\langle \mathbf{w}^*, \mathbf{x} \rangle| \leq 2^{-L}$ which, combined with the stopping condition defining $L$ , implies an overall contribution of the same form $d / \epsilon_L$ .
Finally, since $L$ is itself a random variable, we need to devise high probability upper bounds on it. We rely on the low noise assumption (2) to conclude that $L$ is with high probability of the form $\max \left\{\frac{1}{\alpha + 2} \log \left(\frac{T}{d}\right), \log \left(\frac{1}{\epsilon_0}\right)\right\}$ , up to constants,
which we replace back into the previous bounds yielding a guarantee of the form
hence an excess risk bound of the form
The analysis of the label complexity $N_{T}(\mathcal{P}) = \sum_{\ell=1}^{L} T_{\ell}$ follows a similar pattern, but it does not require uniform convergence.
4.2. Constant batch size
We now describe a simple modification to Algorithm 1 that makes it work in the constant batch size case. Let us denote by $T_{\ell}$ the length of stage $\ell$ in Algorithm 1. The modified algorithm simply runs Algorithm 1: If $T_{\ell} < B$ the modified algorithm relies on model $\mathbf{w}_{\ell}$ generated by Algorithm 1 without saturating the budget of $B$ labels at that stage. On the contrary, if $T_{\ell} \geq B$ , the modified algorithm splits stage $\ell$ of Algorithm 1 into $\lceil T_{\ell} / B \rceil$ stages of size $B$ (except, possibly, for the last one), and then uses the queried set $\mathcal{Q}_{\ell}$ generated by Algorithm 1 across all those stages. Hence, in this case, the modified algorithm is not exploiting the potential benefit of updating the model every $B$ queried labels. For instance, if $B = 100$ and $T_{\ell} = |\mathcal{Q}_{\ell}| = 240$ , the modified algorithm will split this stage into three successive stages of size 100, 100, and 40, respectively, and then rely on the 240 labels queried by Algorithm 1 across the three stages. In particular, the update of the model $\mathbf{w}_{\ell}$ , and the associated pseudo-label computation on sets $\mathcal{C}_{\ell}$ is only performed at the end of the third stage.
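The splitting rule, together with the "bill $B$ labels per stage" accounting used in the analysis below, can be sketched in a few lines (function names are ours, not the paper's):

```python
def split_into_batches(stage_lengths, B):
    """Turn variable-length stages into constant-size-B stages.

    A stage of length T_ell <= B is kept as one (possibly short) stage;
    a longer stage is split into ceil(T_ell / B) stages of size B
    (except, possibly, the last one)."""
    schedule = []
    for T_ell in stage_lengths:
        if T_ell <= B:
            schedule.append([T_ell])
        else:
            k, r = divmod(T_ell, B)
            schedule.append([B] * k + ([r] if r else []))
    return schedule

def billed_labels(stage_lengths, B):
    """Labels billed when every constant-size stage is charged B labels,
    even if it queries fewer."""
    return sum(B * len(s) for s in split_into_batches(stage_lengths, B))

print(split_into_batches([240], 100))   # [[100, 100, 40]] (the example above)
print(billed_labels([240], 100))        # 300: over-count 60 <= B per stage
```

The per-stage over-count is at most $B$, which is the source of the additive $BL$ term in the label complexity discussion that follows.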
Notice that the modified algorithm we just described is a legitimate pool-based batch active learning algorithm operating on a constant batch size $B$ , and its analysis is a direct consequence of the one in Theorem 4.1, after we take care of the possible over-counting that may arise in the reduction. Specifically, observe that the final hypothesis $\hat{\mathbf{w}}$ produced by the modified algorithm is the same as the one computed by Algorithm 1, hence the same bound on the excess risk applies. As for label complexity, if we stipulate that a batch algorithm operating on a constant batch size $B$ will be billed $B$ labels at each stage even if it ends up querying fewer, then the label complexity of the modified algorithm will overcount the number of labels simply due to the rounding off in $\lceil T_{\ell} / B \rceil$ . However, at each of the $L$ stages of Algorithm 1, the over-counting is bounded by $B$ , so that, overall, the label complexity of the constant batch size variant exceeds the one of Algorithm 1 by at most an additive $BL$ term which, due to the bound on $L$ in Theorem 4.1, is of the form $\max \left\{\frac{B}{\alpha + 2} \log \left(\frac{T}{d}\right), B \log \left(\frac{1}{\epsilon_0}\right)\right\}$ . This is summarized in the following corollary.
Corollary 4.2. With the same assumptions and notation as in Theorem 4.1, with probability at least $1 - \delta$ over the random draw of $(\mathbf{x}_1,y_1),\ldots ,(\mathbf{x}_T,y_T)\sim \mathcal{D}$ the label complexity $N_{T,B}(\mathcal{P})$ achieved by the modified algorithm operating on a batch of size $B$ is bounded as follows:
where $\bar{C}, C(\delta, T, \epsilon_0)$ are the same as in Theorem 4.1.
A few comments are in order.
An important practical aspect of this modified algorithm (inherited from Algorithm 1) is the very mild number of re-trainings required to achieve the claimed performance. Although the total number of labels can be as large as $T^{\frac{2}{\alpha + 2}}$, the number $L$ of times the model is actually re-trained is not $T^{\frac{2}{\alpha + 2}} / B$, but only logarithmic in $T$, irrespective of the noise level $\alpha$ (that is, even when the low-noise assumption (2) is vacuous). On the other hand, it is also important to observe that the bound on $L$ shrinks as $\alpha$ increases, that is, when the problem becomes easier. Overall, these properties make the algorithm attractive in practical learning scenarios where the re-training time turns out to be the main bottleneck in the data acquisition process, and a learning procedure is needed that automatically adapts the re-training effort to the hardness of the problem.
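To illustrate the scaling (constants and logarithmic factors are dropped, so the numbers below are only orders of magnitude, not the exact bounds):

```python
import math

def retraining_rounds(T: int, d: int, alpha: float, eps0: float) -> float:
    # Shape of the bound on L from Theorem 4.1 (absolute constants dropped):
    # logarithmic in T, and shrinking as the noise exponent alpha grows.
    return max(math.log(T / d) / (alpha + 2), math.log(1 / eps0))

def label_budget(T: int, alpha: float) -> float:
    # Order of the total number of queried labels, T^(2 / (alpha + 2)).
    return T ** (2 / (alpha + 2))

T, d, eps0 = 10**6, 10, 0.1
for alpha in (0.0, 1.0, 4.0):
    print(f"alpha={alpha}: ~{label_budget(T, alpha):.0f} labels, "
          f"~{retraining_rounds(T, d, alpha, eps0):.1f} re-trainings")
```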
Let us disregard lower order terms and only consider the asymptotic behavior as $T \to \infty$. Comparing the excess risk bound in Theorem 4.1 to the label complexity bound in Corollary 4.2, one can see that when $B = O(T^{\frac{2}{\alpha + 2}})$ we have with high probability an excess risk of order $N_{T,B}(\mathcal{P})^{-\frac{\alpha + 1}{2}}$, up to logarithmic factors and the dependence on $d$,
which is the minimax rate one can achieve for VC classes$^3$ under the low-noise condition (2) with exponent $\alpha$ (e.g., (Castro & Nowak, 2008; Hanneke, 2009; Koltchinskii, 2010; Dekel et al., 2012)). Hence, in order to achieve high-probability minimax rates, one need not try to make the algorithm more adaptive by having it operate with an even smaller $B$: any $B$ no larger than $T^{\frac{2}{\alpha + 2}}$ will indeed suffice in our learning scenario.
Similar minimax bounds on excess risk against label complexity have been shown in the streaming setting in (Dekel et al., 2012; Wang et al., 2021), though their results only hold in the fully sequential case (that is, $B = 1$) and only hold in expectation over the random draw of the data, not with high probability.
The fact that a batch greedy algorithm can be competitive with a fully sequential policy has also been observed in problems which are similar in spirit to active learning, like influence maximization (see, in particular, (Chen & Krause, 2013)). More recently, in the context of adaptive sequential decision making, Esfandiari et al. (2021) have proposed an efficient semi-adaptive policy that performs logarithmically many rounds of interaction while achieving performance similar to that of the fully sequential policy, improving on the original ideas contained in (Golovin & Krause, 2017). Yet, when adapted to active learning, these results turn out to apply to very stylized scenarios that assume lack of noise in the labels, and/or disregard the computational aspects associated with maintaining a posterior distribution or a version space (which would be of size $O(T^{d})$ in our case).
5. The Logistic Case
We now discuss how to extend the result of the previous section to the logistic case (the generalized linear model (1) with $\sigma(z) = \frac{e^z}{1 + e^z}$ ).
Algorithm 2 is the adaptation to the logistic case of the algorithm of Section 4, the main difference being that we now assume the comparison vector $\mathbf{w}^*$ to lie in a Euclidean ball of (known) radius $R$ , and compute estimators $\mathbf{w}_{\ell}$ as regularized logistic regressors:
where $Loss(\cdot)$ is the logistic loss
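A minimal sketch of such a constrained estimator, using projected gradient descent on the logistic loss (our choice of solver, not necessarily the paper's exact procedure; labels are assumed to be in $\{-1,+1\}$):

```python
import numpy as np

def logistic_loss(w, X, y):
    # Empirical logistic loss, labels y in {-1, +1}.
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def project_ball(w, R):
    # Euclidean projection onto the ball of radius R.
    n = np.linalg.norm(w)
    return w if n <= R else w * (R / n)

def fit_constrained_logistic(X, y, R, lr=0.5, steps=500):
    # Projected gradient descent: a simple stand-in for the constrained
    # minimizer w_ell of Eq. (4), not the paper's exact procedure.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigma(<w, x>)
        grad = X.T @ (p - (y + 1) / 2) / len(y)  # gradient of the loss
        w = project_ball(w - lr * grad, R)
    return w
```

On linearly separable data this drives the loss well below the value $\log 2$ attained at $\mathbf{w} = \mathbf{0}$, while keeping $\|\mathbf{w}\| \leq R$ throughout.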
One of the main concerns in the logistic case is to investigate how excess risk and label complexity bounds depend on the complexity $R$ of the comparison class. The following is the logistic counterpart to Theorem 4.1.
Theorem 5.1. Let $T \geq d$ and assume that $\|\mathbf{x}\|_2 \leq 1$ for all $\mathbf{x} \in \mathcal{P}$. Then with probability at least $1 - \delta$ over the random draw of $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_T, y_T) \sim \mathcal{D}$ the excess risk $\mathcal{L}(\widehat{\mathbf{w}}) - \mathcal{L}(\mathbf{w}^*)$, the label complexity $N_T(\mathcal{P})$, and the number of stages $L$ generated by Algorithm 2 are simultaneously upper
Algorithm 2: Pool-based batch active learning algorithm for logistic models.

Input: Confidence level $\delta \in (0,1]$, pool of instances $\mathcal{P} \subseteq \mathbb{R}^d$ of size $|\mathcal{P}| = T$, upper bound $R > 0$ on $\|\mathbf{w}^{*}\|$.
Initialize: $\mathcal{P}_0 = \mathcal{P}$.
for $\ell = 1,2,\ldots$
    Initialize within stage $\ell$: $\mathcal{Q}_{\ell} = \emptyset$, $t = 0$, $A_{\ell,0} = I$, $R_{\ell} = R2^{-\ell}$
    while $\mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell} \neq \emptyset$ and $\max_{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}}\|\mathbf{x}\|_{A_{\ell ,t}^{-1}} > \epsilon_{\ell}$
        $t = t + 1$
        Select $\mathbf{x}_{\ell,t} = \arg\max_{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}}\|\mathbf{x}\|_{A_{\ell ,t-1}^{-1}}$, set $\mathcal{Q}_{\ell} = \mathcal{Q}_{\ell}\cup \{\mathbf{x}_{\ell,t}\}$ and $A_{\ell,t} = A_{\ell,t-1} + \mathbf{x}_{\ell,t}\mathbf{x}_{\ell,t}^{\top}$
    Set $T_{\ell} = t$, the number of queries made in stage $\ell$
    if $\mathcal{Q}_{\ell}\neq \emptyset$
        Query the labels $y_{\ell,1}, \ldots, y_{\ell,T_\ell}$ associated with the unlabeled data in $\mathcal{Q}_\ell$
        Compute $\mathbf{w}_{\ell}$ as in (4)
        Compute pseudo-labels on each $\mathbf{x} \in \mathcal{C}_{\ell} = \{\mathbf{x}\in \mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}\,:\, |\langle \mathbf{w}_{\ell},\mathbf{x}\rangle| > R_{\ell}\}$ as $\hat{y} = \operatorname{sgn}\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle$
        Set $\mathcal{P}_{\ell} = \mathcal{P}_{\ell -1}\backslash (\mathcal{C}_{\ell}\cup \mathcal{Q}_{\ell})$
    else
        $L = \ell$
        Exit the for-loop ($L$ is the total no. of stages)
Predict labels in pool $\mathcal{P}$:
    Train an SVM classifier $\widehat{\mathbf{w}}$ on $\cup_{\ell=1}^{L} \mathcal{C}_{\ell}$ via the generated pseudo-labels $\hat{y}$
    Predict on each $\mathbf{x} \in (\cup_{\ell=1}^{L} \mathcal{Q}_{\ell}) \cup \mathcal{P}_{L}$ through $\operatorname{sgn}(\langle \widehat{\mathbf{w}}, \mathbf{x} \rangle)$
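The within-stage query loop can be sketched as follows. This is a hedged sketch, not the paper's code: we maintain $A^{-1}$ via Sherman-Morrison rank-one updates, and the greedy argmax selection with the $A \leftarrow A + \mathbf{x}\mathbf{x}^{\top}$ update mirrors the selection criterion referenced in the analysis of Appendix A:

```python
import numpy as np

def select_stage_queries(pool: np.ndarray, eps: float) -> list[int]:
    """Greedy within-stage selection: repeatedly query the point with the
    largest uncertainty ||x||_{A^{-1}}, then update A <- A + x x^T, which
    down-weights directions already covered (this is what enforces
    diversity).  Stop when every remaining point has ||x||_{A^{-1}} <= eps
    or the pool is exhausted."""
    d = pool.shape[1]
    A_inv = np.eye(d)                 # A_{ell,0} = I, kept in inverse form
    queried: list[int] = []
    remaining = set(range(len(pool)))
    while remaining:
        i_best = max(remaining, key=lambda i: pool[i] @ A_inv @ pool[i])
        x = pool[i_best]
        if np.sqrt(x @ A_inv @ x) <= eps:
            break
        queried.append(i_best)
        remaining.remove(i_best)
        # Sherman-Morrison rank-one update of A^{-1} after A <- A + x x^T.
        A_inv -= np.outer(A_inv @ x, x @ A_inv) / (1.0 + x @ A_inv @ x)
    return queried
```

After the loop, all unqueried points satisfy the stopping condition $\|\mathbf{x}\|_{A_{\ell,T_\ell}^{-1}} \leq \epsilon_\ell$, which is exactly the property the analysis relies on.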
bounded as follows:
where $\bar{C}$ is an absolute constant and
In the above bounds, the complexity term $R$ is meant to be a constant. Notice that the dependence on $e^{R}$ is common to many logistic bounds, specifically in the bandits literature. This is due to the nonlinear shape of $\sigma(\cdot)$ (see, e.g., (Filippi et al., 2010; Gentile & Orabona, 2012; Li et al., 2017; Faury et al., 2020), where it takes the form of an upper bound on $1 / \sigma'(\cdot)$). In fact, a closer look at the multiplicative dependence on $e^{R}$ above reveals that this factor multiplies only logarithmic terms in $T$. This is akin to the more refined self-concordant analysis of logistic models contained in (Faury et al., 2020). Since our algorithm focuses attention on exponentially shrinking regions of margin values $\langle \mathbf{w}^*, \mathbf{x} \rangle$ around the origin, we obtain here similar guarantees without resorting to a self-concordant analysis.
A constant batch size version of Algorithm 2 can also be devised, and the associated properties spelled out. The details are very similar to those in Section 4.2, and are therefore omitted.
6. Conclusions and Ongoing Research
We have described and analyzed novel batch active learning algorithms in the pool-based setting that achieve minimax rates of convergence of their excess risk as a function of the number of queried labels. The minimax nature of our results is also retained when the batch size $B$ is allowed to scale polynomially ($B \leq T^{\beta}$, for $\beta \leq 1$) with the size $T$ of the training set, the allowed exponent $\beta$ depending on the actual level of noise in the data. The algorithms require a number of re-training rounds which is at worst logarithmic and automatically adapts to the noise level.
Our algorithms generate pseudo-labels by restricting to exponentially small regions of the margin space. In the logistic case, this has the side benefit of delivering performance bounds where the classical exponential dependence on the complexity of the comparator $\mathbf{w}^*$ occurs as a multiplicative factor only in logarithmic terms.
The logistic algorithm we presented in Section 5 has a suboptimal dependence on the input dimension $d$ (notice the extra factor $d$ contained in $C_1$ in the excess risk bound of Theorem 5.1), and we are currently trying to see if it is possible to achieve the same result as in the linear case. For the logistic case, a more computationally efficient algorithm actually exists that is based on the online Newton step-like analysis in (Gentile & Orabona, 2012). Yet, this algorithm will have a similar suboptimal dependence on $d$ .
Related to the above, we are currently investigating to what extent it is possible to improve the logistic analysis so as to turn the constrained minimization problem therein into an unconstrained one. Analyses we are aware of in the contiguous field of contextual bandits in generalized linear scenarios (e.g., (Li et al., 2017)) do not seem to help, given the strong assumptions on the context distribution they formulate to achieve the optimal dependence on $d$ .
The methods we have presented here are instances of a more general approach to batch active learning in realizable settings where, given a diversity measure $D(\mathbf{x}, S)$ , an estimator $\widehat{f} = \widehat{f}(S)$ in fixed design scenarios exists for which we can guarantee $L_{\infty}$ approximation bounds of the form
For instance, our approach can be seamlessly extended to the case where $f^{*}$ belongs to an RKHS, the algorithmic aspects simply requiring a dual variable formulation of Algorithm 1, and the statistical ones simply resorting to covering number bounds (e.g., (Zhou, 2002)) or empirical versions thereof. As another relevant example, (5) can be shown to hold for known plug-in estimators, like local polynomial estimators (e.g., Sect. 1.6.1 in (Tsybakov, 2009)). Hence our general approach may be extended to those cases as well.
References
Amin, K., Cortes, C., DeSalvo, G., and Rostamizadeh, A. Understanding the effects of batching in online active learning. In Proc. AISTATS, 2020.
Ash, J., Zhang, C., Krishnamurthy, A., Langford, J., and Agarwal, A. Deep batch active learning by diverse, uncertain gradient lower bounds. In International Conference on Learning Representations (ICLR), 2020.
Balcan, M.-F., Broder, A., and Zhang, T. Margin based active learning. In COLT, 2007.
Balcan, M.-F. and Long, P. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
Borsos, Z., Mutny, M., Tagliasacchi, M., and Krause, A. Data summarization via bilevel optimization, 2021.
Camilleri, R., Katz-Samuels, J., and Jamieson, K. High-dimensional experimental design and kernel bandits. In Proc. 38th International Conference on Machine Learning, PMLR 139, 2021a.
Camilleri, R., Xiong, Z., Fazel, M., Jain, L., and Jamieson, K. Selective sampling for online best-arm identification. In Advances in Neural Information Processing Systems, 2021b.
Castro, R. and Nowak, R. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5): 2339-2353, 2008.
Chen, Y. and Krause, A. Near-optimal batch mode active learning and adaptive submodular optimization. In Proceedings of the 30th International Conference on Machine Learning, volume 28(1), pp. 160-168. PMLR, 2013.
Chen, Y., Hassani, S. H., Karbasi, A., and Krause, A. Sequential information maximization: When is greedy near-optimal? In Proc. 28th Conference on Learning Theory, PMLR 40, pp. 338-363, 2015.
Chen, Y., Hassani, S. H., and Krause, A. Near-optimal bayesian active learning with correlated and noisy tests. In Proc. 20th International Conference on Artificial Intelligence and Statistics, 2017.
Citovsky, G., DeSalvo, G., Gentile, C., Karydas, L., Rajagopalan, A., Rostamizadeh, A., and Kumar, S. Batch active learning at scale. In Neurips 2021, 2021.
Dasgupta, S. Analysis of a greedy active learning strategy. In NIPS, 2004.
Dasgupta, S. Coarse sample complexity bounds for active learning. In Advances in neural information processing systems, pp. 235-242, 2005.
Dekel, O., Gentile, C., and Sridharan, K. Selective sampling and active learning from single and multiple teachers. J. Mach. Learn. Res., 13(1), 2012.
Esfandiari, H., Karbasi, A., and Mirrokni, V. Adaptivity in adaptive submodularity. In Proc. 34th Annual Conference on Learning Theory, volume 134, pp. 1-24. PMLR, 2021.
Faury, L., Abeille, M., Calauzènes, C., and Fercoq, O. Improved optimistic algorithms for logistic bandits. In 37th ICML, 2020.
Fiez, T., Jain, L., Jamieson, K. G., and Ratliff, L. Sequential experimental design for transductive linear bandits. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Filippi, S., Cappé, O., Garivier, A., and Szepesvari, C. Parametric bandits: The generalized linear case. In Advances in Neural Information Processing Systems, pp. 586-594, 2010.
Gentile, C. and Orabona, F. On multilabel classification and ranking with partial feedback. In Advances in Neural Information Processing Systems, volume 25, pp. 1151-1159. Curran Associates, Inc., 2012.
Ghorbani, A., Zou, J., and Esteva, A. Data shapley valuation for efficient batch active learning. In arXiv:2104.08312v1, 2021.
Golovin, D. and Krause, A. Adaptive submodularity: A new approach to active learning and stochastic optimization. In arXiv:1003.3967, 2017.
Gu, Q., Zhang, T., Han, J., and Ding, C. Selective labeling via error bound minimization. Advances in neural information processing systems, 25, 2012.
Gu, Q., Zhang, T., and Han, J. Batch-mode active learning via error bound minimization. In UAI, pp. 300-309. CiteSeer, 2014.
Hanneke, S. Adaptive rates of convergence in active learning. In Proc. of the 22th Annual Conference on Learning Theory, 2009.
Hanneke, S. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3): 131-309, 2014.
Hoi, S., Jin, R., Zhu, J., and Lyu, M. R. Batch mode active learning and its application to medical image classification. In ICML, 2006.
Katz-Samuels, J., Zhang, J., Jain, L., and Jamieson, K. Improved algorithms for agnostic pool-based active classification. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 5334-5344. PMLR, 2021.
Killamsetty, K., Sivasubramanian, D., Ramakrishnan, G., and Iyer, R. Glister: Generalization based data subset selection for efficient and robust learning. arXiv preprint arXiv:2012.10630, 2020.
Kim, K., Park, D., Kim, K., and Chun, S. Task-aware variational adversarial active learning. In arXiv:2002.04709v2, 2020.
Kirsch, A., van Amersfoort, J., and Gal, Y. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. In arXiv:1906.08158v2, 2019.
Kirsch, A., Farquhar, S., and Gal, Y. A simple baseline for batch active learning with stochastic acquisition functions. arXiv preprint arXiv:2106.12059, 2021.
Koltchinskii, V. Rademacher complexities and bounding the excess risk of active learning. Journal of Machine Learning Research, 11:2457-2485, 2010.
Kothawade, S., Beck, N., Killamsetty, K., and Iyer, R. Similar: Submodular information measures based active learning in realistic scenarios. In Advances in Neural Information Processing Systems, 2021.
Lattimore, T. and Szepesvari, C. Bandit Algorithms. Cambridge University Press, 2020.
Li, L., Lu, Y., and Zhou, D. Provably optimal algorithms for generalized linear contextual bandits. In International Conference on Machine Learning, pp. 2071-2080. PMLR, 2017.
Mammen, E. and Tsybakov, A. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808-1829, 1999.
Nowak, R. D. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893-7906, 2011.
Sauer, N. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13:145-147, 1972.
Sener, O. and Savarese, S. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1aIuk-RW.
Shui, C., Zhou, F., Gagné, C., and Wang, B. Deep active learning: Unified and principled method for query and training. In Proc. AISTATS, 2020.
Tosh, C. and Dasgupta, S. Diameter-based active learning. In Thirty-fourth International Conference on Machine Learning (ICML), 2017.
Tsybakov, A. Introduction to Nonparametric Estimation. Springer, 2009.
Wang, Z., Awasthi, P., Dann, C., Sekhari, A., and Gentile, C. Neural active learning with performance guarantees. In Advances in Neural Information Processing Systems 34, 2021.
Wei, K., Iyer, R., and Bilmes, J. Submodularity in data subset selection and active learning. In International Conference on Machine Learning, pp. 1954-1963. PMLR, 2015.
Zhang, C. and Li, Y. Improved algorithms for efficient active learning halfspaces with massart and tsybakov noise. In COLT, 2021.
Zhdanov, F. Diverse mini-batch active learning. In arXiv:1901.05954v1, 2019.
Zhou, D.-X. The covering number in learning theory. Journal of Complexity, 18:739-767, 2002.
A. Proofs for Section 4
Consider Algorithm 1, and denote by $T_{\ell}$ the length of stage $\ell$. For any $\epsilon > 0$, we denote the set of low-margin pool points $\mathcal{T}_{\epsilon} = \{\mathbf{x}\in \mathcal{P}\,:\,|\langle \mathbf{w}^{*},\mathbf{x}\rangle|\leq \epsilon\}$.
Recall that in Algorithm 1 variable $L$ counts the total number of stages (a random quantity), while the size of the original pool $|\mathcal{P}|$ is denoted by $T$ .
We first show that on the confident sets, that is, on the sets $\mathcal{C}_{\ell}$ where pseudo-labels are generated, the learner incurs with high probability no regret. Before giving our key lemma, it will be useful to define the events $\mathcal{E}_{\ell} = \bigl\{|\langle \mathbf{w}_{\ell} - \mathbf{w}^{*},\mathbf{x}\rangle|\leq 2^{-\ell}\ \text{for all}\ \mathbf{x}\in \mathcal{P}_{\ell -1}\bigr\}$ for $\ell = 1,\ldots ,L$.
Lemma A.1. For any positive $L$, $\mathbb{P}\bigl(\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}\bigr)\geq 1 - \delta$.
Proof. We assume $\mathcal{P}_{\ell -1}\backslash \mathcal{Q}_{\ell}$ is not empty (it could be empty only in the final stage $L$). We follow the material contained in Chapters 20 and 21 of Lattimore & Szepesvari (2020). Let $\xi_{\ell ,t} = y_{\ell ,t} - \langle \mathbf{w}^*,\mathbf{x}_{\ell ,t}\rangle$ and notice that the $\xi_{\ell ,t}$ are independent 1-sub-Gaussian random variables conditioned on $\mathcal{P}_{\ell -1}$. Also, observe that, conditioned on past stages $1,\ldots ,\ell -1$, we are in a fixed design scenario, where the $\mathbf{x}_{\ell ,t}$ are chosen without knowledge of the corresponding labels $y_{\ell ,t}$. We can write, for any $\mathbf{x}\in \mathcal{P}_{\ell -1}$,
Since $\{\xi_{\ell ,t}\}_{t = 1}^{T_{\ell}}$ are 1-sub-Gaussian and independent conditioned on $\{\mathbf{x}_{\ell ,t}\}$, the variance term $\sum_{t = 1}^{T_{\ell}}\langle \mathbf{x}_{\ell ,t},\mathbf{x}\rangle_{A_{\ell ,T_{\ell}}^{-1}}\xi_{\ell ,t}$ is $\sqrt{\sum_{t = 1}^{T_{\ell}}\langle\mathbf{x}_{\ell,t},\mathbf{x}\rangle_{A_{\ell,T_{\ell}}^{-1}}^2}$-sub-Gaussian. We apply Lemma C.5
Now observe that
We plug back into the previous inequality to obtain
Using a union bound, we get with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$
holds uniformly for all $\mathbf{x} \in \mathcal{P}_{\ell-1}$. For the bias term $\langle \mathbf{w}^*, \mathbf{x} \rangle_{A_{\ell, T_\ell}^{-1}}$, notice that $A_{\ell, T_\ell} \succeq I$ implies
Hence with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$
holds uniformly for all $\mathbf{x} \in \mathcal{P}_{\ell-1}$ .
Notice that by the selection criterion in Algorithm 1, $\max_{\mathbf{x} \in \mathcal{P}_{\ell - 1} \setminus \mathcal{Q}_{\ell}} \|\mathbf{x}\|_{A_{\ell, T_\ell}^{-1}}^{2} \leq \epsilon_{\ell}^{2}$. As a consequence, with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$,
Recalling the definition of $\epsilon_{\ell}$ in Algorithm 1 and using a union bound over $\ell$, we get the desired result.
As a simple consequence, we have the following lemma.
Lemma A.2. Assume $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ holds. Then Algorithm 1 generates pseudo-labels such that, on all points $\mathbf{x}\in \cup_{\ell = 1}^{L}\mathcal{C}_{\ell}$, $\operatorname {sgn}(\langle \mathbf{w}_\ell ,\mathbf{x}\rangle) = \operatorname {sgn}(\langle \mathbf{w}^*,\mathbf{x}\rangle)$.
Proof. Simply observe that if $\mathbf{x} \in \cup_{\ell=1}^{L} \mathcal{C}_{\ell}$ is such that $\operatorname{sgn}(\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle) = 1$ then $\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle > 2^{-\ell}$, which implies $\langle \mathbf{w}^*, \mathbf{x} \rangle > 0$ by the assumption that $\mathcal{E}_{\ell}$ holds. Similarly, $\operatorname{sgn}(\langle \mathbf{w}_{\ell}, \mathbf{x} \rangle) = -1$ implies $\langle \mathbf{w}^*, \mathbf{x} \rangle < 0$.
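The argument reduces to a one-line numeric fact, which we can spell out (the margin values below are illustrative, and `pseudo_label_agrees` is our name):

```python
def pseudo_label_agrees(m_hat: float, m_star: float, level: int) -> bool:
    """One point of Lemma A.2: m_hat = <w_ell, x>, m_star = <w*, x>.
    If the estimation error is at most 2^-ell (event E_ell) and the point
    is confident (|m_hat| > 2^-ell), the two signs must agree."""
    thr = 2.0 ** (-level)
    assert abs(m_hat - m_star) <= thr   # event E_ell holds at x
    assert abs(m_hat) > thr             # x belongs to the confident set C_ell
    return (m_hat > 0) == (m_star > 0)

assert pseudo_label_agrees(0.30, 0.08, level=2)     # both signs positive
assert pseudo_label_agrees(-0.40, -0.20, level=2)   # both signs negative
```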
Lemma A.3. The length $T_{\ell}$ of stage $\ell$ in Algorithm 1 is (deterministically) upper bounded as
Proof. Since in stage $\ell$ the algorithm terminates at $T_{\ell}$ , any round $t < T_{\ell}$ is such that
Denoting by $|\cdot|$ the determinant of its matrix argument, we have the known identity
where the third equality holds since $I + A_{\ell, t}^{-1/2} \mathbf{x}_{\ell, t+1} \mathbf{x}_{\ell, t+1}^{\top} A_{\ell, t}^{-1/2}$ has $d - 1$ eigenvalues equal to 1 and one eigenvalue equal to $1 + \|\mathbf{x}_{\ell, t+1}\|_{A_{\ell, t}^{-1}}^{2}$.
Combining the above equality with the fact that $\log (1 + x) \geq \frac{x}{1 + x} \geq \frac{x}{2}$ for $0 \leq x \leq 1$ , we get
Therefore,
Summing over $t = 0,\dots ,T_{\ell} - 1$ yields
Now, $A_{\ell ,0} = I$ , so that $|A_{\ell ,0}| = 1$ , and
yields
Let $G(x) = \frac{x}{\log(1 + x)}$ , and notice that $G(x)$ is increasing for $x > 0$ . We have
where the second inequality holds since $\epsilon_{\ell} \leq \epsilon_{1} < \frac{1}{4}$ .
As a consequence,
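Before moving on, the matrix determinant identity used in the proof above, $|A + \mathbf{x}\mathbf{x}^{\top}| = |A|\,(1 + \|\mathbf{x}\|_{A^{-1}}^{2})$, can be checked numerically (a numpy sketch, our choice of tool):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)     # symmetric positive definite, like A_{ell,t}
x = rng.normal(size=d)

# Matrix determinant lemma: |A + x x^T| = |A| (1 + x^T A^{-1} x).
lhs = np.linalg.det(A + np.outer(x, x))
rhs = np.linalg.det(A) * (1.0 + x @ np.linalg.inv(A) @ x)
assert np.isclose(lhs, rhs)
```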
The proof of Theorem 4.1 then proceeds by bounding two relevant quantities associated with the behavior of Algorithm 1: the label complexity
and the weighted cumulative regret over pool $\mathcal{P}$ of size $T$ , defined as
We will first present intermediate bounds on $R_{T}(\mathcal{P})$ and $N_{T}(\mathcal{P})$ as a function of $L$ , and then rely on the properties of the noise (hence the randomness on $\mathcal{P}$ ) to complete the proofs. To simplify the math display we denote
so that $\epsilon_{\ell} = \frac{1}{2^{\ell}K_{T}(\delta,\ell)}$
Lemma A.3 immediately delivers the following bound on $N_T(\mathcal{P})$ :
Theorem A.4. For any pool realization $\mathcal{P}$ , the label complexity $N_{T}(\mathcal{P})$ of Algorithm 1 operating on a pool $\mathcal{P}$ of size $T$ is bounded deterministically as
Proof. By definition
where the second inequality follows from the fact that both $\frac{1}{\epsilon_{\ell}}$ and $K_{T}(\delta, \ell)$ increase with $\ell$ , and the last inequality follows from $\sum_{\ell=1}^{L} 4^{\ell} < \frac{4}{3} 4^{L}$ .
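The final geometric-sum step, $\sum_{\ell=1}^{L} 4^{\ell} < \frac{4}{3}\,4^{L}$, is easy to verify directly:

```python
# Sum of the geometric series: sum_{l=1}^{L} 4^l = (4^{L+1} - 4) / 3,
# which is strictly below (4/3) * 4^L for every L >= 1.
for L in range(1, 20):
    assert sum(4**l for l in range(1, L + 1)) < (4 / 3) * 4**L
```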
As for the regret $R_{T}(\mathcal{P})$ , we have the following high probability result.
Theorem A.5. For any pool realization $\mathcal{P}$ , the weighted cumulative regret $R_{T}(\mathcal{P})$ of Algorithm 1 operating on a pool $\mathcal{P}$ of size $T$ is bounded as
assuming $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ holds.
Proof. We decompose the pool $\mathcal{P}$ as the union of following disjoint sets
and, correspondingly, the weighted cumulative regret $R_{T}(\mathcal{P})$ as the sum of the three components
Assume $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ holds. First, notice that on $\mathcal{C}_{\ell}$
under the assumption that $\mathcal{E}_{\ell}$ holds, so that points in $\cup_{\ell = 1}^{L}\mathcal{C}_{\ell}$ contribute no weighted regret for $\widehat{\mathbf{w}}$, i.e.,
Next, on $\mathcal{P}_L$ , we have $|\langle \mathbf{w}_L,\mathbf{x}\rangle |\leq 2^{-L}$ . Combining this with the assumption that $\mathcal{E}_L$ holds, we get $|\langle \mathbf{w}^{*},\mathbf{x}\rangle |\leq 2^{-L + 1}$ which implies that the weighted cumulative regret on $\mathcal{P}_L$ is bounded as
the second inequality deriving from the stopping condition defining $L$ in Algorithm 1.
Finally, on the queried points $\cup_{\ell=1}^{L}\mathcal{Q}_{\ell}$, it is unclear whether $\operatorname{sgn}\langle \widehat{\mathbf{w}},\mathbf{x}\rangle = \operatorname{sgn}\langle \mathbf{w}^*,\mathbf{x}\rangle$ or not, so we bound the weighted cumulative regret contribution of each data item $\mathbf{x}$ therein by $|\langle \mathbf{w}^*,\mathbf{x}\rangle|$. Now, by construction, $\mathbf{x} \in \mathcal{Q}_{\ell} \subset \mathcal{P}_{\ell-1}$, so that $|\langle \mathbf{w}_{\ell-1},\mathbf{x}\rangle| \leq 2^{-\ell+1}$ which, combined with the assumption that $\mathcal{E}_{\ell-1}$ holds, yields $|\langle \mathbf{w}^*,\mathbf{x}\rangle| \leq 2^{-\ell+2}$. Since $|\mathcal{Q}_{\ell}| = T_{\ell}$, we have
and Lemma A.3 allows us to write
the last inequality following from a reasoning similar to the one that led us to Theorem A.4.
Given any pool realization $\mathcal{P}$ , both the label complexity and weighted regret are bounded by a function of $L$ . Adding the ingredient of the low noise condition (2) helps us leverage the randomness in $\mathcal{P}$ and further bound from above the number of stages $L$ .
Specifically, assume the low noise condition (2) holds for $f^{*}(\mathbf{x}) = \frac{1 + \langle\mathbf{w}^{*},\mathbf{x}\rangle}{2}$, for some unknown exponent $\alpha \geq 0$ and unknown constant $\epsilon_0 \in (0,1]$. Using a multiplicative Chernoff bound, it is easy to see that for any fixed $\epsilon_{*}$, with probability at least $1 - \delta$,
the probability being over the random draw of the initial pool $\mathcal{P}$ . Now, since $\epsilon_L$ is itself a random variable (since so is $L$ ), we need to resort to a covering argument. For any positive number $M$ , consider the following set of fixed $\epsilon$ values
Then with probability at least $1 - \delta$
holds simultaneously over $\epsilon \in \mathcal{K}_M$. Set $M = \log_2 T$ and let $\epsilon$ be the smallest value in $\mathcal{K}_M$ that is bigger than or equal to $\epsilon_{*}$. If $\epsilon$ is not the smallest value in $\mathcal{K}_M$, then by construction we have $\epsilon_{*}^{\alpha} \leq \epsilon^{\alpha} < 2\epsilon_{*}^{\alpha}$ so that, for all $\epsilon_{*} > \frac{\epsilon_{0}}{2^{M / \alpha}}$,
On the other hand if $\epsilon_{*} \leq \frac{\epsilon_{0}}{2^{M / \alpha}}$ we can write
making Eq. (6) hold in this case as well.
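The grid sandwich used in this covering argument can be checked directly. We assume the omitted definition of $\mathcal{K}_M$ is a grid whose $\alpha$-th powers decrease geometrically with ratio $\frac{1}{2}$ (an assumption consistent with the factor-2 gap above):

```python
alpha, eps0, M = 1.5, 1.0, 20
# Assumed form of K_M: epsilon values whose alpha-th powers halve at each step.
grid = [eps0 * 2 ** (-m / alpha) for m in range(M + 1)]

def cover(eps_star: float) -> float:
    """Smallest grid value >= eps_star."""
    return min(e for e in grid if e >= eps_star)

for eps_star in (0.9, 0.37, 0.05):
    e = cover(eps_star)
    # The sandwich from the text: eps_*^alpha <= e^alpha < 2 eps_*^alpha.
    assert eps_star**alpha <= e**alpha < 2 * eps_star**alpha
```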
We define the event
Then
for $M = \log_2T$
We set $\epsilon_{*}$ to be the unique solution of the equation
Eq. (6) will be applied, in particular, to the margin value $2^{-L + 2}$ when $2^{-L + 2} \leq \epsilon_0$ .
Armed with Eqs. (6) and (8) with $M = \log_2 T$ , we prove a lemma that upper bounds the number of stages $L$ .
Lemma A.6. Let $\epsilon_{*}$ be defined through (8), with $T > \frac{2}{3} d$ . Assume both $\bar{\mathcal{E}}$ and $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ hold. Then the number of stages $L$ of Algorithm 1 is upper bounded as
Here the $O$ -notation only omits absolute constants.
Proof. If at stage $L - 1$ the algorithm has not stopped, then we must have
Notice that if $\mathbf{x} \in \mathcal{P}_{L-1}$ then $|\langle \mathbf{w}_{L-1}, \mathbf{x} \rangle| \leq 2^{-L+1}$. Combining it with the assumption that $\mathcal{E}_{L-1}$ holds, we have $|\langle \mathbf{w}^*, \mathbf{x} \rangle| \leq 2^{-L+2}$, which implies $|\mathcal{P}_{L-1}| \leq |\mathcal{T}_{2^{-L+2}}|$.
We split the analysis into two cases. On one hand, when $2^{-L + 2} > \epsilon_0$ , this condition gives us directly
On the other hand if $2^{-L + 2}\leq \epsilon_0$ , then given $\bar{\mathcal{E}}$ holds, $|\mathcal{T}_{2^{-L + 2}}|$ is upper bounded as
with $M = \log_2T$ . Plugging into the first display results in
which resembles (8) with $2^{-L + 2}$ here playing the role of $\epsilon_{*}$ therein. Then, from the definition of $\epsilon_{*}$ in (8) we immediately obtain $2^{-L + 2} \geq \epsilon_{*}$, thus $L \leq \log_2\left(\frac{1}{\epsilon_{*}}\right) + 2$. Moreover, from (8) we see that $d / \epsilon_{*} \geq 3T\epsilon_{*}^{\alpha + 1}$, which is equivalent to $\epsilon_{*} \leq \left(\frac{d}{3T}\right)^{\frac{1}{\alpha + 2}}$. Replacing this upper bound on $\epsilon_{*}$ back into the right-hand side of (8) and dividing by $d$ yields
which gives the claimed upper bound on $L$ through $L \leq \log_2\left(\frac{1}{\epsilon_*}\right) + 2$ .
Corollary A.7. Let $T > d$ . Then with probability at least $1 - 2\delta$ over the random draw of $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_T, y_T) \sim \mathcal{D}$ the label complexity $N_T(\mathcal{P})$ and the weighted cumulative regret $R_T(\mathcal{P})$ of Algorithm 1 simultaneously satisfy the following:
where the $O$ -notation only omits absolute constants.
Proof. Assume both $\bar{\mathcal{E}}$ and $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ hold. Recalling the definition of $K_{T}(\delta ,L)$, we have
As in Lemma A.6, we split the analysis into two cases depending on whether or not $2^{-L + 2}$ is bigger than $\epsilon_0$. If $2^{-L + 2} \leq \epsilon_0$, we have
therefore,
Plugging the above bounds into Theorem A.4 gives
Similarly applying them to Theorem A.5,
where in the second equality we used the assumption that $d < T$ .
If $2^{-L + 2} > \epsilon_0$ , then $2^L \leq \frac{4}{\epsilon_0}$ . Plugging these bounds into Theorem A.4 and Theorem A.5 gives
Lastly, (7) and Lemma A.1 together yield
which concludes the proof.
We now turn the bound on the weighted cumulative regret $R_{T}(\mathcal{P})$ in the previous corollary into a bound on the excess risk. We can write
where $\widehat{\mathbf{w}}$ is the hypothesis returned by Algorithm 1. Now, simply observe that
has the same form as the function $\phi (\widehat{\mathbf{w}},\mathbf{x})$ in Appendix C on which the uniform convergence result of Theorem C.4 applies, with $\widehat{\epsilon} (\delta)$ therein replaced by the bound on $R_{T}(\mathcal{P})$ borrowed from Corollary A.7. This allows us to conclude that with probability at least $1 - \delta$
as claimed in Theorem 4.1 in the main body of the paper.
B. Proofs for Section 5
We adopt the same notation as in Section A and follow the same proof structure.
Define the loss function
and the sigmoidal function
The noise model in the main body of the paper can be re-formulated as follows: there exists an unknown vector $\mathbf{w}^*$ belonging to a Euclidean ball of radius $R\geq 1$ such that for any instance $\mathbf{x}$ of Euclidean norm at most 1,
Therefore we have
and the noise variable $\xi$ can be written as
Similar to the linear case, we denote for any $\epsilon > 0$ ,
Now, recall the notation in Algorithm 2. Similarly to the events $\mathcal{E}_{\ell}$ defined in the linear case, it will be useful to define the events
where $R_{\ell} = R2^{-\ell}$ for $\ell = 0,\dots ,L$
Lemma B.1. For any positive $L$, $\mathbb{P}\bigl(\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}^{\sigma}\bigr)\geq 1 - \delta$.
Proof. We decompose the above quantity as
and bound each factor individually.
At the beginning of stage $\ell$, the remaining pool is $\mathcal{P}_{\ell - 1}$, and $\sup_{\mathbf{x} \in \mathcal{P}_{\ell - 1}} |\langle \mathbf{w}_{\ell - 1}, \mathbf{x} \rangle| \leq R_{\ell - 1}$.
For $\ell \geq 2$ , if $\mathcal{E}_{\ell -1}$ holds then
Note that (9) also holds for $\ell = 1$ since $\|\mathbf{w}^*\| \leq R$ and $\|\mathbf{x}\| \leq 1$.
Now, for any positive number $b$ , let
which is a convex compact set of $\mathbf{w}$ 's.
The predictor $\mathbf{w}_{\ell}$ in Eq. (4) in the main body is defined as the solution of the following constrained minimization problem:
For simplicity, from now on we omit the stage index $\ell$ from the subscripts of $\mathbf{x}_{\ell,t}$ and $y_{\ell,t}$, and denote $A_{\ell,T_\ell}$ by $A_\ell$. For $t = 1,\dots,T_\ell$, denote
Notice that by definition
where $\xi_{t}$ is the noise term $\xi_{t} = y_{t} - \mathbb{E}[y_{t}|\mathbf{x}_{t}]$ . Since $\cosh (\cdot)$ is an even function,
does not depend on $y_{t}$
Since $\mathbf{w}^{*}\in \Omega_{\ell}(2R_{\ell -1})$ (as a consequence of (9)), the assumption that $\mathcal{E}_{\ell -1}$ holds and the optimality of $\mathbf{w}_{\ell}$ in $\Omega_{\ell}(2R_{\ell -1})$ allow us to write
where
It follows that
For each $t = 1,\ldots ,T_{\ell}$, the mean-value theorem ensures the existence of a constant $\mu_{\ell}^{t}\in [0,1]$ such that for
we have
Since
we have
Introduce now the matrix
where $I$ is the $d\times d$ identity matrix. We can write
As a consequence, (10) implies
We thus obtain
where in the second inequality we used $H_{\ell} \succeq \frac{1}{4} e^{-4R_{\ell}}A_{\ell}$ .
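The curvature bound $H_{\ell} \succeq \frac{1}{4} e^{-4R_{\ell}}A_{\ell}$ traces back to a pointwise lower bound on the derivative of the sigmoid, which we record here for completeness (with $z$ a generic margin value):
$$\sigma'(z) = \sigma(z)\left(1 - \sigma(z)\right) = \frac{1}{\left(e^{z/2} + e^{-z/2}\right)^2} = \frac{1}{4\cosh^2(z/2)} \geq \frac{1}{4}e^{-|z|},$$
the last step using $\cosh(u) \leq e^{|u|}$. When all margins involved are bounded in absolute value by $4R_{\ell}$, each rank-one term contributing to $H_{\ell}$ is thus scaled by a factor of at least $\frac{1}{4}e^{-4R_{\ell}}$, consistent with the matrix inequality above.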
To bound $\|\mathbf{g}(\mathbf{w}^{*})\|_{A_{\ell}^{-1}}$, note that
We plug $A = [A_{\ell}^{-1/2}\mathbf{x}_1,\ldots ,A_{\ell}^{-1/2}\mathbf{x}_{T_\ell}]$ and $\xi = (\xi_1,\dots ,\xi_{T_\ell})$ into Lemma C.6 and get, with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$,
Thus for any $\mathbf{x} \in \mathcal{P}_{\ell - 1} \setminus \mathcal{Q}_{\ell}$, we obtain that with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$:
Recalling the definition of $\epsilon_{\ell}$ in Algorithm 2, we have, with probability at least $1 - \frac{\delta}{\ell(\ell + 1)}$
that is, $\mathbb{P}\big(\mathcal{E}_{\ell}\,\big|\,\bigcap_{s = 1}^{\ell -1}\mathcal{E}_{s}^{\sigma}\big)\geq 1 - \frac{\delta}{\ell(\ell + 1)}$ (for $\ell = 1$ the above analysis gives $\mathbb{P}(\mathcal{E}_1^\sigma)\geq 1 - \frac{\delta}{2}$). Hence
thereby concluding the proof.
As in the linear case, Lemma A.2 and Lemma A.3 continue to hold in the logistic case.
We define the weighted cumulative regret for the logistic case as
where $\widehat{\mathbf{w}}$ is the model output by Algorithm 2. Notice that since $|2\sigma (x) - 1|\leq |x| / 2$ for all $x$ , we alternatively upper bound
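The inequality $|2\sigma (x) - 1|\leq |x| / 2$ invoked here is elementary; for completeness:
$$2\sigma(x) - 1 = \frac{2}{1 + e^{-x}} - 1 = \frac{1 - e^{-x}}{1 + e^{-x}} = \tanh\left(\frac{x}{2}\right),$$
and $|\tanh(u)| \leq |u|$ for all $u$, so that $|2\sigma(x) - 1| \leq |x|/2$.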
To simplify the math display we denote
then $\epsilon_{\ell} = \frac{R_{\ell}}{16e^{8R_{\ell}}K_{d}(\delta,\ell) + 4Re^{4R_{\ell}}}$. Note that the factor $K_{d}(\delta ,\ell)$ does not depend on $T$, but carries a $\sqrt{d}$ dependence.
To bound the number of queries, note that Lemma A.3 still holds; we use it to prove the following result.
Theorem B.2. For any pool realization $\mathcal{P}$ , the label complexity $N_{T}(\mathcal{P})$ of Algorithm 2 operating on a pool $\mathcal{P}$ of size $T$ is bounded deterministically as
where the $O$ -notation only omits absolute constants.
Proof. By Lemma A.3 and the fact that $K_{d}(\delta, \ell)$ is an increasing function of $\ell$, we get
where the second inequality uses $(a + b)^2 \leq 2a^2 + 2b^2$ .
For the terms inside the $O$-notation, summing over $\ell$ we can write
And similarly
Putting them together gives
as claimed.
The following bound on the weighted cumulative regret is the logistic counterpart to Theorem A.5.
Theorem B.3. For any pool realization $\mathcal{P}$ , the weighted cumulative regret $R_{T}(\mathcal{P})$ of Algorithm 2 operating on a pool $\mathcal{P}$ of size $T$ is bounded as
assuming $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ holds.
Proof. We follow the same reasoning as in Theorem A.5. We decompose the pool $\mathcal{P}$ as the union of the following disjoint sets
and study the weighted cumulative regret components
Assume $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ holds. First, notice that in $\mathcal{C}_{\ell}$
under the assumption that $\mathcal{E}_{\ell}$ holds; thus $\bigcup_{\ell = 1}^{L}\mathcal{C}_{\ell}$ contributes no weighted regret for $\widehat{\mathbf{w}}$, i.e.,
Next, on $\mathcal{P}_L$ , we have $|\langle \mathbf{w}_L,\mathbf{x}\rangle |\leq R_L$ . Combining this with the assumption that $\mathcal{E}_L^\sigma$ holds, we get $|\langle \mathbf{w}^*,\mathbf{x}\rangle |\leq 2R_L$ , which implies that the weighted cumulative regret on $\mathcal{P}_L$ is bounded as
the second inequality deriving from the stopping condition defining $L$ in Algorithm 2.
Finally, on the queried points $\bigcup_{\ell=1}^{L}\mathcal{Q}_{\ell}$, it is unclear whether $\mathrm{sgn}\langle \widehat{\mathbf{w}},\mathbf{x}\rangle = \mathrm{sgn}\langle \mathbf{w}^*,\mathbf{x}\rangle$ or not, so we bound the weighted cumulative regret contribution of each data item $\mathbf{x}$ therein by $|\langle \mathbf{w}^*,\mathbf{x}\rangle|$. Now, by construction, $\mathbf{x} \in \mathcal{Q}_{\ell} \subset \mathcal{P}_{\ell-1}$, so that $|\langle \mathbf{w}_{\ell-1},\mathbf{x}\rangle| \leq R_{\ell-1}$ which, combined with the assumption that $\mathcal{E}_{\ell-1}^{\sigma}$ holds, yields $|\langle \mathbf{w}^*,\mathbf{x}\rangle| \leq 2R_{\ell-1}$. Since $|\mathcal{Q}_{\ell}| = T_{\ell}$, we have
and Lemma A.3 allows us to write
Similar to the argument in Theorem A.4, we have
and
Piecing these bounds together, we conclude that the total regret is bounded as
thereby concluding the proof.
As in the linear case, adding the low noise condition (2) allows us to exploit the randomness in $\mathcal{P}$ to further upper bound the number of stages $L$ in the logistic case.
Specifically, assume the low noise condition (2) holds for $f^{*}(\mathbf{x}) = \sigma (\langle \mathbf{w}^{*},\mathbf{x}\rangle)$, for some unknown exponent $\alpha \geq 0$ and unknown constant $\epsilon_0\in (0,1]$. Similar to the linear case, we define the event
Then
for $M = \log_2T$
Lemma B.4. Let $\epsilon_{*}$ be defined through (8), with $T > \frac{2}{3} d$ . Assume both $\bar{\mathcal{E}}^{\sigma}$ and $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ hold. Then the number of stages $L$ of Algorithm 2 is upper bounded as
where the $O$ -notation only hides absolute constants.
Proof. If at stage $L - 1$ the algorithm has not stopped, then we must have
Notice that if $\mathbf{x} \in \mathcal{P}_{L-1}$ then $|\langle \mathbf{w}_{L-1}, \mathbf{x} \rangle| \leq R_{L-1}$. Combining this with the assumption that $\mathcal{E}_{L-1}$ holds, we have $|\langle \mathbf{w}^*, \mathbf{x} \rangle| \leq 2R_{L-1}$, which implies $|\mathcal{P}_{L-1}| \leq |\mathcal{T}_{\tanh(R_{L-1})}^\sigma| \leq |\mathcal{T}_{2R_{L-1}}^\sigma|$.
We split the analysis into two cases. On one hand, when $2R_{L-1} > \epsilon_0$ , this condition gives us directly
On the other hand, if $2R_{L - 1}\leq \epsilon_0$, then given that $\bar{\mathcal{E}}^{\sigma}$ holds, $|\mathcal{T}_{2R_{L - 1}}^{\sigma}|$ is upper bounded as
with $M = \log_2 T$ . Plugging into the first display results in
which resembles (8) with $2R_{L-1}$ here playing the role of $\epsilon_{*}$ therein. Then, from the definition of $\epsilon_{*}$ in (8) we immediately obtain $2R_{L-1} \geq \epsilon_{*}$, thus $L \leq \log_2\left(\frac{R}{\epsilon_{*}}\right) + 2$. Moreover, from (8) we see that $d / \epsilon_{*} \geq 3T\epsilon_{*}^{\alpha+1}$, which is equivalent to $\epsilon_{*} \leq \left(\frac{d}{3T}\right)^{\frac{1}{\alpha+2}}$. Replacing this upper bound on $\epsilon_{*}$ back into the right-hand side of (8), dividing by $d$, and multiplying by $R$ yields
which gives the claimed upper bound on $L$ through $L \leq \log_2\left(\frac{R}{\epsilon_*}\right) + 2$ .
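For completeness, the equivalence invoked in the proof above follows by rearranging (all quantities being positive):
$$\frac{d}{\epsilon_{*}} \geq 3T\epsilon_{*}^{\alpha+1} \iff d \geq 3T\epsilon_{*}^{\alpha+2} \iff \epsilon_{*} \leq \left(\frac{d}{3T}\right)^{\frac{1}{\alpha+2}}.$$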
Corollary B.5. Let $T > \frac{2}{3} d$. Then with probability at least $1 - 2\delta$ over the random draw of $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_T, y_T) \sim \mathcal{D}$, the label complexity $N_T(\mathcal{P})$ and the weighted cumulative regret $R_T(\mathcal{P})$ of Algorithm 2 simultaneously satisfy the following:
where the $O$ -notation hides absolute constants and
Proof. Assume both $\bar{\mathcal{E}}^{\sigma}$ and $\bigcap_{\ell = 1}^{L}\mathcal{E}_{\ell}$ hold. Recalling the definition of $K_{d}(\delta ,\ell)$, we have
Similar to Lemma B.4, we split the analysis into two cases depending on whether or not $2R_{L-1}$ is bigger than $\epsilon_0$ . If $2R_{L-1} \leq \epsilon_0$ , we have
therefore
Moreover, we have
and
where the last equality holds because $T > \frac{2}{3} d$.
Plugging these bounds back into the factor
of Theorem B.2 yields
where the last equality is due to the assumption that $T > \frac{2}{3} d$ . Combining the above estimates gives
A similar argument gives
If $2R_{L-1} > \epsilon_0$ , then $\frac{1}{R_L} \leq \frac{4}{\epsilon_0}$ . Applying these bounds into Theorem B.2 we get
Similarly
Lastly, (11) and Lemma B.1 together yield
which concludes the proof.
We now turn the bound on the weighted cumulative regret $R_{T}(\mathcal{P})$ in the previous corollary into a bound on the excess risk. As in the linear case, we have
where $\widehat{\mathbf{w}}$ is the hypothesis returned by Algorithm 2. Now, simply observe that
has the same form as the function $\phi (\widehat{\mathbf{w}},\mathbf{x})$ in Appendix C on which Theorem C.4 applies, with $\widehat{\epsilon} (\delta)$ therein replaced by the bound on $R_{T}(\mathcal{P})$ deriving from Corollary B.5. This allows us to conclude that with probability at least $1 - \delta$
as claimed in Theorem 5.1 in the main body of the paper.
C. Ancillary Technical Results
This section collects ancillary technical results that are used throughout the appendix.
We first recall the following version of Hoeffding's bound.
Lemma C.1. Let $a_1, \ldots, a_T$ be $T$ arbitrary real numbers, and let $\sigma_1, \ldots, \sigma_T$ be $T$ i.i.d. Rademacher variables, each taking values $\pm 1$ with equal probability. Then for any $\epsilon \geq 0$
where the probability is with respect to $\sigma_1,\dots ,\sigma_T$.
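In its most common form, Hoeffding's inequality for weighted Rademacher sums reads as follows (the normalization in the statement above may differ, e.g., an average rather than a sum):
$$\mathbb{P}\left(\left|\sum_{t=1}^{T}\sigma_t a_t\right| \geq \epsilon\right) \leq 2\exp\left(-\frac{\epsilon^2}{2\sum_{t=1}^{T}a_t^2}\right),$$
which follows since each summand $\sigma_t a_t$ is bounded in $[-|a_t|, |a_t|]$.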
Let us consider the linear case first. Define the function $\phi : \mathbb{R}^d \times \mathcal{P} \to [0,1]$ as
where $\rho (\cdot)$ has range in $[0, 1]$ and does not depend on $\widehat{\mathbf{w}}$. We have the following standard covering result, which is a direct consequence of the Sauer–Shelah lemma (e.g., (Sauer, 1972)).
Lemma C.2. Consider any given $S_{T} = \{\mathbf{x}_{1},\dots ,\mathbf{x}_{T}\} \subset \mathbb{R}^{d}$, and let
We have, when $T \geq d$ ,
The next result follows from a standard symmetrization argument.
Lemma C.3. Let $\mathcal{X} = \mathbb{R}^d$, let $S_T = (\mathbf{x}_1, \ldots, \mathbf{x}_T)$ be a sample drawn i.i.d. according to $\mathcal{D}_{\mathcal{X}}$, and let $S_T' = (\mathbf{x}_1', \ldots, \mathbf{x}_T')$ be another sample drawn according to $\mathcal{D}_{\mathcal{X}}$, with $T \geq d$. Then with probability at least $1 - \delta$
uniformly over $\widehat{\mathbf{w}}\in \mathbb{R}^d$.
Proof. Let $\sigma_1, \ldots, \sigma_T$ be independent Rademacher variables as in Lemma C.1. We can write, for any $\epsilon \geq 0$,
(from the union bound and Lemma C.2)
(from Lemma C.1)
the last inequality deriving from the fact that, since $\phi (\widehat{\mathbf{w}},\mathbf{x}_t)\in [0,1]$
Take $\epsilon$ such that $\delta = 2\left(\frac{eT}{d}\right)^{d}\exp (-\epsilon /4)$ , to obtain the claimed bound.
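Solving $\delta = 2\left(\frac{eT}{d}\right)^{d}\exp(-\epsilon/4)$ for $\epsilon$ gives explicitly
$$\epsilon = 4\log\left(\frac{2}{\delta}\left(\frac{eT}{d}\right)^{d}\right) = 4\left(d\log\frac{eT}{d} + \log\frac{2}{\delta}\right),$$
which exhibits the usual $d\log T$ covering price plus a $\log(1/\delta)$ confidence term.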
Theorem C.4. With the same notation and assumptions as in Lemma C.3, let $\widehat{\mathbf{w}}\in \mathbb{R}^d$ be a function of $S_{T}$ such that
holds with probability at least $1 - \delta$ , for some $\widehat{\epsilon}(\delta) \in [0, 1]$ . Then with probability at least $1 - 3\delta$ :
Proof. Use the multiplicative Chernoff bound
and then apply Lemma C.3 to further bound the right-hand side.
To control the noise terms, which are 1-subgaussian random variables, we provide the following lemma, a direct implication of the Chernoff bound.
Lemma C.5. Suppose $\xi$ is a $\sigma$ -subgaussian random variable, then for any $\delta > 0$ ,
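In its standard form, such a subgaussian tail bound reads: if $\xi$ is $\sigma$-subgaussian, then $\mathbb{P}(|\xi| \geq \epsilon) \leq 2e^{-\epsilon^2/(2\sigma^2)}$ for all $\epsilon \geq 0$; equivalently, setting the right-hand side to $\delta$,
$$\mathbb{P}\left(|\xi| \leq \sigma\sqrt{2\log\frac{2}{\delta}}\right) \geq 1 - \delta,$$
which is the shape of bound applied coordinate-wise in the next lemma (the exact constants in the statement above may differ by absolute factors).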
Lemma C.6. Let $A = [a_{ij}] \in \mathbb{R}^{m \times n}$ be a matrix. Suppose $\xi_1, \ldots, \xi_n$ are independent $\sigma$ -subgaussian random variables. Then for any $\delta > 0$ ,
where $\xi = (\xi_1,\dots ,\xi_n)^\top$
Proof. Consider
the $i$-th component of the vector $A\xi$. Note that $(A\xi)_i$ is a $\sigma \sqrt{\sum_{j=1}^{n}a_{ij}^2}$-subgaussian random variable; by Lemma C.5 we have
A union bound over $i$ gives, with probability at least $1 - \delta$ ,
uniformly over $i = 1,\ldots ,m$ . Therefore, with probability at least $1 - \delta$
as claimed.
□
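Unpacking the last two steps of the proof above: applying Lemma C.5 at confidence level $\delta/m$ to each coordinate and taking a union bound gives, with probability at least $1-\delta$, $|(A\xi)_i| \leq \sigma\sqrt{2\big(\sum_{j=1}^{n} a_{ij}^2\big)\log\frac{2m}{\delta}}$ simultaneously for all $i = 1,\ldots,m$; squaring and summing over $i$ then yields
$$\|A\xi\|^2 = \sum_{i=1}^{m}(A\xi)_i^2 \leq 2\sigma^2\log\frac{2m}{\delta}\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}^2 = 2\sigma^2\,\|A\|_F^2\log\frac{2m}{\delta},$$
i.e., $\|A\xi\| \leq \sigma\|A\|_F\sqrt{2\log\frac{2m}{\delta}}$, where $\|A\|_F$ denotes the Frobenius norm (absolute constants may differ from those in the lemma's display).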