
A Competitive Algorithm for Agnostic Active Learning

Eric Price

Department of Computer Science

University of Texas at Austin

ecprice@cs.utexas.edu

Yihan Zhou

Department of Computer Science

University of Texas at Austin

joeyzhou@cs.utexas.edu

Abstract

For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs.

We take a different approach to agnostic active learning, getting an algorithm that is competitive with the optimal algorithm for any binary hypothesis class $H$ and distribution $\mathcal{D}_X$ over $X$ . In particular, if any algorithm can use $m^*$ queries to get $O(\eta)$ error, then our algorithm uses $O(m^* \log |H|)$ queries to get $O(\eta)$ error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable $(\eta = 0)$ setting.

We also show that it is NP-hard to do better than our algorithm's $O(\log |H|)$ overhead in general.

1 Introduction

Active learning is motivated by settings where unlabeled data is cheap but labeling it is expensive. By carefully choosing which points to label, one can often achieve significant reductions in label complexity [Cohn et al., 1994]. A canonical example with exponential improvement is one-dimensional threshold functions $h_{\tau}(x) \coloneqq 1_{x \geq \tau}$ : in the noiseless setting, an active learner can use binary search to find an $\varepsilon$ -approximate solution in $O\left(\log \frac{1}{\varepsilon}\right)$ queries, while a passive learner needs $\Theta\left(\frac{1}{\varepsilon}\right)$ samples [Cohn et al., 1994, Dasgupta, 2005, Nowak, 2011].
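The binary-search example can be sketched in a few lines. This is a minimal noiseless illustration, not code from the paper; the function name and the target threshold are ours.

```python
def active_learn_threshold(label, eps):
    """Noiseless active learning of a 1d threshold by binary search.

    label(x) returns 1 iff x >= tau for some unknown tau in [0, 1].
    Returns an estimate within eps of tau, plus the number of queries.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if label(mid):   # mid is at or above the threshold: shrink from above
            hi = mid
        else:            # mid is strictly below the threshold: shrink from below
            lo = mid
    return (lo + hi) / 2, queries

# Binary search uses ~log2(1/eps) queries, versus Theta(1/eps)
# labeled samples for a passive learner.
tau_hat, num_queries = active_learn_threshold(lambda x: x >= 0.314159, 1e-3)
```
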

In this paper we are concerned with agnostic binary classification. We are given a hypothesis class $H$ of binary hypotheses $h: \mathcal{X} \to \{0,1\}$ such that some $h^* \in H$ has $\mathrm{err}(h^*) \leq \eta$ , where the error

$$\mathrm{err}(h) := \Pr_{(x, y) \sim \mathcal{D}}[h(x) \neq y]$$

is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \{0,1\}$ . In our active setting, we also know the marginal distribution $\mathcal{D}_X$ of $x$ , and can query any point $x$ of our choosing to receive a sample $y \sim (Y \mid X = x)$ for $(X,Y) \sim \mathcal{D}$ . The goal is to output some $\widehat{h}$ with $\mathrm{err}(\widehat{h}) \leq \eta + \varepsilon$ , using as few queries as possible.

The first interesting results for agnostic active learning were shown by Balcan et al. [2006], who gave an algorithm called Agnostic Active $(\mathrm{A}^2)$ that gets logarithmic dependence on $\varepsilon$ in some natural settings: it needs $\widetilde{O}\left(\log \frac{1}{\varepsilon}\right)$ samples for the 1d linear threshold setting (binary search), as long as $\varepsilon > 16\eta$ , and $\widetilde{O}\left(d^{2}\log \frac{1}{\varepsilon}\right)$ samples for $d$ -dimensional linear thresholds when $\mathcal{D}_X$ is the uniform sphere and $\varepsilon > \sqrt{d}\eta$ . This stands in contrast to the polynomial dependence on $\varepsilon$ necessary in the passive setting. The bound's requirement that $\varepsilon \gtrsim \eta$ is quite natural given a lower bound of $\Omega\left(d\frac{\eta^2}{\varepsilon^2}\right)$

due to [Kääriäinen, 2006, Beygelzimer et al., 2009], where $d$ is the VC dimension. Subsequent works have given new algorithms [Dasgupta et al., 2007, Beygelzimer et al., 2010] and new analyses [Hanneke, 2007a] to get bounds for more general problems, parameterized by the "disagreement coefficient" of the problem. But while these can give better bounds in specific cases, they do not give a good competitive ratio to the optimum algorithm: see (Hanneke [2014], Section 8.2.5) for a realizable example where $O\left(\log \frac{1}{\varepsilon}\right)$ queries are possible, but disagreement-coefficient based bounds lead to $\Omega \left(\frac{1}{\varepsilon}\right)$ queries.

By contrast, in the realizable, identifiable setting $(\eta = \varepsilon = 0)$ , a simple greedy algorithm is competitive with the optimal algorithm. In particular, Dasgupta [2004] shows that if any algorithm can identify the true hypothesis in $m$ queries, then the greedy algorithm that repeatedly queries the point that splits the most hypotheses will identify the true hypothesis in $O(m\log |H|)$ queries. This extra factor of $\log |H|$ is computationally necessary: as we will show in Theorem 1.2, avoiding it is NP-hard in general. This approach can be extended [Dasgupta, 2005] to the PAC setting (so $\varepsilon > 0$ , but still $\eta = 0$ ), showing that if any algorithm gets error $\varepsilon$ in $m^*$ queries, then this algorithm gets error $8\varepsilon$ in roughly $\widetilde{O}(m^* \cdot \log |H|)$ queries (but see the discussion after Theorem 8.2 of Hanneke [2014], which points out that one of the logarithmic factors is in an uncontrolled parameter $\tau$ , and states that "Resolving the issue of this extra factor of $\log \frac{1}{\tau}$ remains an important open problem in the theory of active learning").

The natural question is: can we find an agnostic active learning algorithm that is competitive with the optimal one in the agnostic setting?

Our Results. Our main result is just such a competitive bound. We say an active agnostic learning algorithm $\mathcal{A}$ solves an instance $(H,\mathcal{D}_X,\eta ,\varepsilon ,\delta)$ with $m$ measurements if, for every distribution $\mathcal{D}$ with marginal $\mathcal{D}_X$ and for which some $h^* \in H$ has $\mathrm{err}(h^{*})\leq \eta$ , with probability $1 - \delta$ , $\mathcal{A}$ uses at most $m$ queries and outputs $\widehat{h}\in H$ with $\mathrm{err}\left(\widehat{h}\right)\leq \eta +\varepsilon$ . Let $m^{*}(H,\mathcal{D}_{X},\eta ,\varepsilon ,\delta)$ be the optimal number of queries for this problem, i.e., the smallest $m$ for which some $\mathcal{A}$ can solve $(H,\mathcal{D}_X,\eta ,\varepsilon ,\delta)$ .

Define $N(H, \mathcal{D}_X, \alpha)$ to be the size of the smallest $\alpha$ -cover of $H$ , i.e., the smallest set $S \subseteq H$ such that for every $h \in H$ there exists $h' \in S$ with $\Pr_{x \sim \mathcal{D}_X}[h(x) \neq h'(x)] \leq \alpha$ . When the context is clear, we drop the parameters and simply write $N$ . Of course, $N$ is at most $|H|$ .

Theorem 1.1 (Competitive Bound). There exist some constants $c_{1}, c_{2}$ and $c_{3}$ such that for any instance $(H, \mathcal{D}_X, \eta, \varepsilon, \delta)$ with $\varepsilon \geq c_1\eta$ , Algorithm 1 solves the instance with sample complexity

$$m(H, \mathcal{D}_{X}, \eta, \varepsilon, \delta) \lesssim \left(m^{*}\left(H, \mathcal{D}_{X}, c_{2}\eta, c_{3}\varepsilon, \frac{99}{100}\right) + \log \frac{1}{\delta}\right) \cdot \log \frac{N(H, \mathcal{D}_{X}, \eta)}{\delta}$$

and polynomial time.

Even the case of $\eta = 0$ is interesting, given the discussion in [Hanneke, 2014] of the gap in [Dasgupta, 2005]'s bound, but the main contribution is the ability to handle the agnostic setting of $\eta >0$ . The requirement that $\varepsilon \geq O(\eta)$ is in line with prior work [Balcan et al., 2006, Dasgupta, 2005]. Up to constants in $\eta$ and $\varepsilon$ , Theorem 1.1 shows that our algorithm is within a $\log N\leq \log |H|$ factor of the optimal query complexity.

We show that it is NP-hard to avoid this $\log N$ factor, even in the realizable $(\eta = \varepsilon = \delta = 0)$ case:

Theorem 1.2 (Lower Bound). It is NP-hard to find a query strategy for every agnostic active learning instance within a $c \log |H|$ factor of the optimal sample complexity, for some constant $c > 0$ .

This is a relatively simple reduction from the hardness of approximating SETCOVER [Dinur and Steurer, 2014]. The lower bound instance has $\eta = \varepsilon = \delta = 0$ , although these can be relaxed to being small polynomials (e.g., $\varepsilon = \eta = \frac{1}{3|X|}$ and $\delta = \frac{1}{3|H|}$ ).

Extension. We give an improved bound for our algorithm in the case of noisy binary search (i.e., $H$ consists of 1d threshold functions). When $\eta = \Theta(\varepsilon)$ , we have $N(H, \mathcal{D}_X, \varepsilon) = \Theta\left(\frac{1}{\varepsilon}\right)$ and $m^{*}(\eta, \varepsilon, .99) = O\left(\log \frac{1}{\varepsilon}\right)$ . Thus Theorem 1.1 immediately gives a bound of $O(\log^2 \frac{1}{\varepsilon \delta})$ , which is nontrivial but not ideal. (For $\eta \ll \varepsilon$ , the same bound holds since the problem is strictly easier when $\eta$ is smaller.) However, the bound in Theorem 1.1 is quite loose in this setting, and we can instead give a bound of

$$O\left(\log \frac{1}{\varepsilon \delta} \log \frac{\log \frac{1}{\varepsilon}}{\delta}\right)$$

for the same algorithm, Algorithm 1. This matches the bound given by disagreement coefficient based algorithms for constant $\delta$ . The proof of this improved dependence comes from bounding a new parameter measuring the complexity of an $H, \mathcal{D}_x$ pair; this parameter is always at least $\Omega \left( \frac{1}{m^*} \right)$ but may be much larger (and is constant for 1d threshold functions). See Theorem 2.3 for details.

1.1 Related Work

Active learning is a widely studied topic, taking many forms beyond the directly related work on agnostic active learning discussed above [Settles, 2009]. Our algorithm can be viewed as similar to "uncertainty sampling" [Lewis, 1995, Lewis and Catlett, 1994], a popular empirical approach to active learning, though we need some modifications to tolerate adversarial noise.

One problem related to the one studied in this paper is noisy binary search, which corresponds to active learning of 1d thresholds. This has been extensively studied in the setting of i.i.d. noise [Burnashev and Zigangirov, 1974, Ben-Or and Hassidim, 2008, Dereniowski et al., 2021] as well as monotonic queries [Karp and Kleinberg, 2007]. Some work in this vein has extended beyond binary search to (essentially) active binary classification [Nowak, 2008, 2011]. These algorithms are all fairly similar to ours, in that they do multiplicative weights/Bayesian updates, but they query the single maximally informative point. This is fine in the i.i.d. noise setting, but in an agnostic setting the adversary can corrupt that query. For this reason, our algorithm needs to find a set of high-information points to query.

Another related problem is decision tree learning. The realizable, noiseless case $\eta = \varepsilon = 0$ of our problem can be reduced to learning a binary decision tree of minimal depth. Hegedűs [1995] studied this problem and gave essentially the same upper and lower bounds as in Dasgupta [2005]. Kosaraju et al. [2002] studied a split tree problem, a generalization of binary decision tree learning, and also gave similar bounds. Azad et al. [2022] is a monograph on decision tree learning that studies many variations, including learning with noise. However, this line of work usually allows different forms of queries, so its results are not directly comparable to results in the active learning literature.

For much more work on the agnostic active binary classification problem, see Hanneke [2014] and references therein. Many of these papers give bounds in terms of the disagreement coefficient, but sometimes in terms of other parameters. For example, Katz-Samuels et al. [2021] has a query bound that is always competitive with the disagreement coefficient-based methods, and sometimes much better; still, it is not competitive with the optimum in all cases.

In terms of the lower bound, it is shown in Hyafil and Rivest [1976] that the problem is NP-complete in the realizable and noiseless setting. To the best of our knowledge, our Theorem 1.2 showing hardness of approximation to within a $O(\log |H|)$ factor is new.

Minimax sample complexity bounds. Hanneke and Yang [2015] and Hanneke [2007b] have also given "minimax" sample complexity bounds for their algorithms, achieving a sample complexity within $O(\log |H|)$ of optimal. However, these results are optimal with respect to the sample complexity for the worst-case distribution over $y$ and $x$ , while the unlabeled data $x$ is given as input. So one should hope for a bound relative to the optimum for the actual $x$ , worst-case only over $y$ ; this is our bound.

We give the following example to illustrate that our bound, and indeed our algorithm, can be much better.

Example 1.3. Define a hypothesis class of $N$ hypotheses $h_1, \dots, h_N$ , and $\log N + N$ data points $x_1, \dots, x_{\log N + N}$ . For each hypothesis $h_j$ , the labels of the first $N$ points express $j$ in unary and the labels of the last $\log N$ points express $j$ in binary. We set $\eta = \varepsilon = 0$ and consider the realizable case.

In the above example, the binary region is far more informative than the unary region, but disagreement coefficient-based algorithms just note that every point has disagreement. Our algorithm will query the binary encoding region and take $O(\log N)$ queries. Disagreement coefficient based algorithms, including those in Hanneke and Yang [2015] and Hanneke [2007b], will rely on essentially uniform sampling for the first $\Omega(N / \log N)$ queries. These algorithms are "minimax" over $x$ , in the sense that if you didn't see any $x$ from the binary region, you would need almost as many samples as they use. But you do see $x$ from the binary region, so the algorithm should make use of it to get exponential improvement.
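Example 1.3 is easy to instantiate concretely. The sketch below uses the hypothetical choice $N = 16$ ; the function names are ours. Querying only the $\log N$ binary-encoding points pins down $j$ exactly, which is the exponential saving described above.

```python
# Example 1.3 with N = 16 (an illustrative choice): hypothesis h_j labels
# the first N points with j in unary and the last log2(N) points with j
# in binary.
N, LOGN = 16, 4

def h(j):
    unary = [1 if i < j else 0 for i in range(N)]        # j in unary
    binary = [(j >> b) & 1 for b in range(LOGN)]         # j in binary
    return unary + binary

def identify(oracle):
    """Recover j by querying only the log N binary-encoding points."""
    j = 0
    for b in range(LOGN):
        j |= oracle(N + b) << b   # query point x_{N+b}
    return j

assert identify(lambda i: h(11)[i]) == 11   # 4 queries suffice
```
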

Future Work. Our upper bound assumes full knowledge of $\mathcal{D}_X$ and the ability to query arbitrary points $x$ . Often in active learning, the algorithm receives a large but not infinite set of unlabeled sample points $x$ , and can only query the labels of those points. How well our results adapt to this setting we leave as an open question.

Similarly, our bound is polynomial in the number of hypotheses and the domain size. This is hard to avoid in full generality—if you don't evaluate most hypotheses on most data points, you might be missing the most informative points—but perhaps it can be avoided in structured examples.

2 Algorithm Overview

Our algorithm is based on a Bayesian/multiplicative weights type approach to the problem, and is along the lines of the splitting-based approach of Dasgupta [2004].

We maintain a set of weights $w(h)$ for each $h \in H$ , starting at 1; these induce a distribution $\lambda(h) := \frac{w(h)}{\sum_{h} w(h)}$ which we can think of as our posterior over the "true" $h^*$ .

Realizable setting. As initial intuition, consider the realizable case of $\eta = \varepsilon = 0$ , where we want to find the true $h^*$ . If $h^*$ really were drawn from our prior $\lambda$ and we queried a point $x$ , we would see a 1 with probability $\mathbb{E}_{h \sim \lambda} h(x)$ . Then the most informative point to query is the one we are least confident in, i.e., the point $x^*$ maximizing

$$r(x) := \min\left\{\mathbb{E}_{h \sim \lambda}[h(x)],\, 1 - \mathbb{E}_{h \sim \lambda}[h(x)]\right\}.$$

Suppose an algorithm queries $x_{1},\ldots ,x_{m}$ and receives the majority label under $h\sim \lambda$ each time. Then the fraction of $h\sim \lambda$ that agree with all the queries is at least $1 - \sum_{i = 1}^{m}r(x_{i})\geq 1 - mr(x^{*})$ . This suggests that, if $r(x^{*})\ll \frac{1}{m}$ , it will be hard to uniquely identify $h^*$ . It is not hard to formalize this, showing that: if no single hypothesis has $75\%$ probability under $\lambda$ , and any algorithm exists with sample complexity $m$ and $90\%$ success probability at finding $h^*$ , we must have $r(x^{*})\geq \frac{1}{10m}$ .

This immediately gives an algorithm for the $\eta = \varepsilon = 0$ setting: query the point $x$ maximizing $r(x)$ , set $w(h) = 0$ for all hypotheses $h$ that disagree, and repeat. As long as at least two hypotheses remain, the maximum probability will be at most $50\% < 90\%$ and each iteration will remove an $\Omega \left(\frac{1}{m}\right)$ fraction of the remaining hypotheses; thus after $O(m \log |H|)$ rounds, only $h^*$ will remain. This is the basis for Dasgupta [2004].
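The greedy realizable procedure can be sketched as follows, assuming a finite domain, a uniform posterior over surviving hypotheses, and a noiseless oracle. The toy threshold class in the usage example is ours, not from the paper.

```python
def greedy_identify(H, oracle):
    """Greedy splitting for the realizable eta = eps = 0 setting:
    repeatedly query the point maximizing r(x) and discard every
    hypothesis that disagrees with the observed label.

    H maps hypothesis names to label vectors; oracle(x) returns h*(x).
    """
    live = dict(H)
    n = len(next(iter(H.values())))
    while len(live) > 1:
        def r(x):
            # r(x) under the uniform posterior on the surviving hypotheses
            ones = sum(hyp[x] for hyp in live.values()) / len(live)
            return min(ones, 1 - ones)
        x = max(range(n), key=r)          # most informative point
        y = oracle(x)                     # noiseless label
        live = {name: hyp for name, hyp in live.items() if hyp[x] == y}
    return next(iter(live))

# Toy class: 9 threshold hypotheses over 8 points, truth h3.
H = {f"h{t}": [1 if i >= t else 0 for i in range(8)] for t in range(9)}
answer = greedy_identify(H, lambda x: H["h3"][x])
```
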

Handling noise: initial attempt. There are two obvious problems with the above algorithm in the agnostic setting, where a (possibly adversarial) $\eta$ fraction of locations $x$ will not match $h^*$ . First, a single error will cause the algorithm to forever reject the true hypothesis; and second, the algorithm makes deterministic queries, which means adversarial noise could be placed precisely on the locations queried to make the algorithm learn nothing.

To fix the first problem, we can adjust the algorithm to perform multiplicative weights: if in round $i$ we query a point $x_{i}$ and see $y_{i}$ , we set

$$w_{i+1}(h) = \begin{cases} w_{i}(h) & \text{if } h(x_{i}) = y_{i} \\ e^{-\alpha} w_{i}(h) & \text{if } h(x_{i}) \neq y_{i} \end{cases}$$

for a small constant $\alpha = \frac{1}{5}$ . To fix the second problem, we don't query the single $x^{*}$ of maximum $r(x^{*})$ , but instead choose $x$ according to a distribution $q$ over many points $x$ with large $r(x)$ .
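The multiplicative-weights step can be sketched directly from the display above. The dictionary representation and names are ours; $\alpha = 1/5$ as in the text.

```python
import math

ALPHA = 0.2  # the paper's alpha = 1/5

def mw_update(w, H, x, y):
    """One multiplicative-weights step: hypotheses that disagree with
    the observed label are down-weighted by e^{-alpha} rather than
    eliminated, so a few noisy labels cannot permanently kill the
    true hypothesis."""
    return {name: (wh if H[name][x] == y else wh * math.exp(-ALPHA))
            for name, wh in w.items()}

def posterior(w):
    """Normalize weights into the induced distribution lambda."""
    total = sum(w.values())
    return {name: wh / total for name, wh in w.items()}

# Toy example: query x = 0 and observe y = 1.
w = mw_update({"a": 1.0, "b": 1.0}, {"a": [1, 0], "b": [0, 0]}, x=0, y=1)
lam = posterior(w)
```
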

To understand this algorithm, consider how $\log \lambda_i(h^*)$ evolves in expectation in each step. This increases if the query is correct, and decreases if it has an error. A correct query increases $\lambda_i$ in proportion to the fraction of $\lambda$ placed on hypotheses that get the query wrong, which is at least $r(x)$ ; and the probability of an error is at most $\eta \max_x \frac{q(x)}{\mathcal{D}_x(x)}$ . If at iteration $i$ the algorithm uses query distribution $q$ , some calculation gives that

$$\mathbb{E}_{q}\left[\log \lambda_{i+1}(h^{*}) - \log \lambda_{i}(h^{*})\right] \geq 0.9\alpha \left(\mathbb{E}_{x \sim q}[r(x)] - 2.3\eta \max_{x} \frac{q(x)}{\mathcal{D}_{x}(x)}\right). \tag{1}$$

Hypothesis    $\lambda(h)$       Values $h(x)$
$h_1$         $0.9$              1111 1111
$h_2$         $0.1 - 10^{-6}$    1111 0000
$h_3$         $10^{-6}$          0000 1110
$y$                              0000 1111

Figure 1: An example demonstrating that the weight of the true hypothesis can decrease if $\lambda$ is concentrated on the wrong ball. In this example, the true labels $y$ are closest to $h_3$ . But if the prior $\lambda$ on hypotheses puts far more weight on $h_1$ and $h_2$ , the algorithm will query uniformly over where $h_1$ and $h_2$ disagree: the second half of points. Over this query distribution, $h_1$ is more correct than $h_3$ , so the weight of $h_3$ can actually decrease if $\lambda(h_1)$ is very large.

The algorithm can choose $q$ to maximize this bound on the potential gain. There's a tradeoff between concentrating the samples over the $x$ of largest $r(x)$ , and spreading out the samples so the adversary can't raise the error probability too high. We show that if learning is possible by any algorithm (for a constant factor larger $\eta$ ), then there exists a $q$ for which this potential gain is significant.

Lemma 2.1 (Connection to OPT). Define $| h - h' | = \Pr_{x \sim \mathcal{D}_x}[h(x) \neq h'(x)]$ . Let $\lambda$ be a distribution over $H$ such that no radius- $(2\eta + \varepsilon)$ ball $B$ centered on $h \in H$ has probability at least $80\%$ . Let $m^* = m^* \left( H, \mathcal{D}_X, \eta, \varepsilon, \frac{99}{100} \right)$ . Then there exists a query distribution $q$ over $\mathcal{X}$ with

$$\mathbb{E}_{x \sim q}[r(x)] - \frac{1}{10}\eta \max_{x} \frac{q(x)}{\mathcal{D}_{X}(x)} \geq \frac{9}{100 m^{*}}.$$

At a very high level, the proof is: imagine $h^* \sim \lambda$ . If the algorithm only sees the majority label $y$ on every query it performs, then its output $\widehat{h}$ is independent of $h^*$ and cannot be valid for more than 80% of inputs by the ball assumption; hence a 99% successful algorithm must have a 19% chance of seeing a minority label. But for $m^*$ queries $x$ drawn with marginal distribution $q$ , without noise the expected number of minority labels seen is $m^* \mathbb{E}[r(x)]$ , so $\mathbb{E}[r(x)] \gtrsim 1 / m^*$ . With noise, the adversary can corrupt the minority labels in $h^*$ back toward the majority, leading to the given bound.

The query distribution optimizing (1) has a simple structure: take a threshold $\tau$ for $r(x)$ , sample from $\mathcal{D}_x$ conditioned on $r(x) > \tau$ , and possibly sample $x$ with $r(x) = \tau$ at a lower rate. This means the algorithm can efficiently find the optimal $q$ .
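The threshold structure just described suggests a simple search over candidate thresholds. The sketch below assumes a finite domain represented by dictionaries and omits the fractional handling of points with $r(x) = \tau$ ; the function name and the penalty parameter `c_eta` (playing the role of the constant-times- $\eta$ term) are ours.

```python
def best_query_distribution(D, r, c_eta):
    """Search over threshold-structured query distributions: q is D
    conditioned on r(x) >= tau, for each candidate threshold tau.

    D and r are dicts over a finite domain. Returns the support of the
    best q and the objective E_q[r(x)] - c_eta * max_x q(x)/D(x).
    """
    best = (None, float("-inf"))
    for tau in sorted(set(r.values())):
        support = [x for x in D if r[x] >= tau]
        mass = sum(D[x] for x in support)
        if mass == 0:
            continue
        exp_r = sum(D[x] * r[x] for x in support) / mass
        obj = exp_r - c_eta / mass   # max_x q(x)/D(x) = 1/mass for conditioning
        if obj > best[1]:
            best = (support, obj)
    return best

# Toy instance: uniform D over four points, two of them informative.
D = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
r = {"a": 0.4, "b": 0.3, "c": 0.0, "d": 0.0}
support, obj = best_query_distribution(D, r, c_eta=0.05)
```

Here concentrating only on "a" pays too large a penalty, while spreading over all four points wastes queries on uninformative ones; the search settles on the middle ground.
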

Except for the caveat about $\lambda$ not already concentrating in a small ball, applying Lemma 2.1 combined with (1) shows that $\log \lambda(h^{*})$ grows by $\Omega\left(\frac{1}{m^{*}}\right)$ in expectation for each query. It starts out at $\log \lambda(h^{*}) = -\log |H|$ , so after $O(m^{*}\log |H|)$ queries we would have $\lambda(h^{*})$ being a large constant in expectation (and with high probability, by Freedman's inequality for concentration of martingales). Of course $\lambda(h^{*})$ can't grow past 1, which features in this argument in that once $\lambda(h^{*}) > 80\%$ , a small ball will have large probability and Lemma 2.1 no longer applies, but at that point we can just output any hypothesis in the heavy ball.

Handling noise: the challenge. There is one omission in the above argument that is surprisingly challenging to fix, and ends up requiring significant changes to the algorithm: if at an intermediate step $\lambda_{i}$ concentrates in the wrong small ball, the algorithm will not necessarily make progress. It is entirely possible that $\lambda_{i}$ concentrates in a small ball, even in the first iteration—perhaps $99\%$ of the hypotheses in $H$ are close to each other. And if that happens, then we will have $r(x) \leq 0.01$ for most $x$ , which could make the RHS of (1) negative for all $q$ .

In fact, it seems like a reasonable Bayesian-inspired algorithm really must allow $\lambda(h^{*})$ to decrease in some situations. Consider the setting of Figure 1. We have three hypotheses, $h_1, h_2$ , and $h_3$ , and a prior $\lambda = (0.9, 0.099999, 10^{-6})$ . Because $\lambda(h_3)$ is so tiny, the algorithm presumably should ignore $h_3$ and query essentially uniformly from the locations where $h_1$ and $h_2$ disagree. In this example, $h_3$ agrees with $h_1$ on all but an $\eta$ mass in those locations, so even if $h^{*} = h_{3}$ , the query distribution can match $h_1$ perfectly and not $h_3$ . Then $w(h_1)$ stays constant while $w(h_3)$ shrinks. $w(h_2)$ shrinks much faster, of course, but since the denominator is dominated by $w(h_1)$ , $\lambda(h_3)$ will still shrink. However, despite $\lambda(h_3)$ shrinking, the algorithm is still making progress in this example: $\lambda(h_2)$ is shrinking fast, and once it becomes small relative to $\lambda(h_3)$ the algorithm will start querying points to distinguish $h_3$ from $h_1$ , at which point $\lambda(h_3)$ will begin an inexorable rise.

Our solution is to "cap" the large density balls in $\lambda$ , dividing their probability by two, when applying Lemma 2.1. Our algorithm maintains a set $S \subseteq H$ of the "high-density region," such that the capped distribution

$$\overline{\lambda}(h) := \begin{cases} \frac{1}{2}\lambda(h) & h \in S \\ \lambda(h) \cdot \frac{1 - \frac{1}{2}\Pr[h \in S]}{1 - \Pr[h \in S]} & h \notin S \end{cases}$$

has no large ball. Then Lemma 2.1 applies to $\overline{\lambda}$ , giving the existence of a query distribution $q$ so that the corresponding $\overline{r}(x)$ is large. We then define the potential function

$$\phi_{i}(h^{*}) := \log \lambda_{i}(h^{*}) + \log \frac{\lambda_{i}(h^{*})}{\sum_{h \notin S_{i}} \lambda_{i}(h)} \tag{2}$$

for $h^* \notin S_i$ , and $\phi_i = 0$ for $h^* \in S_i$ . We show that $\phi_i$ grows by $\Omega\left(\frac{1}{m^*}\right)$ in expectation in each iteration. Thus, as in the example of Figure 1, either $\lambda(h^*)$ grows as a fraction of the whole distribution, or as a fraction of the "low-density" region.
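The capping operation and potential (2) are mechanical to compute; a sketch with an illustrative dictionary representation (names are ours):

```python
import math

def cap(lam, S):
    """The capped distribution: halve the mass of the high-density
    region S and renormalize the remaining mass upward."""
    p = sum(lam[h] for h in S)
    scale = (1 - p / 2) / (1 - p) if p < 1 else 0.0
    return {h: (lam[h] / 2 if h in S else lam[h] * scale) for h in lam}

def potential(lam, S, h_star):
    """Potential (2): zero inside S; otherwise log lam(h*) plus the log
    of h*'s share of the low-density region."""
    if h_star in S:
        return 0.0
    low = sum(lam[h] for h in lam if h not in S)
    return math.log(lam[h_star]) + math.log(lam[h_star] / low)

# Toy check: capping "a" halves it and rescales the rest to sum to 1.
lam = {"a": 0.5, "b": 0.3, "c": 0.2}
capped = cap(lam, {"a"})
phi_b = potential(lam, {"a"}, "b")
```
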

If at any iteration we find that $\overline{\lambda}$ has some heavy ball $B(\mu, 2\eta + \varepsilon)$ , so that Lemma 2.1 would not apply, we add $B(\mu', 6\eta + 3\varepsilon)$ to $S$ , where $B(\mu', 2\eta + \varepsilon)$ is the heaviest ball before capping. We show that this ensures that no small heavy ball exists in the capped distribution $\overline{\lambda}$ . Expanding $S$ only increases the potential function, and then the lack of a heavy ball implies the potential will continue to grow.

Thus the potential (2) starts at $-2\log |H|$ , and grows by $\Omega \left(\frac{1}{m^{*}}\right)$ in each iteration. After $O(m^{*}\log |H|)$ iterations, we will have $\phi_{i}\geq 0$ in expectation (and with high probability by Freedman's inequality). This is only possible if $h^{\ast}\in S$ , which means that one of the centers $\mu$ of the balls added to $S$ is a valid answer.

In fact, with some careful analysis we can show that with probability $1 - \delta$ , one of the first $O\left(\log \frac{|H|}{\delta}\right)$ balls added to $S$ is a valid answer. The algorithm can then check all the centers of these balls, using the following active agnostic learning algorithm:

Theorem 2.2. Active agnostic learning can be solved for $\varepsilon = 3\eta$ with $O\left(|H| \log \frac{|H|}{\delta}\right)$ samples.

Proof. The algorithm is the following. Take any pair $h, h'$ with $| h - h' | \geq 3\eta$ . Sample $O\left(\log \frac{|H|}{\delta}\right)$ observations randomly from $(x \sim \mathcal{D}_x \mid h(x) \neq h'(x))$ . One of $h, h'$ is wrong on at least half the queries; remove it from $H$ and repeat. At the end, return any remaining $h$ .

To analyze this, let $h^* \in H$ be the hypothesis with error $\eta$ . If $h^*$ is chosen in a round, the other $h'$ must have error at least $2\eta$ . Therefore the chance we remove $h^*$ is at most $\delta / |H|$ . In each round we remove a hypothesis, so there are at most $|H|$ rounds and at most $\delta$ probability of ever crossing off $h^*$ . If we never cross off $h^*$ , at the end we output some $h$ with $| h - h^* | \leq 3\eta$ , which gives $\varepsilon = 3\eta$ .
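A sketch of the Theorem 2.2 tournament, under the assumptions of a finite domain with uniform $\mathcal{D}_x$ (so disagreement points are sampled uniformly) and a per-pair query budget $k = O(\log \frac{|H|}{\delta})$ ; the names and toy instance are ours.

```python
import random

def tournament(H, label_oracle, k, rng=random):
    """Elimination tournament: repeatedly compare two remaining
    hypotheses on k points where they disagree, and cross off the one
    that is wrong on at least half of the sampled labels."""
    live = dict(H)
    while len(live) > 1:
        names = list(live)
        na, nb = names[0], names[1]
        ha, hb = live[na], live[nb]
        diff = [x for x in range(len(ha)) if ha[x] != hb[x]]
        if not diff:
            del live[nb]            # indistinguishable under D_x: keep either
            continue
        wrong_a = sum(label_oracle(x) != ha[x]
                      for x in (rng.choice(diff) for _ in range(k)))
        # On diff, exactly one of the pair matches each label.
        del live[na if wrong_a >= k / 2 else nb]
    return next(iter(live))

# Toy noiseless instance: the oracle's labels equal h3 exactly.
H = {"h1": [0] * 8, "h2": [1] * 8, "h3": [0, 0, 0, 0, 1, 1, 1, 1]}
winner = tournament(H, lambda x: H["h3"][x], k=5)
```

In the noiseless case the true hypothesis is never wrong on any query, so it can never be crossed off; the agnostic analysis in the proof replaces "never wrong" with "wrong on fewer than half" via a Chernoff bound.
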

The linear dependence on $|H|$ makes the Theorem 2.2 algorithm quite bad in most circumstances, but the dependence only on $|H|$ makes it perfect for our second stage (where we have reduced to $O(\log |H|)$ candidate hypotheses).

Overall, this argument gives an $O\left(m^{*}\log \frac{|H|}{\delta} +\log \frac{|H|}{\delta}\log \frac{\log|H|}{\delta}\right)$ sample algorithm for agnostic active learning. One can simplify this bound by observing that the set of centers $C$ added by our algorithm forms a packing, and must therefore all be distinguishable by the optimal algorithm, so $m^{*}\geq \log |C|$ . This gives a bound of

$$O\left(\left(m^{*} + \log \frac{1}{\delta}\right)\log \frac{|H|}{\delta}\right).$$

By starting with an $\eta$ -net of size $N$ , we can reduce $|H|$ to $N$ with a constant factor increase in $\eta$ .

With some properly chosen constants $c_4$ and $c_5$ , the entire algorithm is formally described in Algorithm 1.

Remark 1: As stated, the algorithm requires knowing $m^*$ to set the target sample complexity / number of rounds $k$ . This restriction could be removed with the following idea. $m^*$ only enters the analysis through the fact that $O\left(\frac{1}{m^*}\right)$ is a lower bound on the expected increase of the potential

function in each iteration. However, the algorithm knows a bound on its expected increase in each round $i$ ; it is the value

$$\tau_{i} = \max_{q}\, \mathbb{E}_{x \sim q}[\bar{r}_{i, S_{i}}(x)] - \frac{c_{4}}{20}\eta \max_{x} \frac{q(x)}{\mathcal{D}_{X}(x)}$$

optimized in the algorithm. Therefore, we could use an adaptive termination criterion that stops at iteration $k$ if $\sum_{i=1}^{k} \tau_i \geq O(\log \frac{|H|}{\delta})$ . This will guarantee that when terminating, the potential will be above 0 with high probability so our analysis holds.

Remark 2: The algorithm's running time is polynomial in $|H|$ . This is in general unavoidable, since the input is a truth table for $H$ . The bottleneck of the computation is the step where the algorithm checks whether the heaviest ball has mass greater than $80\%$ . This step could be accelerated by randomly sampling hypotheses and points to estimate and find heavy balls; this would improve the dependence to nearly linear in $|H|$ . If the hypothesis class has some structure, like the binary search example, the algorithm can be implemented more efficiently.

Algorithm 1 Competitive Algorithm for Active Agnostic Learning

  Compute a $2\eta$ maximal packing $H'$ of $H$
  Let $w_{0}(h) = 1$ for every $h \in H'$
  $S_0 \gets \emptyset$ ; $C \gets \emptyset$
  for $i = 1, \ldots, k = O\left(m^{*}\log \frac{|H'|}{\delta}\right)$ do
      Compute $\lambda_i(h) = \frac{w_{i-1}(h)}{\sum_{h \in H'} w_{i-1}(h)}$ for every $h \in H'$
      if there exists a $(c_{4}\eta + c_{5}\varepsilon)$ -ball with probability $> 80\%$ under $\overline{\lambda}_{i, S_{i-1}}$ then
          $S_{i} \gets S_{i-1} \cup B(\mu', 3c_{4}\eta + 3c_{5}\varepsilon)$ , where $B(\mu', c_{4}\eta + c_{5}\varepsilon)$ is the heaviest radius- $(c_{4}\eta + c_{5}\varepsilon)$ ball under $\lambda_{i}$
          $C \gets C \cup \{\mu'\}$ ; $w_{i} \gets w_{i-1}$
      else
          $S_i \gets S_{i-1}$ ; choose the query distribution $q$ maximizing $\mathbb{E}_{x \sim q}[\bar{r}_{i, S_i}(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}$
          Sample $x_i \sim q$ and query it to receive $y_i$
          Set $w_{i}(h) = w_{i-1}(h)$ if $h(x_i) = y_i$ and $w_{i}(h) = e^{-\alpha}w_{i-1}(h)$ otherwise
  Run the Theorem 2.2 algorithm on the centers in $C$ and output its answer $\widehat{h}$

Generalization for Better Bounds. To get a better dependence for 1d threshold functions, we separate out the Lemma 2.1 bound on (1) from the analysis of the algorithm given a bound on (1). Then for particular instances like 1d threshold functions, we get a better bound on the algorithm by giving a larger bound on (1).

Theorem 2.3. Suppose that $\mathcal{D}_x$ and $H$ are such that, for any distribution $\lambda$ over $H$ in which no radius- $(c_4\eta + c_5\varepsilon)$ ball has probability more than $80\%$ , there exists a distribution $q$ over $X$ such that

$$\mathbb{E}_{x \sim q}[r(x)] - \frac{c_{4}}{20}\eta \max_{x} \frac{q(x)}{\mathcal{D}_{x}(x)} \geq \beta$$

for some $\beta > 0$ . Then for $\varepsilon \geq c_1 \eta$ , $c_4 \geq 300$ , $c_5 = \frac{1}{10}$ and $c_1 \geq 90 c_4$ , let $N = N(H, \mathcal{D}_x, \eta)$ be the size of an $\eta$ -cover of $H$ . Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\frac{1}{\beta} \log \frac{N}{\delta} + \log \frac{N}{\delta} \log \frac{\log N}{\delta}\right)$ samples.

Corollary 2.4. There exists a constant $c_{1} > 1$ such that, for 1d threshold functions and $\varepsilon > c_{1}\eta$ , Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\log \frac{1}{\varepsilon\delta}\log \frac{\log\frac{1}{\varepsilon}}{\delta}\right)$ samples.

Proof. Because the problem is only harder if $\eta$ is larger, we can raise $\eta$ to be $\eta = \varepsilon / C$, where $C > 1$ is a sufficiently large constant that Theorem 2.3 applies. Then 1d threshold functions have an $\eta$-cover of size $N = O(1/\varepsilon)$. To get the result by Theorem 2.3, it suffices to show $\beta = \Theta(1)$.

Each hypothesis is of the form $h(x) = 1_{x \geq \tau}$ , and corresponds to a threshold $\tau$ . So we can consider $\lambda$ to be a distribution over $\tau$ .

Let $\lambda$ be any distribution for which no radius-$R$ ball with probability greater than $80\%$ exists, for $R = c_4\eta + c_5\varepsilon$. For any percent $p$ between 0 and 100, let $\tau_{p}$ denote the $p$th percentile of $\tau$ under $\lambda$ (i.e., the smallest $t$ such that $\operatorname*{Pr}[\tau \leq t] \geq p/100$). By the ball assumption, $\tau_{10}$ and $\tau_{90}$ do not lie in the same radius-$R$ ball. Hence $|h_{\tau_{10}} - h_{\tau_{90}}| > R$, or

$$\Pr_x[\tau_{10} \leq x < \tau_{90}] > R.$$

We let $q$ denote $(\mathcal{D}_x \mid \tau_{10} \leq x < \tau_{90})$. Then for all $x \in \operatorname{supp}(q)$ we have $r(x) \geq 0.1$ and

$$\frac{q(x)}{\mathcal{D}_x(x)} = \frac{1}{\Pr_{x\sim\mathcal{D}_x}[x \in \operatorname{supp}(q)]} < \frac{1}{R}.$$

Therefore we can set

$$\beta = \mathop{\mathbb{E}}_{x\sim q}[r(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_x(x)} \geq 0.1 - \frac{c_4\eta}{20(c_4\eta + c_5\varepsilon)} \gtrsim 1,$$

as needed.
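The construction in this proof can be checked numerically. The sketch below (our own illustrative code, on a uniform discretization of $[0,1)$) draws an arbitrary spread-out $\lambda$ over thresholds, builds $q = (\mathcal{D}_x \mid \tau_{10} \leq x < \tau_{90})$, and verifies that $r \geq 0.1$ on the support of $q$ and that $\beta$ stays bounded away from 0 with the theorem's constants $c_4 = 300$, $c_5 = \frac{1}{10}$ and $\eta \ll \varepsilon$.

```python
import numpy as np

n = 1000                               # uniform grid over [0, 1); Dx uniform
rng = np.random.default_rng(1)
lam = rng.dirichlet(np.ones(n))        # a spread-out distribution over thresholds tau
cdf = np.cumsum(lam)                   # Pr[tau <= x] along the grid

# For thresholds, r(x) = min(Pr_lam[h(x)=1], Pr_lam[h(x)=0]) = min(cdf, 1 - cdf)
r = np.minimum(cdf, 1 - cdf)

# q = Dx conditioned on [tau_10, tau_90): every such x has cdf in [0.1, 0.9)
lo = int(np.searchsorted(cdf, 0.10))
hi = int(np.searchsorted(cdf, 0.90))
support = np.zeros(n, bool)
support[lo:hi] = True

# With eta = eps / c1 for large c1, the subtracted penalty term is tiny
eta = 1e-6
max_ratio = n / support.sum()          # max_x q(x) / Dx(x) for uniform Dx
beta = r[support].mean() - (300 / 20) * eta * max_ratio
```

Pointwise, every $x$ in the support has $r(x) \geq 0.1$, so $\beta = \Omega(1)$ as the proof claims.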

3 Proof of Lemma 2.1

Lemma 2.1 (Connection to OPT). Define $|h - h'| = \operatorname{Pr}_{x \sim \mathcal{D}_x}[h(x) \neq h'(x)]$. Let $\lambda$ be a distribution over $H$ such that no radius-$(2\eta + \varepsilon)$ ball $B$ centered on $h \in H$ has probability at least $80\%$. Let $m^* = m^*\left(H, \mathcal{D}_X, \eta, \varepsilon, \frac{99}{100}\right)$. Then there exists a query distribution $q$ over $\mathcal{X}$ with

$$\mathop{\mathbb{E}}_{x\sim q}[r(x)] - \frac{1}{10}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq \frac{9}{100\, m^*}.$$

Proof. WLOG, we assume that $\operatorname{Pr}_{h\sim \lambda}[h(x) = 0] \geq \operatorname{Pr}_{h\sim \lambda}[h(x) = 1]$ for every $x \in \mathcal{X}$. This means $r(x) = \mathbb{E}_{h\sim \lambda}[h(x)]$. This can be achieved by flipping all $h(x)$ and observations $y$ for all $x$ not satisfying this property; such an operation doesn't affect the lemma statement.

We will consider an adversary defined by a function $g: X \to [0,1]$. The adversary takes a hypothesis $h \in H$ and outputs a distribution over $y \in \{0,1\}^X$ such that $0 \leq y(x) \leq h(x)$ always, and $\operatorname{err}(h) = \mathbb{E}_{x,y}[h(x) - y(x)] \leq \eta$ always. For a hypothesis $h$, the adversary sets $y(x) = 0$ for all $x$ with $h(x) = 0$, and $y(x) = 0$ independently with probability $g(x)$ if $h(x) = 1$, unless $\mathbb{E}_x[h(x)g(x)] > \eta$, in which case the adversary instead simply outputs $y = h$ to ensure the expected error is at most $\eta$ always.

We consider the agnostic learning instance where $x \sim \mathcal{D}_x$ , $h \sim \lambda$ , and $y$ is given by this adversary. Let $\mathcal{A}$ be an $(\eta, \varepsilon)$ algorithm which uses $m$ measurements and succeeds with $99%$ probability. Then it must also succeed with $99%$ probability over this distribution. For the algorithm to succeed on a sample $h$ , its output $\widehat{h}$ must have $| h - \widehat{h} | \leq 2\eta + \varepsilon$ . By the bounded ball assumption, for any choice of adversary, no fixed output succeeds with more than $80%$ probability over $h \sim \lambda$ .

Now, let $\mathcal{A}_0$ be the behavior of $\mathcal{A}$ if it observes $y = 0$ for all its queries, rather than the truth; $\mathcal{A}_0$ is independent of the input. $\mathcal{A}_0$ has some distribution over $m$ queries and outputs some distribution over answers $\widehat{h}$. Let $q(x) = \frac{1}{m} \operatorname*{Pr}[\mathcal{A}_0 \text{ queries } x]$, so $q$ is a distribution over $\mathcal{X}$. Since $\mathcal{A}_0$ outputs a fixed distribution, by the bounded ball assumption, for $h \sim \lambda$ and arbitrary adversary function $g$,

$$\Pr_{h\sim\lambda}[\mathcal{A}_0 \text{ succeeds}] \leq 80\%.$$

But $\mathcal{A}$ behaves identically to $\mathcal{A}_0$ until it sees its first nonzero $y$ . Thus,

$$99\% \leq \Pr[\mathcal{A} \text{ succeeds}] \leq \Pr[\mathcal{A}_0 \text{ succeeds}] + \Pr[\mathcal{A} \text{ sees a non-zero } y]$$

and so

$$\Pr[\mathcal{A} \text{ sees a non-zero } y] \geq 19\%.$$

Since $\mathcal{A}$ behaves like $\mathcal{A}_0$ until the first nonzero, we have

$$\begin{aligned} 19\% &\leq \Pr[\mathcal{A} \text{ sees a non-zero } y] \\ &= \Pr[\mathcal{A}_0 \text{ makes a query } x \text{ with } y(x) = 1] \\ &\leq \mathbb{E}[\text{number of queries } x \text{ by } \mathcal{A}_0 \text{ with } y(x) = 1] \\ &= m \mathop{\mathbb{E}}_{h\sim\lambda} \mathop{\mathbb{E}}_{y} \mathop{\mathbb{E}}_{x\sim q}[y(x)]. \end{aligned}$$

As an initial note, observe that $\mathbb{E}_{h,y}[y(x)]\leq \mathbb{E}_h[h(x)] = r(x)$ so

$$\mathop{\mathbb{E}}_{x\sim q}[r(x)] \geq \frac{0.19}{m}.$$

Thus the lemma statement holds for $\eta = 0$ .

Handling $\eta > 0$ . Consider the behavior when the adversary's function $g: X \to [0,1]$ satisfies $\mathbb{E}_{x \sim \mathcal{D}_x}[g(x)r(x)] \leq \eta / 10$ . We denote the class of all adversary satisfying this condition as $G$ . We have that

$$\mathop{\mathbb{E}}_{h\sim\lambda}\left[\mathop{\mathbb{E}}_{x\sim\mathcal{D}_x}[g(x)h(x)]\right] = \mathop{\mathbb{E}}_{x\sim\mathcal{D}_x}[g(x)r(x)] \leq \eta/10.$$

Let $E_h$ denote the event that $\mathbb{E}_{x\sim\mathcal{D}_x}[g(x)h(x)] \leq \eta$, so $\operatorname*{Pr}[\overline{E}_h] \leq 10\%$ by Markov's inequality. Furthermore, the adversary is designed such that under $E_h$, $\mathbb{E}_y[y(x)] = h(x)(1 - g(x))$ for every $x$. Therefore:

$$\begin{aligned} 0.19 &\leq \Pr[\mathcal{A}_0 \text{ makes a query } x \text{ with } y(x) = 1] \\ &\leq \Pr[\overline{E}_h] + \Pr[\mathcal{A}_0 \text{ makes a query } x \text{ with } y(x) = 1 \cap E_h] \\ &\leq 0.1 + \mathbb{E}[\text{number of queries } x \text{ by } \mathcal{A}_0 \text{ with } y(x) = 1 \text{ and } E_h] \\ &= 0.1 + m \mathop{\mathbb{E}}_{h}\left[\mathbb{1}_{E_h} \mathop{\mathbb{E}}_{x\sim q}\left[\mathbb{E}_y\, y(x)\right]\right] \\ &= 0.1 + m \mathop{\mathbb{E}}_{h}\left[\mathbb{1}_{E_h} \mathop{\mathbb{E}}_{x\sim q}[h(x)(1 - g(x))]\right] \\ &\leq 0.1 + m \mathop{\mathbb{E}}_{x\sim q}[\mathbb{E}_h[h(x)](1 - g(x))] \\ &= 0.1 + m \mathop{\mathbb{E}}_{x\sim q}[r(x)(1 - g(x))]. \end{aligned}$$

Thus

$$\max_q \min_{g\in G} \mathop{\mathbb{E}}_{x\sim q}[r(x)(1 - g(x))] \geq \frac{9}{100m} \tag{4}$$

over all distributions $q$ and functions $g:X\to [0,1]$ satisfying $\mathbb{E}_{x\sim \mathcal{D}_x}[g(x)r(x)]\leq \eta /10$ . We now try to understand the structure of the $q,g$ optimizing the LHS of (4).

Let $g^*$ denote an optimizer of the objective. First, we show that the constraint is tight, i.e., $\mathbb{E}_{x\sim\mathcal{D}_x}[g^*(x)r(x)] = \eta/10$. Since increasing $g$ only decreases the objective, the only way this could not happen is if the maximum possible function, $g(x) = 1$ for all $x$, lies in $G$. But for this function, the LHS of (4) would be 0, which is a contradiction; hence we know increasing $g$ to improve the objective at some point hits the constraint, and hence $\mathbb{E}_{x\sim\mathcal{D}_x}[g^*(x)r(x)] = \eta/10$.

For any $q$ , define $\tau_q \geq 0$ to be the minimum threshold such that

$$\mathop{\mathbb{E}}_{x\sim\mathcal{D}_x}\left[r(x)\cdot 1_{\frac{q(x)}{\mathcal{D}_X(x)} > \tau_q}\right] < \eta/10,$$

and define $g_{q}$ by

$$g_q(x) := \begin{cases} 1 & \frac{q(x)}{\mathcal{D}_X(x)} > \tau_q \\ \alpha & \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q \\ 0 & \frac{q(x)}{\mathcal{D}_X(x)} < \tau_q \end{cases}$$

where $\alpha \in [0,1]$ is chosen such that $\mathbb{E}_{x\sim\mathcal{D}_x}[r(x)g_q(x)] = \eta/10$; such an $\alpha$ always exists by the choice of $\tau_q$.
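The water-filling structure of $g_q$ (all mass on the largest $q/\mathcal{D}_X$ ratios, one fractional point at the threshold $\tau_q$) can be sketched numerically. The greedy routine below is our own illustrative implementation on a finite domain; the function name and the assumption $\mathbb{E}_{\mathcal{D}_x}[r] \geq \eta/10$ are ours.

```python
import numpy as np

def adversary_g(q, Dx, r, eta):
    """Greedy water-filling sketch of g_q (assumes E_{Dx}[r] >= eta / 10).

    Sets g = 1 on the points with the largest q/Dx ratio, a fractional value
    at the threshold tau_q, and 0 below, so E_{x~Dx}[r(x) g(x)] = eta / 10."""
    g = np.zeros_like(q)
    budget = eta / 10
    for i in np.argsort(-q / Dx):          # points by decreasing q(x) / Dx(x)
        cost = Dx[i] * r[i]                # contribution of g[i] = 1 to E_Dx[r g]
        if cost <= budget:
            g[i] = 1.0
            budget -= cost
        else:
            g[i] = budget / cost           # the fractional "alpha" at tau_q
            break
    return g
```

By construction the output saturates the adversary's budget exactly and has at most one fractional coordinate, matching the thresholded form of $g_q$.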

For any $q$ , we claim that the optimal $g^{*}$ in the LHS of (4) is $g_{q}$ . It needs to maximize

$$\mathop{\mathbb{E}}_{x\sim\mathcal{D}_X}\left[\frac{q(x)}{\mathcal{D}_X(x)}\, r(x)\, g(x)\right]$$

subject to a constraint on $\mathbb{E}_{x\sim\mathcal{D}_X}[r(x)g(x)]$; therefore moving mass to points of larger $\frac{q(x)}{\mathcal{D}_X(x)}$ is always an improvement, and $g_q$ is optimal.

We now claim that the $q$ maximizing (4) has $\max_x \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q$. If not, some $x'$ has $\frac{q(x')}{\mathcal{D}_X(x')} > \tau_q$. Then $g_q(x') = 1$, so the $x'$ entry contributes nothing to $\mathbb{E}_{x\sim q}[r(x)(1 - g_q(x))]$; thus decreasing $q(x')$ so its ratio moves halfway towards $\tau_q$ (which wouldn't change $g_q$), and adding the savings uniformly across all $q(x)$ (which also doesn't change $g_q$), would increase the objective.

So there exists a $q$ satisfying (4) for which $\operatorname*{Pr}\left[\frac{q(x)}{\mathcal{D}_X(x)} > \tau_q\right] = 0$, and therefore the set $T = \left\{x \mid \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q\right\}$ satisfies $\mathbb{E}_{\mathcal{D}_X}[r(x)\mathbb{1}_{x\in T}] \geq \eta/10$ and a $g_q$ minimizing (4) is

$$g_q(x) = \frac{\eta}{10} \frac{\mathbb{1}_{x\in T}}{\mathbb{E}_{\mathcal{D}_X}[r(x)\mathbb{1}_{x\in T}]}.$$

Therefore

$$\mathop{\mathbb{E}}_{x\sim q}[r(x) g_q(x)] = \mathop{\mathbb{E}}_{x\sim\mathcal{D}_X}\left[\frac{q(x)}{\mathcal{D}_X(x)}\, r(x)\, \frac{\eta}{10} \frac{\mathbb{1}_{x\in T}}{\mathbb{E}_{\mathcal{D}_X}[r(x)\mathbb{1}_{x\in T}]}\right] \leq \frac{\eta}{10}\max_x\frac{q(x)}{\mathcal{D}_X(x)}$$

and so by (4),

$$\mathop{\mathbb{E}}_{x\sim q}[r(x)] - \frac{\eta}{10}\max_x\frac{q(x)}{\mathcal{D}_X(x)} \geq \frac{9}{100m}$$

as desired.

4 Conclusion

We have given an algorithm that solves agnostic active learning with (for constant $\delta$) at most an $O(\log |H|)$ factor more queries than the optimal algorithm. It is NP-hard to improve upon this $O(\log |H|)$ factor in general, but for specific cases it can be avoided. We have shown that 1d threshold functions, i.e. binary search with adversarial noise, is one such example, where our algorithm matches the performance of disagreement coefficient-based methods and is within a $\log\log\frac{1}{\varepsilon}$ factor of optimal.

5 Acknowledgments

Yihan Zhou and Eric Price were supported by NSF awards CCF-2008868, CCF-1751040 (CAREER), and the NSF AI Institute for Foundations of Machine Learning (IFML).

References

Mohammad Azad, Igor Chikalov, Shahid Hussain, Mikhail Moshkov, and Beata Zielosko. Decision Trees with Hypotheses. Springer International Publishing, 2022. doi: 10.1007/978-3-031-08585-7. URL https://doi.org/10.1007/978-3-031-08585-7.
Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd international conference on Machine learning, pages 65-72, 2006.
Michael Ben-Or and Avinatan Hassidim. The bayesian learner is optimal for noisy binary search (and pretty good for quantum as well). In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, pages 221-230, 2008. doi: 10.1109/FOCS.2008.58.

Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance weighted active learning. In Proceedings of the 26th annual international conference on machine learning, pages 49-56, 2009.
Alina Beygelzimer, Daniel J Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. Advances in neural information processing systems, 23, 2010.
Marat Valievich Burnashev and Kamil' Shamil'evich Zigangirov. An interval estimation problem for controlled observations. Problemy Peredachi Informatsii, 10(3):51-61, 1974.
David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine learning, 15:201-221, 1994.
Sanjoy Dasgupta. Analysis of a greedy active learning strategy. Advances in neural information processing systems, 17, 2004.
Sanjoy Dasgupta. Coarse sample complexity bounds for active learning. Advances in neural information processing systems, 18, 2005.
Sanjoy Dasgupta, Daniel J Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. Advances in neural information processing systems, 20, 2007.
Dariusz Dereniowski, Aleksander Lukasiewicz, and Przemyslaw Uznanski. Noisy searching: simple, fast and correct. CoRR, abs/2107.05753, 2021. URL https://arxiv.org/abs/2107.05753.
Irit Dinur and David Steurer. Analytical approach to parallel repetition. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 624-633, 2014.
Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th international conference on Machine learning, pages 353-360, 2007a.
Steve Hanneke. Teaching dimension and the complexity of active learning. In International conference on computational learning theory, pages 66-81. Springer, 2007b.
Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends® in Machine Learning, 7(2-3):131-309, 2014.
Steve Hanneke and Liu Yang. Minimax analysis of active learning. J. Mach. Learn. Res., 16(1): 3487-3602, 2015.
Tibor Hegedus. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the eighth annual conference on Computational learning theory, pages 108-117, 1995.
Matti Kääräinen. Active learning in the non-realizable case. In Algorithmic Learning Theory: 17th International Conference, ALT 2006, Barcelona, Spain, October 7-10, 2006. Proceedings 17, pages 63-77. Springer, 2006.
Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, page 881-890, USA, 2007. Society for Industrial and Applied Mathematics. ISBN 9780898716245.
Julian Katz-Samuels, Jifan Zhang, Lalit Jain, and Kevin Jamieson. Improved algorithms for agnostic pool-based active classification. In International Conference on Machine Learning, pages 5334-5344. PMLR, 2021.
S Rao Kosaraju, Teresa M Przytycka, and Ryan Borgstrom. On an optimal split tree problem. In Algorithms and Data Structures: 6th International Workshop, WADS'99 Vancouver, Canada, August 11-14, 1999 Proceedings, pages 157-168. Springer, 2002.
Laurent Hyafil and Ronald L Rivest. Constructing optimal binary decision trees is NP-complete. Information processing letters, 5(1):15-17, 1976.
David D Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. In Acm Sigir Forum, volume 29, pages 13-19. ACM New York, NY, USA, 1995.

David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine learning proceedings 1994, pages 148-156. Elsevier, 1994.
Robert Nowak. Generalized binary search. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pages 568-574. IEEE, 2008.
Robert D Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893-7906, 2011.
Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
Joel Tropp. Freedman's inequality for matrix martingales. Electronic Communications in Probability, 16:262-270, 2011.
Roman Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018.

A Query Complexity Upper Bound

In this section we present the full proof of the query complexity upper bound for Algorithm 1, as stated in Theorem 1.1.

A.1 Notation

We first remind the reader of some definitions. Recall that $w_i(h)$ denotes the weight of hypothesis $h$ in iteration $i$, and for $S \subseteq H$, $\lambda_{i,S}(h) = \frac{w_i(h)}{\sum_{h' \in S} w_i(h')}$ denotes the proportion of $h$'s weight within $S$. We view $\lambda_{i,S}$ as a distribution over hypotheses in $S$, so for $h \notin S$, $\lambda_{i,S}(h) = 0$. For a set $S \subseteq H$ of hypotheses, we define $w_i(S) := \sum_{h\in S} w_i(h)$ and $\lambda_i(h) = \lambda_{i,H}(h)$.

Define $r_{\lambda,h^*}(x) \coloneqq \operatorname{Pr}_{h\sim\lambda}[h(x) \neq h^*(x)]$, and $r_\lambda(x) = \min_{y\in\{0,1\}} \operatorname{Pr}_{h\sim\lambda}[h(x) \neq y]$, so $r_\lambda(x) = \min(r_{\lambda,h^*}(x), 1 - r_{\lambda,h^*}(x))$.

Define

$$\bar\lambda_{i,S}(h) := \frac{1}{2}\lambda_i(h) + \frac{1}{2}\lambda_{i,H\setminus S}(h) = \begin{cases} \frac{1}{2}\lambda_i(h) & h \in S \\ \lambda_i(h)\cdot\frac{1 - \frac{1}{2}\Pr_{h'\sim\lambda_i}[h' \in S]}{1 - \Pr_{h'\sim\lambda_i}[h' \in S]} & h \notin S \end{cases} \tag{5}$$

as the "capped" distribution in iteration $i$ .
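The two sides of (5) can be checked against each other numerically. The snippet below (illustrative code, not from the paper) compares the defining mixture with the closed form on a random distribution.

```python
import numpy as np

def capped_mixture(lam, in_S):
    """Defining mixture: (1/2) lam + (1/2) (lam conditioned on H \\ S)."""
    out = lam * ~in_S
    return 0.5 * lam + 0.5 * out / out.sum()

def capped_closed_form(lam, in_S):
    """Case-by-case closed form on the right-hand side of (5)."""
    pS = lam[in_S].sum()                 # Pr_{h ~ lam}[h in S]
    return np.where(in_S, 0.5 * lam, lam * (1 - 0.5 * pS) / (1 - pS))

rng = np.random.default_rng(3)
lam = rng.dirichlet(np.ones(10))         # arbitrary distribution over 10 hypotheses
in_S = np.array([True] * 4 + [False] * 6)  # cap the first four
```

Both forms agree coordinate-wise and sum to 1, confirming the algebra behind (5).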

Finally, for notational convenience define $r_{i,S} \coloneqq r_{\lambda_{i,S}}$, $r_{i,S,h} \coloneqq r_{\lambda_{i,S},h}$ and $\bar{r}_{i,S} \coloneqq r_{\bar\lambda_{i,S}}$.

The main focus of our proof is analyzing the potential function

$$\phi_i(h^*) = \begin{cases} \log\lambda_i(h^*) + \log\lambda_{i,H\setminus S_i}(h^*) & h^* \notin S_i \\ 0 & h^* \in S_i, \end{cases}$$

where $h^*$ is the best hypothesis in $H$. We would like to show that $\phi_{i+1}(h^*) - \phi_i(h^*)$ grows at a proper rate in each iteration. We pick $S_i$ to be an expanding series of sets, i.e., $S_i \subseteq S_{i+1}$ for any $i \geq 1$. However, the change of the "capped" set $S_i$ makes this task challenging. Therefore, we instead analyze the following quantity defined as

$$\Delta_i(h^*) := \begin{cases} \log\frac{\lambda_{i+1}(h^*)}{\lambda_i(h^*)} + \log\frac{\lambda_{i+1,H\setminus S_i}(h^*)}{\lambda_{i,H\setminus S_i}(h^*)} & h^* \notin S_i \\ 0 & h^* \in S_i, \end{cases}$$

and $\phi_{i+1}(h^*) - \phi_i(h^*) = \Delta_i(h^*) + \log\frac{\lambda_{i+1,H\setminus S_{i+1}}(h^*)}{\lambda_{i+1,H\setminus S_i}(h^*)}$ if $h^* \notin S_{i+1}$. Further, we define $\psi_k(h^*) \coloneqq \sum_{i<k}\Delta_i(h^*)$, so by definition $\phi_k(h^*) = \phi_0(h^*) + \psi_k(h^*) + \sum_{i<k}\log\frac{\lambda_{i+1,H\setminus S_{i+1}}(h^*)}{\lambda_{i+1,H\setminus S_i}(h^*)}$ if $h^* \notin S_k$. In the following text, we will drop the parameter $h^*$ when the context is clear and just use $\phi_i, \Delta_i$ and $\psi_i$ instead.

A.2 Potential Growth

We will lower bound the conditional per iteration potential increase by first introducing a lemma that relates the potential change to the optimization problem (3).

Lemma A.1. Assume that $\mathrm{err}(h^*) \leq \eta$. Then for any set $S$ of hypotheses containing $h^*$ and query distribution $q$, we have

$$\mathbb{E}\left[\log\frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\bigg|\, \mathcal{F}_i\right] \geq 0.9\alpha\left(\mathop{\mathbb{E}}_{x\sim q}[r_{i,S,h^*}(x)] - 2.3\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right)$$

for $\alpha \leq 0.2$ . Moreover,

$$\mathbb{E}\left[\max\left\{0, \log\frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)}\right\} \,\bigg|\, \mathcal{F}_i\right] \leq \alpha \mathop{\mathbb{E}}_{x\sim q}[r_{i,S,h^*}(x)].$$

Proof. For notational convenience, define $\widetilde{r}(x) \coloneqq r_{i,S,h^*}(x)$ .

Observe that

$$\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = \frac{w_i(h^*)}{w_{i+1}(h^*)} \cdot \frac{\sum_{h\in S} w_{i+1}(h)}{\sum_{h\in S} w_i(h)} = \frac{w_i(h^*)}{w_{i+1}(h^*)} \mathop{\mathbb{E}}_{h\sim\lambda_{i,S}}\left[\frac{w_{i+1}(h)}{w_i(h)}\right].$$

Let $p(x) = \operatorname*{Pr}_{y\sim (Y|X)}[y\neq h^{*}(x)]$ denote the probability of error if we query $x$ , so

$$\mathop{\mathbb{E}}_{x\sim\mathcal{D}_X}[p(x)] \leq \eta.$$

Suppose we query a point $x$ and do not get an error. Then the hypotheses that disagree with $h^*$ are downweighted by an $e^{-\alpha}$ factor, so

$$\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = \mathop{\mathbb{E}}_{h\sim\lambda_{i,S}}[1 + (e^{-\alpha} - 1) 1_{h(x)\neq h^*(x)}] = 1 - (1 - e^{-\alpha})\widetilde{r}(x).$$

On the other hand, if we do get an error then the disagreeing hypotheses are effectively upweighted by $e^{\alpha}$ :

$$\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = 1 + (e^{\alpha} - 1)\widetilde{r}(x).$$

Therefore

$$\begin{aligned} \mathop{\mathbb{E}}_{y|x}\left[\log\frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\bigg|\, \mathcal{F}_i\right] &= -(1 - p(x))\log\left(1 - (1 - e^{-\alpha})\widetilde{r}(x)\right) - p(x)\log\left(1 + (e^{\alpha} - 1)\widetilde{r}(x)\right) \\ &\geq (1 - p(x))(1 - e^{-\alpha})\widetilde{r}(x) - p(x)(e^{\alpha} - 1)\widetilde{r}(x) \\ &= (1 - e^{-\alpha})\widetilde{r}(x) - p(x)\widetilde{r}(x)(e^{\alpha} - e^{-\alpha}). \end{aligned} \tag{6}$$

Using that $\widetilde{r}(x) \leq 1$ , we have

$$\begin{aligned} \mathbb{E}\left[\log\frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\bigg|\, \mathcal{F}_i\right] &\geq (1 - e^{-\alpha})\mathop{\mathbb{E}}_{x\sim q}[\widetilde{r}(x)] - (e^{\alpha} - e^{-\alpha})\mathop{\mathbb{E}}_{x\sim q}[p(x)] \\ &\geq 0.9\alpha \mathop{\mathbb{E}}_{x\sim q}[\widetilde{r}(x) - 2.3\, p(x)], \end{aligned}$$

where the last step uses $\alpha \leq 0.2$ . Finally,

$$\mathop{\mathbb{E}}_{x\sim q}[p(x)] = \mathop{\mathbb{E}}_{x\sim\mathcal{D}_X}\left[p(x)\frac{q(x)}{\mathcal{D}_X(x)}\right] \leq \eta \max_x\frac{q(x)}{\mathcal{D}_X(x)}.$$

This proves the first desired result. For the second, note that if we query $x$ , then conditioned on $\mathcal{F}_i$

$$\max\left\{0, \log\frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)}\right\} = \begin{cases} 0 & \text{with probability } p(x), \\ \log(1 + (1 - e^{-\alpha})\widetilde{r}(x)) & \text{otherwise}. \end{cases}$$

Since $\log (1 + (1 - e^{-\alpha})\widetilde{r} (x))\leq (1 - e^{-\alpha})\widetilde{r} (x)\leq \alpha \widetilde{r} (x)$ , taking the expectation over $x$ gives the result.
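The elementary inequalities behind the constants in Lemma A.1 can be checked numerically. The snippet below (our own sanity check, not part of the paper) verifies $1 - e^{-\alpha} \geq 0.9\alpha$ and $e^{\alpha} - e^{-\alpha} \leq 2.3 \cdot 0.9\alpha$ for $\alpha \leq 0.2$, as well as $\log(1+u) \leq u$.

```python
import numpy as np

# Check the bounds behind the 0.9*alpha and 2.3 constants on a fine grid.
alphas = np.linspace(1e-6, 0.2, 2001)
lower_ok = bool(np.all(1 - np.exp(-alphas) >= 0.9 * alphas))   # (1 - e^-a) >= 0.9 a
upper_ok = bool(np.all(np.exp(alphas) - np.exp(-alphas)
                       <= 2.3 * 0.9 * alphas))                 # e^a - e^-a <= 2.07 a
us = np.linspace(0, 1, 1001)
log_ok = bool(np.all(np.log1p(us) <= us + 1e-12))              # log(1 + u) <= u
```

Both constants are nearly tight at $\alpha = 0.2$, which is why the lemma restricts to $\alpha \leq 0.2$.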

Lemma A.1, combined with Lemma 2.1, proves that the potential grows at the desired rate in each iteration. But recall that Lemma 2.1 requires the condition that no ball has probability greater than $80\%$, so we need to check that this condition is satisfied. The following lemma shows that if we cap the set $S_i$, then the probability is not concentrated on any small ball.

Lemma A.2. In Algorithm 1, for every iteration $i$, $S_i$ is such that no radius-$(c_4\eta + c_5\varepsilon)$ ball has more than $80\%$ probability under $\overline{\lambda}_{i,S_i}$.

Proof. If $S_{i} = S_{i - 1}$ , then by the construction of $S_{i}$ , there are no radius $c_{4}\eta +c_{5}\varepsilon$ balls have probability greater than $80%$ under $\overline{\lambda}{i,S{i - 1}} = \overline{\lambda}{i,S_i}$ . Otherwise, we have $S{i - 1}\neq S_{i}$ and a ball $B(\mu ,3c_4\eta +3c_5\varepsilon)$ is added to $S_{i}$ in this iteration. We first prove a useful claim below.

Claim A.3. If a ball $B' = B(\mu, 3c_4\eta + 3c_5\varepsilon)$ is added to $S_i$ at some iteration $i$, then $\lambda_i(B(\mu, c_4\eta + c_5\varepsilon)) \geq 0.6$.

Proof. If $B'$ is added to $S_i$ at iteration $i$, then there exists some ball $D$ with radius $c_4\eta + c_5\varepsilon$ such that $\bar\lambda_{i,S_{i-1}}(D) \geq 0.8$. If a set of hypotheses gains probability after capping, the gained probability comes from the reduced probability of the other hypotheses not in this set; therefore, the probability gained by any set is upper bounded by half the probability of that set's complement before capping. This means $\lambda_i(D) \geq 0.6$, because otherwise after capping $\bar\lambda_{i,S_{i-1}}(D) < 0.8$, a contradiction. As a result, $\lambda_i(B(\mu, c_4\eta + c_5\varepsilon)) \geq \lambda_i(D) \geq 0.6$.

By Claim A.3, the probability of $B(\mu, c_4\eta + c_5\varepsilon)$ is at least 0.6 under the uncapped distribution $\lambda_i$. So any ball not intersecting $B(\mu, c_4\eta + c_5\varepsilon)$ has probability at most 0.4 before capping, and hence at most 0.7 after capping. At the same time, any ball intersecting $B(\mu, c_4\eta + c_5\varepsilon)$ is completely contained in $B(\mu, 3c_4\eta + 3c_5\varepsilon)$, so its probability is at most 0.5 after capping.

Now we are ready to apply Lemma A.1 and Lemma 2.1, with one caveat. Recall that at the beginning of the algorithm, we compute a $2\eta$-packing $H' \subseteq H$ of the instance. From the well-known relationship between packing and covering (see, for example, Vershynin [2018, Lemma 4.2.8]), we have $|H'| \leq N(H, \eta)$. Every hypothesis in $H$ is within $2\eta$ of some hypothesis in $H'$, so there exists a hypothesis in $H'$ with error less than $3\eta$. This means that the best hypothesis $h^* \in H'$ has error at most $3\eta$ instead of $\eta$. The following lemma serves as the cornerstone of the proof of the query complexity upper bound: it states that the potential grows at rate $\Omega\left(\frac{1}{m^*}\right)$ in each iteration.
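A maximal $2\eta$-packing can be built greedily. The sketch below (our own illustrative construction, with a hypothetical function name) also checks the packing-implies-covering property used above: by maximality, every hypothesis lands within the separation distance of some kept hypothesis.

```python
import numpy as np

def greedy_packing(H, Dx, sep):
    """Greedy maximal sep-packing: kept hypotheses are pairwise > sep apart,
    and by maximality every hypothesis is within sep of a kept one."""
    kept = []
    for i in range(len(H)):
        if all(Dx @ (H[i] != H[j]) > sep for j in kept):
            kept.append(i)
    return kept

# 1d thresholds over a uniform grid of 20 points: h_t(x) = 1[x >= t]
H = np.array([[1 if x >= t else 0 for x in range(20)] for t in range(21)])
Dx = np.full(20, 1 / 20)
kept = greedy_packing(H, Dx, sep=0.1)    # sep = 2*eta with eta = 0.05
```

Here adjacent thresholds are 0.05 apart, so the greedy pass keeps every third threshold, and every hypothesis is within 0.1 of a kept one.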

Lemma A.4. Given $c_{4} \geq 300$ and $\mathrm{err}(h^{*}) \leq 3\eta$ , there exists a sampling distribution $q$ such that

$$\mathbb{E}[\Delta_i \mid \mathcal{F}_i] \geq \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{\alpha}{m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)} \quad \text{if } h^* \notin S_i,$$

as well as $|\Delta_i| \leq \alpha$ always and $\operatorname{Var}[\Delta_i \mid \mathcal{F}_i] \leq \alpha\, \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] \lesssim \alpha\, \mathbb{E}[\Delta_i \mid \mathcal{F}_i]$.

Proof. For the sake of bookkeeping, we let $m^* = m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)$ in this proof and the following text. We first bound the expectation. By Lemma A.1 applied to $S \in \{H, H\setminus S_i\}$ with error bound $3\eta$, we have

$$\mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \geq 0.9\alpha\left(\mathop{\mathbb{E}}_{x\sim q}\left[r_{i,H,h^*}(x) + r_{i,H\setminus S_i,h^*}(x)\right] - 13.8\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)},$$

where $q$ is the query distribution of the algorithm at iteration $i$ . Now, by the definition of

$$\overline{\lambda}_{i,S} = \frac{1}{2}\lambda_i + \frac{1}{2}\lambda_{i,H\setminus S},$$

we have for any $x$ that

$$\bar{r}_{i,S_i,h^*}(x) = \frac{1}{2}\left(r_{i,h^*}(x) + r_{i,H\setminus S_i,h^*}(x)\right)$$

and thus

$$\begin{aligned} \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} &\geq 1.8\alpha\left(\mathop{\mathbb{E}}_{x\sim q}\left[\bar{r}_{i,S_i,h^*}(x)\right] - 6.9\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \\ &\geq 1.8\alpha\left(\mathop{\mathbb{E}}_{x\sim q}\left[\overline{r}_{i,S_i}(x)\right] - 8.1\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right). \end{aligned} \tag{7}$$

Algorithm 1 chooses the sampling distribution $q$ to maximize $\mathbb{E}_{x\sim q}[\overline{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \leq \mathbb{E}_{x\sim q}[\overline{r}_{i,S_i}(x)] - 15\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}$ because $c_4 \geq 300$. By Lemma A.2, $\overline{\lambda}_{i,S_i}$ over $H'$ has no radius-$(c_4\eta + c_5\varepsilon)$ ball with probability larger than $80\%$, so by Lemma 2.1, $q$ satisfies

$$\mathop{\mathbb{E}}_{x\sim q}[\overline{r}_{i,S_i}(x)] - 15\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \geq \mathop{\mathbb{E}}_{x\sim q}[\overline{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{1}{m^*\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)}.$$

Because $H' \subseteq H$ is a maximal $2\eta$-packing, every hypothesis in $H$ is within $2\eta$ of some hypothesis in $H'$. The problem $\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)$ is harder than the problem $\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)$: we can reduce the latter to the former by simply adding more hypotheses, solving the resulting instance, and then mapping the solution back by returning the closest hypothesis in $H'$. Hence, $m^* \geq m^*\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)$. Therefore,

\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \geq 1.8\alpha\left(\mathbb{E}_{x\sim q}\left[\bar{r}_{i,S_i}(x)\right] - 8.1\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) \gtrsim \frac{\alpha}{m^*}.

We now bound the variance. The value of $\Delta_{i}$ may be positive or negative, but it is bounded by $|\Delta_i| \leq \alpha$ . Thus

\operatorname{Var}\left[\Delta_i \mid \mathcal{F}_i\right] \leq \mathbb{E}\left[\Delta_i^2 \mid \mathcal{F}_i\right] \leq \alpha\,\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right].

By Lemma A.1 and (7) we have

\begin{array}{l}
\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] = \mathbb{E}\left[2\max\{\Delta_i, 0\} - \Delta_i \mid \mathcal{F}_i\right] \\
\leq 4\alpha\underset{x\sim q}{\mathbb{E}}\left[\bar{r}_{i,S_i}(x)\right] - 1.8\alpha\left(\underset{x\sim q}{\mathbb{E}}\left[\bar{r}_{i,S_i}(x)\right] - 8.1\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) \\
\leq 2.2\alpha\left(\underset{x\sim q}{\mathbb{E}}\left[\bar{r}_{i,S_i,h^*}(x)\right] + 6.7\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) \\
\leq \frac{2.2\alpha}{1.8\alpha}\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right] + 2.2\alpha\cdot 6.9\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} + 2.2\alpha\cdot 6.7\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \\
\leq 1.3\,\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right] + 30\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}.
\end{array}

Since $\mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{\alpha}{m^*} \geq 0$, we have

\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \leq \frac{1}{2\alpha}\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right],

and thus

\operatorname{Var}\left[\Delta_i \mid \mathcal{F}_i\right] \leq \alpha\,\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] \lesssim \alpha\,\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right].

A.3 Concentration of potential

We have shown that the potential grows at rate $\Omega\left(\frac{1}{m^*}\right)$ per iteration, but only in expectation, while our goal is to obtain a high probability bound. Let $\mu_k \coloneqq \sum_{i<k}\mathbb{E}[\Delta_i \mid \mathcal{F}_{i-1}] \gtrsim k/m^*$; then

\begin{array}{l}
\mathbb{E}\left[(\psi_k - \mu_k) - (\psi_{k-1} - \mu_{k-1}) \mid \mathcal{F}_{k-1}\right] = \mathbb{E}\left[\psi_k - \psi_{k-1} \mid \mathcal{F}_{k-1}\right] - (\mu_k - \mu_{k-1}) \\
= \mathbb{E}\left[\Delta_{k-1} \mid \mathcal{F}_{k-1}\right] - \mathbb{E}\left[\Delta_{k-1} \mid \mathcal{F}_{k-1}\right] \geq 0.
\end{array}

Moreover, $\psi_k - \mu_k$ has bounded increments, and by the computation above it is a submartingale. To show a high probability bound, we will use Freedman's inequality. Tropp [2011] states a version for martingales; we use the following variant for submartingales, which follows from the martingale version via the Doob decomposition (up to constants).

Theorem A.5 (Freedman's Inequality). Consider a real-valued submartingale $\{Y_k : k = 0,1,2,\dots\}$ that is adapted to the filtration $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \dots \subseteq \mathcal{F}$ with difference sequence $\{X_k : k = 1,2,3,\dots\}$. Assume that the difference sequence is uniformly bounded:

|X_k| \leq R \quad \text{almost surely for } k = 1, 2, 3, \dots

Define the predictable quadratic variation process:

W_k \coloneqq \sum_{j=1}^{k} \mathbb{E}\left[X_j^2 \mid \mathcal{F}_{j-1}\right] \quad \text{for } k = 1, 2, 3, \dots

Then, for all $t \geq 0$ and $\sigma^2 > 0$,

\Pr\left(\exists k \geq 0 : Y_k \leq -t \text{ and } W_k \leq \sigma^2\right) \leq \exp\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right).

Then we can prove a high probability bound as follows.

Lemma A.6. With probability $1 - \delta$, $\phi_i = 0$ for some $i = O\left(m^*\log\frac{|H|}{\delta}\right)$, so $h^* \in S_i$.

Proof. Recall that

\phi_k = \phi_0 + \psi_k + \sum_{i<k}\log\frac{\lambda_{i,H\setminus S_{i+1}}(h^*)}{\lambda_{i,H\setminus S_i}(h^*)}.

Since $S_{i+1} \supseteq S_i$ for all $i$, we have $\lambda_{i,H\setminus S_{i+1}}(h^*) \geq \lambda_{i,H\setminus S_i}(h^*)$ whenever $h^* \notin S_{i+1}$, and therefore

\phi_k \geq \phi_0 + \psi_k \quad \text{if } h^* \notin S_k.

Let $K = O\left(m^*\log\frac{|H|}{\delta}\right)$. Assume for contradiction that $\phi_K < 0$; then $h^* \notin S_i$ for all $i \leq K$. We know by Lemma A.4 that

\mu_k \coloneqq \sum_{i<k}\mathbb{E}\left[\Delta_i \mid \mathcal{F}_{i-1}\right] \gtrsim \frac{k}{m^*}

and that $\sum_{i<k}\operatorname{Var}\left[\Delta_i\right] \leq \frac{1}{4}\mu_k$ by picking $\alpha$ small enough. Moreover, $|\Delta_i| \leq \alpha$ always. To apply Freedman's inequality, we set the right-hand side to be at most $\delta$:

\exp\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right) \leq \delta.

Solving the above quadratic inequality, it suffices to take $t \geq \frac{R}{3}\log\frac{1}{\delta} + \sqrt{\frac{R^2}{9}\log^2\frac{1}{\delta} + 2\sigma^2\log\frac{1}{\delta}}$. Substituting $R = \alpha$ and $\sigma^2 = \sum_{i<k}\operatorname{Var}_{i-1}(\Delta_i)$, with probability $1 - \delta$ we have for any $k > O\left(m^*\log\frac{1}{\delta}\right)$ that

\begin{array}{l}
\psi_k \geq \mu_k - \sqrt{\frac{\alpha^2}{9}\log^2\frac{1}{\delta} + 2\sum_{i<k}\operatorname{Var}(\Delta_i)\log\frac{1}{\delta}} - \frac{\alpha}{3}\log\frac{1}{\delta} \\
\geq \mu_k - \sqrt{\frac{\alpha^2}{9}\log^2\frac{1}{\delta} + \frac{1}{2}\mu_k\log\frac{1}{\delta}} - \frac{\alpha}{3}\log\frac{1}{\delta} \\
\geq \mu_k - \max\left\{\frac{\sqrt{2}\alpha}{3}\log\frac{1}{\delta}, \sqrt{\mu_k\log\frac{1}{\delta}}\right\} - \frac{\alpha}{3}\log\frac{1}{\delta} \\
\geq \frac{1}{2}\mu_k \\
\gtrsim \frac{k}{m^*}.
\end{array}

The second-to-last inequality holds because the $\mu_k$ term dominates the subtracted terms once $k \gtrsim m^*\log\frac{1}{\delta}$. Since $K = O\left(m^*\log\frac{|H|}{\delta}\right)$, we have

\psi_K \geq 2\log|H|

with $1 - \delta$ probability. Then, if $h^* \notin S_K$, $\phi_K \geq \phi_0 + \psi_K \geq 0$, because $\phi_0 \geq \log\frac{1}{2|H|} \geq -2\log|H|$; this contradicts the assumption that $\phi_K < 0$. Therefore, with probability at least $1 - \delta$, $h^* \in S_K$, and by definition $\phi_i = 0$ for some $i \leq K$, as desired.
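As a quick sanity check on the Freedman threshold used in the proof above, the following snippet verifies numerically that $t = \frac{R}{3}\log\frac{1}{\delta} + \sqrt{\frac{R^2}{9}\log^2\frac{1}{\delta} + 2\sigma^2\log\frac{1}{\delta}}$ makes the right-hand side of Freedman's inequality exactly $\delta$. The values of $R$, $\sigma^2$, and $\delta$ below are arbitrary illustrative choices, not quantities from the paper.

```python
import math

def freedman_rhs(t, sigma2, R):
    # Right-hand side of Freedman's inequality: exp(-(t^2/2) / (sigma^2 + R t / 3)).
    return math.exp(-(t * t / 2) / (sigma2 + R * t / 3))

def threshold(sigma2, R, delta):
    # Positive root of t^2/2 = (sigma^2 + R t / 3) * log(1/delta).
    L = math.log(1 / delta)
    return R * L / 3 + math.sqrt(R * R * L * L / 9 + 2 * sigma2 * L)

# Illustrative parameters (assumptions, not from the paper).
R, sigma2, delta = 0.1, 2.0, 0.01
t = threshold(sigma2, R, delta)
assert abs(freedman_rhs(t, sigma2, R) - delta) < 1e-9  # equality at the root
assert freedman_rhs(2 * t, sigma2, R) < delta          # smaller beyond the root
```

Since the exponent $\frac{t^2/2}{\sigma^2 + Rt/3}$ is increasing in $t$, any larger $t$ only makes the failure probability smaller, which is what the proof uses.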

A.4 Bounding the Size of $|C|$

So far we've shown that after $O\left(m^*\log\frac{|H|}{\delta}\right)$ iterations, $h^*$ will be included in the set $S_i$. The last thing we need to prove Theorem 1.1 is that with high probability, $C$ is small, which is equivalent to showing that not many balls are added to $S_i$ after $O\left(m^*\log\frac{|H|}{\delta}\right)$ iterations. To show this, we first need to relate the number of balls added to $S_i$ to $\psi_i$. Let $\mathcal{E}_i$ denote the number of errors $h^*$ made up to iteration $i$ (and set $\mathcal{E}_i = \mathcal{E}_{i-1}$ if $h^* \in S_i$) and $\mathcal{N}_i$ denote the number of balls added to $S_i$ up to iteration $i$ (again, set $\mathcal{N}_i = \mathcal{N}_{i-1}$ if $h^* \in S_i$).

Lemma A.7. The following inequality holds for every $i$ :

\mathcal{N}_i \leq 5(\psi_i + 2\alpha\mathcal{E}_i) + 1.

Proof. We divide the $i$ iterations into phases. A new phase begins and an old phase ends if at this iteration a new ball is added to the set $S_{i}$ . We use $p_1, \ldots, p_k$ for $k \leq i$ to denote phases and $i_1, \ldots, i_k$ to denote the starting iteration of the phases. We analyse how the potential changes from the phase $p_j$ to the phase $p_{j+1}$ . Let's say the ball $B_2 = (\mu_2, 3c_4\eta + 3c_5\varepsilon)$ is added at the beginning of $p_{j+1}$ and $B_1 = (\mu_1, 3c_4\eta + 3c_5\varepsilon)$ is the ball added at the beginning of $p_j$ . Then the ball $B_2' = (\mu_2, c_4\eta + c_5\varepsilon)$ and the ball $B_1' = (\mu_1, c_4\eta + c_5\varepsilon)$ are disjoint. Otherwise, $B_2' \subseteq B_1$ so $B_2$ would not have been added by the algorithm. At the beginning of $p_j$ , $B_1'$ has probability no less than 0.6 by Claim A.3. Therefore, $B_2'$ has probability no more than 0.4. Similarly, at the beginning of $p_{j+1}$ , $B_2'$ has probability at least 0.6 by Claim A.3. Since during one iteration the weight of a hypothesis cannot change too much, at iteration $i_{j+1} - 1$ , $B_2'$ has weight at least 0.5 by picking $\alpha$ small enough. Therefore, we have $\log \lambda_{i_{j+1}-1}(B_2') - \log \lambda_{i_j}(B_2') \geq \log \frac{0.5}{0.4} \geq \frac{1}{5}$ . Moreover, note that $S_i$ does not change from iteration $i_j$ to iteration $i_{j+1} - 1$ by the definition of phases. Now we compute

\begin{array}{l}
\sum_{l=i_j}^{i_{j+1}-1}\Delta_l = \log\frac{\lambda_{i_{j+1}-1}(h^*)}{\lambda_{i_j}(h^*)} + \log\frac{\lambda_{i_{j+1}-1,H\setminus S_{i_j}}(h^*)}{\lambda_{i_j,H\setminus S_{i_j}}(h^*)} \\
= \log\frac{w_{i_{j+1}-1}(h^*)}{w_{i_j}(h^*)}\frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} + \log\frac{w_{i_{j+1}-1}(h^*)}{w_{i_j}(h^*)}\frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})}.
\end{array}

The change of the weight of $h^*$ is

\frac{w_{i_{j+1}-1}(h^*)}{w_{i_j}(h^*)} = e^{-\alpha\mathcal{E}_{p_j}},

where $\mathcal{E}_{p_j}$ is the number of errors $h^*$ made in $p_j$ . Consequently,

\begin{array}{l}
\sum_{l=i_j}^{i_{j+1}-1}\Delta_l = -2\alpha\mathcal{E}_{p_j} + \log\frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} + \log\frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})} \\
\geq -2\alpha\mathcal{E}_{p_j} + \frac{1}{5}.
\end{array}

The last step above comes from

\log\frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} \geq \log\frac{\sum_{h\in B_2'} w_{i_{j+1}-1}(h)}{\sum_{h\in B_2'} w_{i_j}(h)}\frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} = \log\frac{\lambda_{i_{j+1}-1}(B_2')}{\lambda_{i_j}(B_2')} \geq \frac{1}{5},

and

\log\frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})} \geq 0

because the weight $w(h)$ only decreases. Summing over all phases $j$, we get

\psi_i \geq -2\alpha\mathcal{E}_i + \frac{1}{5}\left(\mathcal{N}_i - 1\right).

Since $i$ may not be exactly the end of a phase, the last phase may end early, which is why we have $\mathcal{N}_i - 1$ instead of $\mathcal{N}_i$. Rearranging finishes the proof.

We have already bounded $\psi_{i}$ , so we just need to bound $\mathcal{E}_i$ in order to bound $\mathcal{N}_i$ by the following lemma.

Lemma A.8. For every $k$ , with probability at least $1 - \delta$

\mathcal{E}_k \leq \frac{1}{\alpha}\left(\psi_k + \sqrt{2}\log\frac{1}{\delta}\right).

Proof. Let $q$ be the query distribution at iteration $i - 1$ and $p(x)$ be the probability that $x$ is corrupted by the adversary. Then the conditional expectation of $\mathcal{E}_i - \mathcal{E}_{i-1}$ is

\mathbb{E}\left[\mathcal{E}_i - \mathcal{E}_{i-1} \mid \mathcal{F}_i\right] = \Pr_{x\sim q}\left[h^*(x)\text{ is wrong}\right] = \mathbb{E}_{x\sim q}\left[p(x)\right] = \mathbb{E}_{x\sim\mathcal{D}_X}\left[p(x)\frac{q(x)}{\mathcal{D}_X(x)}\right] \leq \eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}.

Then if $h^* \notin S_i$, from Lemma A.4

\mathbb{E}\left[\Delta_i - 2\alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right) \mid \mathcal{F}_i\right] \geq \mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right] - 2\alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{1}{m^*}.

Therefore, $\mathbb{E}[\alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i] \leq \frac{1}{2}\mathbb{E}[\Delta_i \mid \mathcal{F}_i]$ and $\mathbb{E}[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i] \geq \frac{1}{2}\mathbb{E}[\Delta_i \mid \mathcal{F}_i]$. This means that $\psi_k - \alpha\mathcal{E}_k - \frac{1}{2}\mu_k$ is a submartingale. We then bound $\operatorname{Var}[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i]$. Note that $|\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})| \leq 2\alpha$, so

\operatorname{Var}\left[\Delta_i - \alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right) \mid \mathcal{F}_i\right] \leq \mathbb{E}\left[\left(\Delta_i - \alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right)\right)^2 \mid \mathcal{F}_i\right] \leq 2\alpha\,\mathbb{E}\left[\left|\Delta_i - \alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right)\right| \mid \mathcal{F}_i\right].

Furthermore,

\mathbb{E}\left[\left|\Delta_i - \alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right)\right| \mid \mathcal{F}_i\right] \leq \mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] + \alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}.

As a result,

\begin{array}{l}
\operatorname{Var}\left[\Delta_i - \alpha\left(\mathcal{E}_i - \mathcal{E}_{i-1}\right) \mid \mathcal{F}_i\right] \leq 2\alpha\left(\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] + \alpha\eta\max_x\frac{q(x)}{\mathcal{D}_X(x)}\right) \\
\leq 2\alpha\left(\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] + \frac{1}{2}\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right]\right) \\
\leq 3\alpha\,\mathbb{E}\left[|\Delta_i| \mid \mathcal{F}_i\right] \\
\lesssim \alpha\,\mathbb{E}\left[\Delta_i \mid \mathcal{F}_i\right].
\end{array}

By picking $\alpha$ small enough, $\sum_{i<k}\operatorname{Var}\left[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i\right] \leq \frac{1}{8}\mu_k$. Moreover, $|\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})| \leq 2\alpha$ always. Therefore, by Freedman's inequality, with probability $1 - \delta$ we have for any $k$ that

\begin{array}{l}
\psi_k - \alpha\mathcal{E}_k \geq \mu_k - \sqrt{\frac{4\alpha^2}{9}\log^2\frac{1}{\delta} + 2\sum_{i<k}\operatorname{Var}_{i-1}\left[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})\right]\log\frac{1}{\delta}} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\
\geq \mu_k - \sqrt{\frac{4\alpha^2}{9}\log^2\frac{1}{\delta} + \frac{1}{4}\mu_k\log\frac{1}{\delta}} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\
\geq \mu_k - \max\left\{\frac{2\sqrt{2}\alpha}{3}\log\frac{1}{\delta}, \frac{\sqrt{2}}{2}\sqrt{\mu_k\log\frac{1}{\delta}}\right\} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\
\geq \mu_k - \max\left\{\frac{\sqrt{2}}{2}\log\frac{1}{\delta}, \frac{\sqrt{2}}{2}\mu_k\right\} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\
\geq \left(1 - \frac{\sqrt{2}}{2}\right)\mu_k - \sqrt{2}\log\frac{1}{\delta} \\
\geq -\sqrt{2}\log\frac{1}{\delta}.
\end{array}

Rearranging proves the lemma.

Combining Lemma A.7 and Lemma A.8, we can show that $C$ is small with high probability, as in the following lemma.

Lemma A.9. For $k = O\left(m^*\log\frac{|H|}{\delta}\right)$, with probability at least $1 - 2\delta$, $h^* \in S_k$ and $|C| \leq O\left(\log\frac{|H|}{\delta}\right)$ at iteration $k$.

Proof. By a union bound, with probability at least $1 - 2\delta$, Lemmas A.6 and A.8 hold simultaneously. This means $h^*$ is added to $S_k$. By definition, $0 \geq \phi_k \geq \phi_0 + \psi_k$, so $\psi_k \leq 2\log|H|$. Therefore, by Lemmas A.7 and A.8, the number of balls added $|C|$ is $O\left(\log|H| + \log\frac{1}{\delta}\right) = O\left(\log\frac{|H|}{\delta}\right)$.

A.5 Putting Everything Together

We proved that after $O\left(m^*\log\frac{|H|}{\delta}\right)$ iterations, $h^* \in S_i$ and $C$ is small with high probability. Hence, running the stage two algorithm to return a desired hypothesis will not take many more queries. We are ready to put everything together and finally prove Theorem 1.1.

Theorem 1.1 (Competitive Bound). There exist some constants $c_{1}, c_{2}$ and $c_{3}$ such that for any instance $(H, \mathcal{D}_X, \eta, \varepsilon, \delta)$ with $\varepsilon \geq c_1\eta$ , Algorithm 1 solves the instance with sample complexity

m(H, \mathcal{D}_X, \eta, \varepsilon, \delta) \lesssim \left(m^*\left(H, \mathcal{D}_X, c_2\eta, c_3\varepsilon, \frac{99}{100}\right) + \log\frac{1}{\delta}\right)\cdot\log\frac{N(H, \mathcal{D}_X, \eta)}{\delta}

and polynomial time.

Proof. Let's pick $c_1, c_4, c_5$ as in Theorem 2.3 and pick the confidence parameter to be $\frac{\delta}{3}$. Then by Lemma A.9, with probability $1 - \frac{2\delta}{3}$, one of the first $O\left(\log\frac{|H|}{\delta}\right)$ balls added to $S_i$ will contain $h^*$. Since each ball added to $C$ has radius $3c_4\eta + 3c_5\varepsilon$, the best hypothesis in $C$ has error at most $(3 + 3c_4)\eta + 3c_5\varepsilon$. By Theorem 2.2, with probability $1 - \frac{\delta}{3}$, the algorithm then returns a hypothesis with error at most $(9 + 9c_4)\eta + 9c_5\varepsilon \leq \eta + \varepsilon$. Therefore, by a union bound, the algorithm returns a desired hypothesis with probability $1 - \delta$. This proves the correctness of the algorithm.

The stage one algorithm makes

O\left(m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)\log\frac{|H|}{\delta}\right) \leq O\left(m^*\left(H, \mathcal{D}_X, c_4\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right)\log\frac{|H|}{\delta}\right)

queries. The stage two algorithm makes $O\left(|C|\log\frac{|C|}{\delta}\right)$ queries by Theorem 2.2. Note that $C$ is a $(c_4\eta + c_5\varepsilon)$-packing because the centers of the added balls are at least $c_4\eta + c_5\varepsilon$ apart, so $m^*\left(H, \mathcal{D}_X, \frac{c_4}{2}\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right) \geq \log|C|$. Since $|C| \leq O\left(\log\frac{|H|}{\delta}\right)$ by Lemma A.9, the stage two algorithm takes $O\left(\left(m^*\left(H, \mathcal{D}_X, \frac{c_4}{2}\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right) + \log\frac{1}{\delta}\right)\log\frac{|H|}{\delta}\right)$ queries. Picking $c_2 = c_4$ and $c_3 = \frac{c_5}{2}$, we get the desired sample complexity bound.

To compute the packing at the beginning of the algorithm, we need to compute the distance between every pair of hypotheses, which takes $O(|H|^2|\mathcal{X}|)$ time. Computing $r$ in each round takes $O(|H||\mathcal{X}|)$ time and solving the optimization problem takes $O(|\mathcal{X}|)$ time. Therefore, the remaining steps in stage one take $O\left(m^*|H||\mathcal{X}|\log\frac{|H|}{\delta}\right)$ time. Stage two takes $O\left(\log\frac{|H|}{\delta}\log\frac{\log\frac{|H|}{\delta}}{\delta}\right)$ time. Therefore, the overall running time is polynomial in the size of the problem.
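The packing computation above can be sketched as follows. This is a minimal illustration on a toy threshold class with a uniform marginal, using our own hypothetical representation of hypotheses as 0/1 vectors; it is not the paper's implementation. Greedily scanning $H$ and keeping each hypothesis that is more than $2\eta$ from all kept centers yields a maximal $2\eta$-packing in $O(|H|^2|\mathcal{X}|)$ time.

```python
import itertools

def dist(h1, h2):
    # Disagreement probability under the uniform distribution on X.
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def maximal_packing(H, r):
    # Greedy maximal r-packing: centers are pairwise > r apart, and every
    # hypothesis in H is within r of some center.
    centers = []
    for h in H:
        if all(dist(h, c) > r for c in centers):
            centers.append(h)
    return centers

# Toy hypothesis class: all thresholds on a domain of 8 points.
X = range(8)
H = [tuple(1 if x >= t else 0 for x in X) for t in range(9)]
P = maximal_packing(H, 2 * 0.1)  # radius 2*eta with eta = 0.1
assert all(dist(a, b) > 0.2 for a, b in itertools.combinations(P, 2))
assert all(any(dist(h, c) <= 0.2 for c in P) for h in H)
```

The two assertions check exactly the packing properties the analysis relies on: centers are pairwise far apart, and every hypothesis is covered by some center.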

Similarly, we can prove Theorem 2.3, which is a stronger and more specific version of Theorem 1.1.

Theorem 2.3. Suppose that $\mathcal{D}_x$ and $H$ are such that, for any distribution $\lambda$ over $H$ such that no radius-$(c_4\eta + c_5\varepsilon)$ ball has probability more than $80\%$, there exists a distribution $q$ over $X$ such that

\underset{x\sim q}{\mathbb{E}}\left[r(x)\right] - \frac{c_4}{20}\eta\max_x\frac{q(x)}{\mathcal{D}_x(x)} \geq \beta

for some $\beta > 0$ . Then for $\varepsilon \geq c_1 \eta$ , $c_4 \geq 300$ , $c_5 = \frac{1}{10}$ and $c_1 \geq 90 c_4$ , let $N = N(H, \mathcal{D}_x, \eta)$ be the size of an $\eta$ -cover of $H$ . Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\frac{1}{\beta} \log \frac{N}{\delta} + \log \frac{N}{\delta} \log \frac{\log N}{\delta}\right)$ samples.

Proof. By Lemma A.9 (with $m^*$ replaced by $\frac{1}{\beta}$ and the confidence parameter set to $\frac{\delta}{3}$), after $O\left(\frac{1}{\beta}\log\frac{N}{\delta}\right)$ queries, with probability at least $1 - \frac{2\delta}{3}$, a hypothesis in $C$ will be within $c_4\eta + c_5\varepsilon$ of $h^*$ and $|C| = O\left(\log\frac{N}{\delta}\right)$. From Theorem 2.2, with probability at least $1 - \frac{\delta}{3}$, the stage two algorithm then outputs a hypothesis $\hat{h}$ that is within $9c_4\eta + 9c_5\varepsilon$ of $h^*$, so $\mathrm{err}(\hat{h}) \leq 9c_4\eta + 9c_5\varepsilon \leq \eta + \varepsilon$ by the choice of the constants. The stage two algorithm makes $O\left(\log\frac{N}{\delta}\log\frac{\log\frac{N}{\delta}}{\delta}\right)$ queries. Overall, the algorithm makes $O\left(\frac{1}{\beta}\log\frac{N}{\delta} + \log\frac{N}{\delta}\log\frac{\log\frac{N}{\delta}}{\delta}\right)$ queries and succeeds with probability at least $1 - \delta$.

B Query Complexity Lower Bound

In this section we derive a lower bound for the agnostic binary classification problem, which we denote by AGNOSTICLEARNING. The lower bound is obtained from a reduction from minimum set cover, which we denote by SETCOVER. An instance of SETCOVER consists of a pair $(U, S)$, where $U$ is a ground set and $S$ is a collection of subsets of $U$. The goal is to find a cover $C \subseteq S$ with $\bigcup_{s \in C} s = U$ of minimum size $|C|$. We use $K$ to denote the cardinality of the minimum set cover.
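To make the parameter $K$ concrete, here is a brute-force computation of a minimum set cover on a tiny toy instance of our own choosing (illustrative only; the problem is NP-hard in general, so this exhaustive search is exponential in $|S|$):

```python
from itertools import combinations

def min_set_cover(U, S):
    # Try subcollections in increasing size; the first cover found is minimum,
    # so its length is the parameter K.
    for k in range(1, len(S) + 1):
        for C in combinations(S, k):
            if set().union(*C) == U:
                return list(C)
    return None  # no cover exists

U = {1, 2, 3, 4, 5}
S = [frozenset({1, 2, 3}), frozenset({2, 4}), frozenset({3, 4}), frozenset({4, 5})]
C = min_set_cover(U, S)
assert C is not None and set().union(*C) == U
assert len(C) == 2  # K = 2 here, e.g. {1,2,3} together with {4,5}
```

No single set covers $U$, while $\{1,2,3\}$ and $\{4,5\}$ together do, so $K = 2$ for this instance.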

Lemma B.1 (Dinur and Steurer [2014], Corollary 1.5). There exist hard instances SETCOVERHARD with the property $K \geq \log|U|$ such that for every $\gamma > 0$, it is NP-hard to approximate SETCOVERHARD to within a factor of $(1 - \gamma)\ln|U|$.

Proof. This lemma directly follows from Dinur and Steurer [2014, Corollary 1.5]. In their proof, they constructed a hard instance of SETCOVER from LABELCOVER. The size of the minimum cover $K \geq |V_1| = Dn_1$ and $\log |U| = (D + 1)\ln n_1 \leq K$ . So the instance in their proof satisfies the desired property.

Then we prove the following lemma by giving a ratio-preserving reduction from SETCOVER to AGNOSTICLEARNING.

Lemma B.2. If there exists a deterministic $\alpha$ -approximation algorithm for AGNOSTICLEARNING $\left(H, \mathcal{D}_x, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$ , there exists a deterministic $2\alpha$ -approximation algorithm for SETCOVERHARD.

Proof. Given an instance of SETCOVERHARD, for each $s \in S$, number the elements $u \in s$ in an arbitrary order; let $f(s, u)$ denote the index of $u$ in $s$'s list (padded with zeros on the left to the full bit length). We construct an instance of AGNOSTICLEARNING as follows:

  1. Let the domain $\mathcal{X}$ have three pieces: $U$, $V \coloneqq \{(s,j) \mid s \in S, j \in [1 + \log|s|]\}$, and $D = \{1, \dots, \log|U|\}$, an extra set of $\log|U|$ more coordinates.
  2. On this domain, we define the following set of hypotheses:

(a) For $u \in U$ , define $h_u$ which only evaluates 1 on $u \in U$ and on $(s, j) \in V$ if $u \in s$ and the $j$ 'th bit of $(2f(s, u) + 1)$ is 1.
(b) For $d \in D$ , define $h_d$ which only evaluates 1 on $d$ .
(c) Define $h_0$ which evaluates everything to 0.

  3. Let $\mathcal{D}_X$ be the uniform distribution over $\mathcal{X}$ and set $\eta = \frac{1}{3|\mathcal{X}|}$ and $\varepsilon = \frac{1}{3|\mathcal{X}|}$. Set $\delta = \frac{1}{4|H|}$.

Any two hypotheses satisfy $|h_1 - h_2| \geq \frac{1}{|\mathcal{X}|} > \varepsilon = \eta$, so $\mathrm{err}(h^*) = 0$. First we show that $m^*\left(H, \mathcal{D}_x, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right) \leq K + \log|U|$. Indeed, there exists a deterministic algorithm using $K + \log|U|$ queries to identify any hypothesis with probability 1. Given a smallest set cover $C$, the algorithm first queries all $(s, 0) \in V$ for $s \in C$. If $h^* = h_u$ for some $u$, then for the $s \in C$ that covers $u$, $(s, 0)$ will evaluate to true. The identity of $u$ can then be read out by querying $(s, j)$ for all $j$. The other possibilities, $h_d$ for some $d$ or $h_0$, can be identified by evaluating on all of $D$ with $\log|U|$ queries. The total number of queries is then at most $K + \log|U|$ in all cases, so $m^* \leq K + \log|U| \leq 2K$.

We now show how to reconstruct a good approximation to set cover from a good approximate query algorithm. We feed the query algorithm $y = 0$ on every query it makes, and let $C$ be the set of all $s$ for which it queries $(s, j)$ for some $j$ . Also, every time the algorithm queries some $u \in U$ , we add an arbitrary set containing $u$ to $C$ . Then the size of $C$ is at most the number of queries. We claim that $C$ is a set cover: if $C$ does not cover some element $u$ , then $h_u$ is zero on all queries made by the algorithm, so $h_u$ is indistinguishable from $h_0$ and the algorithm would fail on either input $h_0$ or $h_u$ . Thus if $A$ is a deterministic $\alpha$ -approximation algorithm for AGNOSTICLEARNING, we will recover a set cover of size at most $\alpha m^* \leq \alpha (K + \log |U|) \leq 2\alpha K$ , so this gives a deterministic $2\alpha$ -approximation algorithm for SETCOVERHARD.
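The construction in the proof can be sketched in Python, representing each hypothesis by the set of domain points on which it evaluates to 1. The encoding details below (bit widths, the names of the domain points) are our own illustrative choices, not fixed by the paper:

```python
import math

def build_instance(U, S):
    # Build the AGNOSTICLEARNING hypothesis class from a SETCOVER instance (U, S).
    bits = {s: 1 + max(1, math.ceil(math.log2(len(s)))) for s in S}
    logU = max(1, math.ceil(math.log2(len(U))))
    f = {s: {u: i for i, u in enumerate(sorted(s))} for s in S}  # index of u in s
    H = {}
    for u in U:
        # h_u is 1 on u, and on (s, j) whenever u is in s and bit j of 2 f(s,u)+1 is 1.
        pts = {("u", u)}
        for s in S:
            if u in s:
                code = 2 * f[s][u] + 1  # low bit always 1, so (s, 0) reveals membership
                pts |= {("v", s, j) for j in range(bits[s]) if (code >> j) & 1}
        H[("h_u", u)] = frozenset(pts)
    for d in range(logU):
        H[("h_d", d)] = frozenset({("d", d)})  # h_d is 1 only on coordinate d
    H[("h_0",)] = frozenset()                  # h_0 is identically 0
    return H

U = frozenset({1, 2, 3, 4})
S = [frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3})]
H = build_instance(U, S)
# All hypotheses are pairwise distinct, as the reduction requires.
assert len(set(H.values())) == len(H)
```

Each $h_u$ contains its private point $u$, so the hypotheses are automatically pairwise distinct, which is the separation property the proof uses.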

Similar results also hold for randomized algorithms; we just need to be slightly careful about probabilities.

Lemma B.3. If there exists a randomized $\alpha$-approximation algorithm for AGNOSTICLEARNING $\left(H, \mathcal{D}_x, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$, there exists a randomized $2\alpha$-approximation algorithm for SETCOVERHARD with success probability at least $\frac{2}{3}$.

Proof. We use the same reduction as in Lemma B.2. Let $A$ be an algorithm that solves AGNOSTICLEARNING $\left(H, \mathcal{D}_x, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$. To obtain a set cover using $A$, we keep giving $A$ label 0 and construct the set $C$ as before. Assume for contradiction that with probability at least $\frac{1}{3}$, $C$ is not a set cover. Then, with probability at least $\frac{1}{3}$, there is some element $v$ such that both $h_v$ and $h_0$ are consistent with all queries the algorithm made; call such a query set "ambiguous".

Then what is the probability that the agnostic learning algorithm fails on the input distribution that chooses $h^*$ uniformly from $H$? Any given ambiguous query set is equally likely to come from any of the consistent hypotheses, so the algorithm's success probability on ambiguous query sets is at most $1/2$. The chance the query set is ambiguous is at least $\frac{2}{3|H|}$: a $\frac{1}{3|H|}$ chance that the true $h^*$ is $h_0$ and the query set is ambiguous, and at least as much from the other hypotheses making it ambiguous. Thus the algorithm fails to learn the true hypothesis with probability at least $\frac{1}{3|H|}$, contradicting the assumed $\frac{1}{4|H|}$ failure probability.

Therefore, a set cover of size at most $2\alpha K$ can be recovered with probability at least $\frac{2}{3}$ using the agnostic learning approximation algorithm.

The following theorem then follows.

Theorem 1.2 (Lower Bound). It is NP-hard to find a query strategy for every agnostic active learning instance within a $c\log|H|$ factor of the optimal sample complexity, for some constant $c > 0$.

Proof. Consider the hard instances constructed in Lemma B.2. Let $c = 0.1$ and note that $0.1\log|H| \leq 0.49\log\frac{|H|}{2}$. If there were a polynomial time $0.49\log\frac{|H|}{2}$-approximation algorithm for these instances, then there would be a polynomial time $0.98\log\frac{|H|}{2} \leq 0.98\log|U|$ approximation algorithm for SETCOVERHARD, contradicting Lemma B.1.