SlowGuess committed on
Commit af630b7 · verified · 1 Parent(s): 80dc177

Add Batch fd5ff38a-ce4a-4a55-82ea-2074a941b3a4

Files changed (50). This view is limited to 50 files because the commit contains too many changes.
  1. activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_content_list.json +3 -0
  2. activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_model.json +3 -0
  3. activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_origin.pdf +3 -0
  4. activelearningformultipletargetmodels/full.md +313 -0
  5. activelearningformultipletargetmodels/images.zip +3 -0
  6. activelearningformultipletargetmodels/layout.json +3 -0
  7. activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_content_list.json +3 -0
  8. activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_model.json +3 -0
  9. activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_origin.pdf +3 -0
  10. activelearninghelpspretrainedmodelslearntheintendedtask/full.md +305 -0
  11. activelearninghelpspretrainedmodelslearntheintendedtask/images.zip +3 -0
  12. activelearninghelpspretrainedmodelslearntheintendedtask/layout.json +3 -0
  13. activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_content_list.json +3 -0
  14. activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_model.json +3 -0
  15. activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_origin.pdf +3 -0
  16. activelearningofclassifierswithlabelandseedqueries/full.md +319 -0
  17. activelearningofclassifierswithlabelandseedqueries/images.zip +3 -0
  18. activelearningofclassifierswithlabelandseedqueries/layout.json +3 -0
  19. activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_content_list.json +3 -0
  20. activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_model.json +3 -0
  21. activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_origin.pdf +3 -0
  22. activelearningpolynomialthresholdfunctions/full.md +431 -0
  23. activelearningpolynomialthresholdfunctions/images.zip +3 -0
  24. activelearningpolynomialthresholdfunctions/layout.json +3 -0
  25. activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_content_list.json +3 -0
  26. activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_model.json +3 -0
  27. activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_origin.pdf +3 -0
  28. activelearningthroughacoveringlens/full.md +424 -0
  29. activelearningthroughacoveringlens/images.zip +3 -0
  30. activelearningthroughacoveringlens/layout.json +3 -0
  31. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_content_list.json +3 -0
  32. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_model.json +3 -0
  33. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_origin.pdf +3 -0
  34. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/full.md +363 -0
  35. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/images.zip +3 -0
  36. activelearningwithneuralnetworksinsightsfromnonparametricstatistics/layout.json +3 -0
  37. activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_content_list.json +3 -0
  38. activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_model.json +3 -0
  39. activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_origin.pdf +3 -0
  40. activelearningwithsafetyconstraints/full.md +380 -0
  41. activelearningwithsafetyconstraints/images.zip +3 -0
  42. activelearningwithsafetyconstraints/layout.json +3 -0
  43. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_content_list.json +3 -0
  44. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_model.json +3 -0
  45. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_origin.pdf +3 -0
  46. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/full.md +299 -0
  47. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/images.zip +3 -0
  48. activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/layout.json +3 -0
  49. activerankingwithoutstrongstochastictransitivity/ecbc8857-633b-44d0-b585-da6b70f4b9d0_content_list.json +3 -0
  50. activerankingwithoutstrongstochastictransitivity/ecbc8857-633b-44d0-b585-da6b70f4b9d0_model.json +3 -0
activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c5593daa499f3d7cf4406fa2b612bbc621b234771d350f08a23959ec0ea6032
+ size 78518
activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17226400613bce10be15d0f6115fb909175dab27e676b50dc883adfdb84a7071
+ size 99376
activelearningformultipletargetmodels/7e9504f2-f4ca-4200-871c-b2325840cec0_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d6f4a47c6a41ac48ed004d8f7448895a1882810aec0c83eaad53559c4050cba
+ size 401445
activelearningformultipletargetmodels/full.md ADDED
@@ -0,0 +1,313 @@
+ # Active Learning for Multiple Target Models
+
+ Ying-Peng Tang and Sheng-Jun Huang*
+
+ College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; Collaborative Innovation Center of Novel Software Technology and Industrialization; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China. {tangyp,huangsj}@nuaa.edu.cn
+
+ # Abstract
+
+ We describe and explore a novel setting of active learning (AL) in which there are multiple target models to be learned simultaneously. In many real applications, a machine learning system must be deployed on diverse devices with varying computational resources (e.g., workstations, mobile phones, edge devices), which creates the demand for training multiple target models on the same labeled dataset. However, it is generally believed that AL is model-dependent and untransferable, i.e., the data queried by one model may be less effective for training another model. This phenomenon naturally raises the question: "Does there exist an AL method that is effective for multiple target models?" In this paper, we answer this question by theoretically analyzing the label complexity of active and passive learning under the setting with multiple target models, and conclude that AL does have the potential to achieve better label complexity in this novel setting. Based on this insight, we further propose an agnostic AL sampling strategy that selects the examples located in the joint disagreement regions of the different target models. Experimental results on OCR benchmarks show that the proposed method significantly surpasses traditional active and passive learning methods under this challenging setting.
+
+ # 1 Introduction
+
+ Data labeling is expensive because it involves human annotators. Active learning (AL) is one of the main approaches to reducing labeling cost [28]. It evaluates the utility of the unlabeled data with respect to the target model, and actively queries the oracle for labels of the examples that are most beneficial to improving the target model's performance.
+
+ Existing active learning methods assume that there is only one specific target model, and try to fit it with the fewest queries. However, in many real applications, a machine learning system must be deployed on multiple types of devices with different resource constraints [6]. For example, speech recognition software needs to support diverse machines with varying hardware efficiency, ranging from high-performance workstations to mobile phones. Because the computational resources differ, the applicable model architectures vary widely, e.g., a deep model that performs well on a cloud server may not be deployable on an edge device. This raises the demand for training multiple models of different complexity to accommodate these devices.
+
+ Given multiple target models, how to effectively improve all of them with the least labeled data becomes a practical and challenging problem. It is generally believed that AL is model-dependent and untransferable [22, 24, 38], i.e., the best query strategy varies considerably across target models [39]. In other words, the data queried by one model may be less effective for training another model [22]. These observations imply that existing active query strategies can hardly benefit all target models simultaneously, and that designing an AL algorithm for multiple models can be rather difficult. A natural question might be asked: "Does there exist an active learning method which queries a set of labeled data such that all the target models can be effectively trained with them?"
+
+ In this paper, we formally define the active learning for multiple target models problem, and reveal the potential improvement of AL under this novel setting. Based on this insight, we further propose an agnostic disagreement-based selection criterion. Specifically, we first define and analyze the label complexity of both active and passive learning under the setting with multiple target models. This label complexity characterizes the number of labeled examples sufficient to train an $\varepsilon$-good classifier with probability at least $1 - \delta$ for each target model. Moreover, we find that the label complexity of a single model is closely related to that of multiple models in the realizable case, e.g., the former provides an upper bound on the label complexity for multiple models, which also implies the potential improvement of AL under this setting. To further explore the agnostic case, we propose an active selection method DIAM (DIsagreement-based AL for Multi-models) to effectively select the examples that are beneficial to all target models. It prefers the data located in the joint disagreement regions of the different models, which is expected to have higher potential to reduce the soft version space (i.e., the set of hypotheses with lower errors). Experiments are conducted on OCR benchmarks to validate both the necessity of designing an active query method under this practical setting and the effectiveness of the proposed approach. The results show that DIAM significantly reduces the number of queries needed to achieve a higher mean accuracy over multiple models compared to traditional active and passive learning methods.
+
+ The rest of the paper is organized as follows. Related work is reviewed in the following section; we then formally define the AL for multiple target models problem and provide a general result bridging the label complexities of the single-model and multi-model settings. Next, we reveal the potential improvement of AL under this novel setting. After that, an agnostic active selection criterion is proposed and analyzed, followed by the empirical studies. Finally, we conclude this work.
+
+ # 2 Related work
+
+ Active learning has received much attention in recent years due to the greatly increasing demand for labeled data to effectively train ever more complex models (e.g., deep models) [25]. A core question in AL is how to evaluate, for each candidate query, its potential contribution to the performance improvement of the target model. Most existing criteria for active learning can be categorized into informativeness and representativeness. Informativeness-based methods [13, 36, 17] prefer data near the decision boundary, while representativeness-based methods [27, 30, 21] impose constraints that regularize the queried data to be dissimilar from each other or to conform to the latent data distribution. Many works also try to combine both criteria to obtain better performance [11, 37, 31]. Beyond these hand-crafted selection criteria, several meta-active-learning methods [18, 23, 35] have been proposed to learn a generalizable query strategy across tasks. Most existing active learning query strategies focus on improving one specific target model; they are less applicable to the multiple target models setting.
+
+ From the theoretical view, active learning theory has also been widely studied under certain conditions (e.g., binary classification, finite VC dimension) [16]. One property of interest for an active learning algorithm is its label complexity, which characterizes the number of queries needed to obtain an $\varepsilon$-good classifier with probability at least $1 - \delta$ [14]. To bound this value, the disagreement coefficient [5, 4] and shattering [15, 7] are two commonly used techniques. While most works deal with the single-model setting, Balcan et al. [3] study the label complexity of a hypothesis space and its subclasses, which sheds light on this work. However, they mainly focus on how to construct subclasses that achieve a certain label complexity, while we aim to find an effective active learning algorithm for the given target models.
+
+ Recently, some AL studies have tackled a related problem in which no prior on the target model is available. In this setting, they must not only search for an appropriate target model for the current task, but also avoid ineffective querying. To this end, ALMS [1] either randomly labels data to calculate an unbiased validation error for model selection, or queries by expected error reduction to improve the models. Active-iNAS [12] considers the deep learning setting: the authors on one hand perform Neural Architecture Search (NAS) to find an appropriate model architecture, and on the other hand query examples with the searched network. Recently, Tang and Huang [32] proposed a unified framework, DUAL, to solve this problem. They query the data that is beneficial not only to the winner model, but also to the model search, so as to identify the high-potential model with the fewest queries. All these methods try to find effective model configurations rather than improve multiple given target models, which differs from our work.
+
+ # 3 Label Complexity of Single Model and Multiple Models
+
+ # 3.1 Notations and Definitions
+
+ Suppose the data is sampled from an unknown distribution $\mathcal{D}_{XY}$ over the feature space $\mathcal{X}$ and label space $\mathcal{Y}$. Denote by $\mathcal{D}_X$ the marginal data distribution, and by $\mathcal{D}_Y$ the marginal label distribution. We are given a dataset with $n$ instances, consisting of a small labeled set $\mathcal{L} = \{(\pmb{x}_i, y_i)\}_{i=1}^{n_l}$ with $n_l$ instances and a large unlabeled set $\mathcal{U} = \{\pmb{x}_i\}_{i=n_l+1}^{n_l+n_u}$ with $n_u$ instances, where $n_l \ll n_u$ and $n = n_l + n_u$. At each iteration, the active learning method selects a batch of $b$ examples $\mathcal{Q}$ from $\mathcal{U}$ for querying.
+
+ In the single model setting, we are given a hypothesis space $\mathcal{C}$ before querying. In the multiple target models setting, there are $k$ hypothesis spaces $\mathcal{T} = \{\mathcal{C}_i \mid i = 1,2,\dots,k\}$ with $\tilde{\mathcal{T}} = \bigcup_{i=1}^{k} \mathcal{C}_i$, and our goal is to actively query a set of examples in order to output a well-performing hypothesis $\hat{h}_i$ from each $\mathcal{C}_i$, $\forall i = 1,\dots,k$. We define the true error of a hypothesis as $\operatorname{er}(h) = \mathbb{P}_{\pmb{x} \sim \mathcal{D}_X}(h(\pmb{x}) \neq h^*(\pmb{x}))$, where $h^*$ is the target concept and $h(\pmb{x})$ is the model prediction on the data $\pmb{x}$. The empirical error of $h$ on $\mathcal{L}$ is defined by $\operatorname{er}_{\mathcal{L}}(h) = \frac{1}{|\mathcal{L}|} \sum_{\pmb{x} \in \mathcal{L}} \mathbb{I}[h(\pmb{x}) \neq h^*(\pmb{x})]$, where $\mathbb{I}[\cdot]$ is the indicator function. Let $\nu = \min_{h \in \mathcal{C}} \operatorname{er}(h)$, $\nu_i = \min_{h \in \mathcal{C}_i} \operatorname{er}(h)$, and $\operatorname{Log}(a) = \max\{\ln(a), 1\}$, $\forall a > 0$.
+
+ Here we introduce the definition of the pseudo-metric between hypotheses, which is frequently used in the subsequent proofs.
+
+ Definition 1. Pseudo-metric between Hypotheses: Given $\mathcal{D}_X$, the probability of disagreement between two classifiers $h_1$ and $h_2$ is defined as $d(h_1, h_2) = \mathbb{P}_{\pmb{x} \sim \mathcal{D}_X}(h_1(\pmb{x}) \neq h_2(\pmb{x}))$.
+
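As a concrete illustration (ours, not part of the paper), the pseudo-metric of Definition 1 can be estimated by Monte Carlo sampling from $\mathcal{D}_X$. The threshold classifiers and all names below are hypothetical; this is a sketch, not the authors' code.

```python
import random

def disagreement(h1, h2, sample_x, n=10_000, seed=0):
    """Monte Carlo estimate of d(h1, h2) = P_{x ~ D_X}(h1(x) != h2(x))."""
    rng = random.Random(seed)
    xs = [sample_x(rng) for _ in range(n)]
    return sum(h1(x) != h2(x) for x in xs) / n

# Two threshold classifiers under D_X = Uniform[0, 1]; they disagree
# exactly on [0.3, 0.5), so the true d(h1, h2) is 0.2.
h1 = lambda x: 1 if x >= 0.3 else -1
h2 = lambda x: 1 if x >= 0.5 else -1
d12 = disagreement(h1, h2, lambda rng: rng.random())
```

With 10,000 samples the estimate concentrates tightly around the true disagreement probability.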
+ Then we introduce the label complexity of AL for a single target model [16].
+
+ Definition 2. Label Complexity of AL for a Single Target Model: For any active learning algorithm $\mathcal{A}$, we say $\mathcal{A}$ achieves label complexity $\Lambda$ on the hypothesis space $\mathcal{C}$ if, for every $\varepsilon, \delta \in (0,1)$, every distribution $\mathcal{D}_{XY}$ over $\mathcal{X} \times \mathcal{Y}$, and every integer $t \geq \Lambda(\varepsilon, \delta, \mathcal{D}_{XY})$, if $h_{t,\delta}$ is the classifier produced by running $\mathcal{A}$ with budget $t$, then with probability at least $1 - \delta$, $\operatorname{er}(h_{t,\delta}) - \nu \leq \varepsilon$.
+
+ Now we formally define the label complexity of active learning for multiple target models. It is defined on multiple hypothesis spaces, and the goal is to output an $\varepsilon$-good classifier for each target model. Specifically, the label complexity of AL with multiple target models is defined as follows.
+
+ Definition 3. Label Complexity of AL for Multiple Target Models: Given a set of target models $\mathcal{T} = \{\mathcal{C}_i \mid i = 1,2,\dots,k\}$. For any active learning algorithm $\mathcal{A}$, we say $\mathcal{A}$ achieves label complexity $\tilde{\Lambda}$ for multiple target models if, for every $\varepsilon, \delta \in (0,1)$, every distribution $\mathcal{D}_{XY}$ over $\mathcal{X} \times \mathcal{Y}$, and every integer $t \geq \tilde{\Lambda}(\varepsilon, \delta, \mathcal{D}_{XY}, \mathcal{T})$, if $\{h_i^{t,\delta} \in \mathcal{C}_i \mid i = 1,\dots,k\}$ are the classifiers produced by running $\mathcal{A}$ with budget $t$, then with probability at least $1 - \delta$, $\operatorname{er}(h_i^{t,\delta}) - \nu_i \leq \varepsilon, \forall i = 1,\dots,k$.
+
+ In the following, we treat passive learning (PL), i.e., random sampling, as a trivial case of active learning, and use the notations $\Lambda^{AL}$, $\Lambda^{PL}$ and $\tilde{\Lambda}^{AL}$, $\tilde{\Lambda}^{PL}$ to distinguish their respective label complexities. We omit the superscript when the context is clear.
+
+ # 3.2 Translating the Label Complexity of a Single Model to Multiple Models
+
+ Denote by $\Lambda_i$ the label complexity of the $i$-th target model $\mathcal{C}_i$. Trivially, the AL label complexity for multiple models satisfies $\tilde{\Lambda}^{AL} \leq \sum_{i} \Lambda_i^{AL}$ (apply the AL algorithm $\mathcal{A}$ to each target model separately). For passive learning, however, the label complexity for multiple models has a much tighter upper bound $\tilde{\Lambda}^{PL} \leq \max_{i} \Lambda_i^{PL}$. Because the data is randomly sampled, if $t \geq \max_{i} \Lambda_i^{PL}(\varepsilon, \delta, \mathcal{D}_{XY})$ examples are queried, then, by the definition of label complexity, passive learning outputs an $\varepsilon$-good classifier with probability at least $1 - \delta$ for each target model. Such a result implicitly indicates that AL can hardly outperform PL under this setting.
+
+ To break this curse, the following theorem shows that we can expect a much better $\tilde{\Lambda}^{AL}$ for AL in the realizable case (i.e., $h^*$ is in the combined hypothesis space $\tilde{\mathcal{T}}$). It roughly says that, given an arbitrary set of target models $\mathcal{T} = \{\mathcal{C}_i \mid i = 1,2,\dots,k\}$, if a learning method has label complexity $\Lambda^{AL}$ on the combined hypothesis space $\tilde{\mathcal{T}}$, then it also has the ability to output good classifiers for each $\mathcal{C}_i$, i.e., after querying at most $t$ examples it outputs $\varepsilon$-good classifiers with probability at least $1 - \delta$ for each $\mathcal{C}_i$.
+
+ Theorem 1. Consider binary classification tasks in the realizable case. Given target models $\mathcal{T} = \{\mathcal{C}_i \mid i = 1,2,\dots,k\}$, assume that active learning algorithm $\mathcal{A}$ achieves label complexity $\Lambda$ on $\tilde{\mathcal{T}}$. Then, there exists an active learning algorithm $\mathcal{A}'$ which achieves a label complexity $\tilde{\Lambda}$ such that $\tilde{\Lambda}(\varepsilon, \delta, \mathcal{D}_{XY}, \mathcal{T}) = \Lambda(\varepsilon/2, \delta, \mathcal{D}_{XY})$.
+
+ Proof. Define an algorithm $\mathcal{A}'$ that outputs the required classifiers $\hat{h}_i \in \mathcal{C}_i, \forall i = 1,\dots,k$ as follows. First, run the algorithm $\mathcal{A}$ on $(\tilde{\mathcal{T}}, \mathcal{D}_{XY})$ to query $t \geq \Lambda(\varepsilon/2, \delta, \mathcal{D}_{XY})$ labels and output a classifier $h_A$. By the definition of label complexity, $d(h_A, h^*)$ is bounded by $\varepsilon/2$ with probability at least $1 - \delta$. Then, for each $\mathcal{C}_i$, output the classifier $\hat{h}_i \in \mathcal{C}_i$ such that $\hat{h}_i = \arg\min_{h_i \in \mathcal{C}_i} d(h_i, h_A)$. Next, we prove that $\operatorname{er}(\hat{h}_i) - \nu_i \leq \varepsilon$ holds with probability at least $1 - \delta$.
+
+ To bound $\operatorname{er}(\hat{h}_i)$, it suffices to bound $d(\hat{h}_i, h^*)$ by Definition 1. Let $h_i^* = \arg\min_{h_i \in \mathcal{C}_i} \operatorname{er}(h_i)$. It is easy to verify that $d(\cdot,\cdot)$ satisfies the triangle inequality in binary classification problems, i.e.,
+
+ $$
+ d(\hat{h}_i, h^*) \leq d(\hat{h}_i, h_A) + d(h_A, h^*). \tag{1}
+ $$
+
+ For $d(\hat{h}_i, h_A)$, we know that $\hat{h}_i = \arg\min_{h_i \in \mathcal{C}_i} d(h_i, h_A)$, which means
+
+ $$
+ d(\hat{h}_i, h_A) \leq d(h_i^*, h_A). \tag{2}
+ $$
+
+ Again, by the triangle inequality, we have
+
+ $$
+ d(h_i^*, h_A) \leq d(h_i^*, h^*) + d(h^*, h_A). \tag{3}
+ $$
+
+ Combining Eqs. (1), (2) and (3), we have
+
+ $$
+ d(\hat{h}_i, h^*) \leq d(h_i^*, h^*) + 2 d(h_A, h^*). \tag{4}
+ $$
+
+ Note that $d(\hat{h}_i, h^*) = \operatorname{er}(\hat{h}_i)$ and $d(h_i^*, h^*) = \operatorname{er}(h_i^*) = \nu_i$ by the definition of the true error. Since $d(h_A, h^*)$ is bounded by $\varepsilon/2$ with probability at least $1 - \delta$, we get that $\operatorname{er}(\hat{h}_i) - \nu_i \leq \varepsilon$ holds with probability at least $1 - \delta$.
+
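A minimal sketch (our illustration, not the paper's code) of the reduction used in the proof: take a classifier $h_A$ obtained on the combined space, then project it onto each target class under an estimated pseudo-metric. All names, the threshold classes, and the grid are hypothetical.

```python
import random

def make_threshold(t):
    # A 1-D threshold classifier: predicts +1 iff x >= t.
    h = lambda x: 1 if x >= t else -1
    h.t = t
    return h

def project_onto(pool, h_A, sample_x, n=20_000, seed=0):
    """Projection step of A': the hypothesis in `pool` closest to h_A
    under a Monte Carlo estimate of the pseudo-metric d."""
    rng = random.Random(seed)
    xs = [sample_x(rng) for _ in range(n)]
    def d_hat(h):
        return sum(h(x) != h_A(x) for x in xs) / n
    return min(pool, key=d_hat)

# h_A plays the role of the classifier returned by running A on the
# combined space; C_i is one target class (a coarse grid of thresholds).
h_A = make_threshold(0.42)
C_i = [make_threshold(t) for t in (0.1, 0.4, 0.7)]
h_hat = project_onto(C_i, h_A, lambda rng: rng.random())
```

Here the grid point 0.4 is the closest element of the class to $h_A$, mirroring how $\hat{h}_i$ inherits the accuracy of $h_A$ up to the class's own approximation error $\nu_i$.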
+ Theorem 1 says that if we can find an active learning method that obtains a classifier $h_A \in \tilde{\mathcal{T}}$ such that $\operatorname{er}(h_A) \leq \varepsilon/2$ with probability at least $1 - \delta$, then we can obtain an $\varepsilon$-good classifier $\hat{h}_i$ with probability at least $1 - \delta$ for each $\mathcal{C}_i, \forall i = 1,\dots,k$, where $\hat{h}_i = \arg\min_{h_i \in \mathcal{C}_i} d(h_i, h_A)$. This result provides a general guarantee: if an algorithm achieves a certain label complexity on the combined hypothesis space of the different models, it also achieves a bounded label complexity on those models (i.e., the label complexity for multiple models). It can serve as a baseline for AL under the multi-model setting.
+
+ # 4 Potential Improvements of Active over Passive
+
+ Although Theorem 1 bridges the traditional label complexity to that of the multiple models setting, it does not reveal the improvement of active over passive learning. Next, we show the potential of AL under this setting in the realizable case.
+
+ From the theoretical analysis of the passive learning algorithm empirical risk minimization (ERM) [16] for a single hypothesis space $\mathcal{C}$ with VC dimension $d$ [34], we know the following.
+
+ Lemma 1. Consider binary classification with a hypothesis space $\mathcal{C}$ of VC dimension $d$. The passive learning algorithm ERM achieves a label complexity $\Lambda^{PL}$ such that, for any $\mathcal{D}_{XY}$ in the realizable case, $\forall \varepsilon, \delta \in (0,1)$,
+
+ $$
+ \Lambda^{PL}(\varepsilon, \delta, \mathcal{D}_{XY}) \lesssim \left(\frac{1}{\varepsilon}\right) \left(d \operatorname{Log}(\theta(\varepsilon)) + \operatorname{Log}(1/\delta)\right). \tag{5}
+ $$
+
+ For the agnostic case, ERM achieves a label complexity $\Lambda^{PL}$ such that
+
+ $$
+ \Lambda^{PL}(\nu + \varepsilon, \delta, \mathcal{D}_{XY}) \lesssim \left(\frac{\nu + \varepsilon}{\varepsilon^2}\right) \left(d \operatorname{Log}(\theta(\nu + \varepsilon)) + \operatorname{Log}(1/\delta)\right), \tag{6}
+ $$
+
+ where $\theta(\cdot)$ is the disagreement coefficient, formally defined as follows.
+
+ Definition 4. Disagreement Coefficient: For any $r_0 \geq 0$ and classifier $h$, define the disagreement coefficient of $h$ with respect to $\mathcal{C}$ on $\mathcal{D}_{XY}$ as
+
+ $$
+ \theta_h^{\mathcal{C}}(r_0) = \sup_{r > r_0} \frac{\mathbb{P}\left(\operatorname{DIS}\left(\mathrm{B}_{\mathcal{C}}(h, r)\right)\right)}{r} \vee 1,
+ $$
+
+ where $\vee$ is the max operator. For a set of hypotheses $\mathcal{H}$, $\mathrm{DIS}(\mathcal{H}) = \{\pmb{x} \in \mathcal{X} \mid \exists h, h' \in \mathcal{H} \text{ s.t. } h(\pmb{x}) \neq h'(\pmb{x})\}$, and $\mathrm{B}_{\mathcal{H}}(h, r) = \{g \in \mathcal{H} \mid d(h, g) \leq r\}$.
+
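To make Definition 4 concrete, here is a small empirical illustration (ours, with hypothetical names) for 1-D threshold classifiers under the uniform distribution: the ball $\mathrm{B}_{\mathcal{C}}(h_0, r)$ and the region $\mathrm{DIS}(\cdot)$ are computed on a sample, and $\mathbb{P}(\mathrm{DIS}(\mathrm{B}_{\mathcal{C}}(h_0, r)))$ comes out near $2r$, consistent with thresholds having a disagreement coefficient of about 2.

```python
import random

def make_threshold(t):
    h = lambda x: 1 if x >= t else -1
    h.t = t
    return h

def in_DIS(x, H):
    # x lies in DIS(H) iff at least two hypotheses in H disagree on x.
    return len({h(x) for h in H}) > 1

rng = random.Random(0)
xs = [rng.random() for _ in range(20_000)]          # sample from D_X = U[0, 1]
H = [make_threshold(i / 200) for i in range(201)]   # hypothesis space C (a grid)
h0 = make_threshold(0.5)
r = 0.1

d_hat = lambda h: sum(h(x) != h0(x) for x in xs) / len(xs)
ball = [h for h in H if d_hat(h) <= r]              # empirical B_C(h0, r)
p_dis = sum(in_DIS(x, ball) for x in xs) / len(xs)  # estimate of P(DIS(B_C(h0, r)))
```

The ball collects thresholds within about $r$ of $0.5$, so its disagreement region is roughly the interval $(0.4, 0.6)$ of mass $2r$.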
+ This value roughly characterizes how the probability mass of the disagreement region $\mathrm{DIS}(\cdot)$ grows with the radius $r$ of a ball of hypotheses around the classifier $h$. As mentioned above, passive learning has label complexity $\tilde{\Lambda}^{PL} \leq \max_i \Lambda_i^{PL}$ for multiple models. We note that the target concept $h^*$ will usually not be contained in every hypothesis space $\mathcal{C}_i$, so $\tilde{\Lambda}^{PL}$ will usually take the agnostic form of Lemma 1 even when the problem is realizable with respect to $\tilde{\mathcal{T}}$.
+
+ To show the potential of AL under this setting, we take the CAL method [9] as an example, a representative and well-analyzed approach in the active learning literature [16]. CAL queries examples from the disagreement region of the set of consistent hypotheses, i.e., $\mathrm{DIS}(V) = \{\pmb{x} \in \mathcal{X} \mid \exists h, h' \in V \text{ s.t. } h(\pmb{x}) \neq h'(\pmb{x})\}$, where $V = \{h \in \mathcal{C} \mid h(\pmb{x}) = h^*(\pmb{x}), \forall \pmb{x} \in \mathcal{L}\}$. It achieves label complexity $O(\theta(\varepsilon) \log(1/\varepsilon) \log(\theta(\varepsilon) \log(1/\varepsilon)))$ in the realizable case. By Theorem 1, it has the following label complexity for multiple target models.
+
+ Corollary 1. Given target models $\mathcal{T} = \{\mathcal{C}_i \mid i = 1,2,\dots,k\}$, suppose $\tilde{\mathcal{T}}$ has VC dimension $d < \infty$. CAL achieves a label complexity $\tilde{\Lambda}^{AL}$ for multiple target models such that, for $\mathcal{D}_{XY}$ in the realizable case, for any $\varepsilon, \delta \in (0,1)$,
+
+ $$
+ \tilde{\Lambda}^{AL}(\varepsilon, \delta, \mathcal{D}_{XY}, \mathcal{T}) \leq \theta_{h^*}^{\tilde{\mathcal{T}}}(\varepsilon/2) \log(2/\varepsilon) \left(d \log\left(\theta_{h^*}^{\tilde{\mathcal{T}}}(\varepsilon/2)\right) + \log\left(\frac{\log(2/\varepsilon)}{\delta}\right)\right). \tag{7}
+ $$
+
+ Proof. The result follows by combining the label complexity of CAL for a single model from [16] with Theorem 1.
+
+ To reveal the potential improvement, note that the label complexity of passive learning heavily depends on the property of the worst hypothesis space, i.e., the value of $\max_i \min_{h \in \mathcal{C}_i} \operatorname{er}(h)$. Assume that $\max_i \min_{h \in \mathcal{C}_i} \operatorname{er}(h) > \varepsilon$. Then, according to Lemma 1 and Corollary 1, the label complexity of passive learning for multiple target models $\tilde{\Lambda}^{PL}(\varepsilon, \delta, \mathcal{D}_{XY}, \mathcal{T})$ is $\Omega(2/\varepsilon)$, whereas the dependence of CAL's label complexity on $\varepsilon$ is only $O(\operatorname{Log}(2/\varepsilon))$ up to the disagreement-coefficient factor, which implies the potential improvement of active learning under this setting. We leave the guarantee of a strict improvement of AL under this setting as interesting future work. Next, we study the agnostic case (i.e., $h^* \notin \tilde{\mathcal{T}}$).
+
+ # 5 An Agnostic Disagreement-based AL Method for Multiple Models
+
+ Define the set $V_i$ for each $\mathcal{C}_i$ as $\{h \in \mathcal{C}_i \mid h(\pmb{x}) = h^*(\pmb{x}), \forall \pmb{x} \in \mathcal{L}\}$. We propose to query the examples located in the joint disagreement region of all $\mathcal{C}_i, \forall i = 1,2,\dots,k$, i.e., $\mathrm{DIS}(V_1) \cap \mathrm{DIS}(V_2) \cap \dots \cap \mathrm{DIS}(V_k)$. Intuitively, each $V_i$ must be a subset of $V$; if such data exists, we can expect it to have a higher potential to reduce $V$. This statement follows simply from the Bayes formula.
+
+ Proposition 1. Consider a binary classification problem with hypothesis space $\mathcal{C}$. Let $V_+(\pmb{x}) = \{h \in V \mid h(\pmb{x}) = +1\}$ and $V_-(\pmb{x}) = \{h \in V \mid h(\pmb{x}) = -1\}$, where $V = \{h \in \mathcal{C} \mid h(\pmb{x}) = h^*(\pmb{x}), \forall \pmb{x} \in \mathcal{L}\}$. Denote $\lambda(\pmb{x}) = \frac{|V_+(\pmb{x})|}{|V_-(\pmb{x})|}$, where $|\cdot|$ is the number of elements of a set. The ideal case is to query an $\pmb{x}$ with $\lambda(\pmb{x}) = 1$. Given any sequence of subsets $V_1, V_2, \dots, V_k$ randomly sampled from $V$, define $E_{\pmb{x}}$ as the event that the data $\pmb{x}$ falls into $\mathrm{DIS}(V_1) \cap \mathrm{DIS}(V_2) \cap \dots \cap \mathrm{DIS}(V_k)$. By the Bayes formula, we have
+
+ $$
+ \mathbb{P}(\lambda(\pmb{x}) = 1 \mid E_{\pmb{x}}) = \frac{\mathbb{P}(E_{\pmb{x}} \mid \lambda(\pmb{x}) = 1)\, \mathbb{P}(\lambda(\pmb{x}) = 1)}{\mathbb{P}(E_{\pmb{x}})} \geq \mathbb{P}(\lambda(\pmb{x}) = 1). \tag{8}
+ $$
+
149
+ Algorithm 1 The DIAM-online Algorithm
150
+ Initialize: hyperparameter $q$ , constants $\sigma_{i}$ ; $m \gets 0, \hat{V}_{i} \gets \mathcal{C}_{i}, \forall i = 1, \dots, k$ .
151
+
152
+ Output: Any $h \in \hat{V}_i, \forall i = 1, \dots, k$ .
153
+
154
+ 1: while Labeling budget is not run out do
155
+ 2: $m\gets m + 1$
156
+ 3: Request an unlabeled data $\mathbf{x}_m$
157
+ 4: if $\sum_{i}\mathbb{I}[\pmb{x}_m\in \mathrm{DIS}(\hat{V}_i)]\geq q$ then
158
+ 5: Query: $\mathcal{L}\gets \mathcal{L}\cup \{(\pmb{x}_m,h^* (\pmb{x}_m))\}$
159
+ 6: end if
160
+ 7: if $\log_2m\in \mathbb{N}$ then
161
+ 8: $\hat{V}_i\gets \{h\in \hat{V}_i|\operatorname {er}_{\mathcal{L}}(h) -$ $\min_{g\in \hat{V}_i}\operatorname {er}_{\mathcal{L}}(g)\leq \sigma_i\} ,\forall i = 1,\ldots ,k.$
162
+ 9: end if
163
+ 10: end while
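The loop of Algorithm 1 can be sketched for finite hypothesis classes as follows. This is a minimal illustration we add here, not the authors' implementation; the `stream` iterator, the `oracle` labeling function, and the hypothesis-class lists are assumptions for the example.

```python
def err(h, labeled):
    # empirical error er_L(h) on the labeled set
    return sum(h(x) != y for x, y in labeled) / max(len(labeled), 1)

def in_dis(V, x):
    # x lies in DIS(V) iff some pair of hypotheses in V disagrees on x
    return len({h(x) for h in V}) > 1

def diam_online(stream, oracle, classes, sigmas, q, budget):
    labeled, m = [], 0
    V = [list(C) for C in classes]                   # \hat V_i <- C_i
    for x in stream:
        if budget <= 0:
            break
        m += 1
        if sum(in_dis(Vi, x) for Vi in V) >= q:      # line 4: joint-disagreement test
            labeled.append((x, oracle(x)))           # line 5: query the oracle
            budget -= 1
        if m & (m - 1) == 0:                         # line 7: log2(m) is an integer
            for i, Vi in enumerate(V):               # line 8: shrink each \hat V_i
                best = min(err(h, labeled) for h in Vi)
                V[i] = [h for h in Vi if err(h, labeled) - best <= sigmas[i]]
    return V
```

With $\sigma_i = 0$ this keeps only empirical risk minimizers at each pruning step; the version returned for each target model always retains any hypothesis consistent with the oracle.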
164
+
165
+ Algorithm 2 The DIAM-pool Algorithm
166
+ Initialize: labeled set $\mathcal{L}$ , unlabeled set $\mathcal{U}$ , hyperparameters $\hat{\sigma}_i$ , $\hat{V}_i \gets \mathcal{C}_i, \forall i = 1, \ldots, k$ .
167
+
168
+ Output: $\{\hat{h}_i|i = 1,\dots ,k\}$
169
+
170
+ 1: while Labeling budget is not run out do
171
+ 2: $\pmb{x}^{*} = \underset {\pmb{x}\in \mathcal{U}}{\arg \max}\sum_{i}\mathbb{I}[\pmb {x}\in \mathrm{DIS}(\hat{V}_{i})]$
172
+ 3: Query $\pmb{x}^*$ from the oracle: $\mathcal{L} \gets \mathcal{L} \cup \{(\pmb{x}^*, h^*(\pmb{x}^{*}))\}$
173
+ 4: $\mathcal{U}\gets \mathcal{U}\setminus \{\pmb{x}^{*}\}$
174
+ 5: for $i = 1,\dots ,k$ do
175
+ 6: $\hat{h}_i\gets \arg \min_{g\in \hat{V}_i}\operatorname {er}_{\mathcal{L}}(g)$
176
+ 7: $\hat{V}_i\gets \{h\in \hat{V}_i|\operatorname {er}_{\mathcal{L}}(h) - \operatorname {er}_{\mathcal{L}}(\hat{h}_i)\leq \hat{\sigma}_i\}$
177
+ 8: end for
178
+ 9: end while
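Algorithm 2 admits a similarly compact sketch, again under illustrative assumptions (finite hypothesis classes and a perfect oracle), not the paper's deep-learning implementation:

```python
def err(h, labeled):
    # empirical error er_L(h) on the labeled set
    return sum(h(x) != y for x, y in labeled) / max(len(labeled), 1)

def in_dis(V, x):
    # x lies in DIS(V) iff some pair of hypotheses in V disagrees on x
    return len({h(x) for h in V}) > 1

def diam_pool(labeled, pool, oracle, classes, sigmas, budget):
    V = [list(C) for C in classes]                   # \hat V_i <- C_i
    while budget > 0 and pool:
        # line 2: pick the example on which most version spaces disagree
        x_star = max(pool, key=lambda x: sum(in_dis(Vi, x) for Vi in V))
        labeled.append((x_star, oracle(x_star)))     # line 3: query the oracle
        pool.remove(x_star)                          # line 4
        budget -= 1
        for i, Vi in enumerate(V):                   # lines 5-8: shrink each \hat V_i
            best = min(err(h, labeled) for h in Vi)
            V[i] = [h for h in Vi if err(h, labeled) - best <= sigmas[i]]
    # output: an empirical risk minimizer per target model
    return [min(Vi, key=lambda h: err(h, labeled)) for Vi in V]
```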
179
+
180
+ Proof. Since each $V_{i}$ is randomly sampled from $V$ , the probability of the event $E_{\pmb{x}}$ reaches its maximum when $\lambda(\pmb{x}) = 1$ ; thus we have $\mathbb{P}(E_{\pmb{x}}\mid\lambda(\pmb{x}) = 1) / \mathbb{P}(E_{\pmb{x}}) \geq 1$ , which leads to the conclusion.
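As a sanity check on Proposition 1, the following toy simulation (an illustration added here, not from the paper) uses one-dimensional threshold classifiers. After labeling the point $x = 0$ with $h^*(0) = -1$, the version space $V$ consists of thresholds $t \in \{1,\dots,10\}$, and $\lambda(x) = 1$ only at the balanced point $x = 5$; conditioning on $x$ falling into the joint disagreement region of randomly sampled subsets of $V$ sharply raises the probability that $\lambda(x) = 1$.

```python
import random

random.seed(0)
POINTS = range(10)
V = range(1, 11)                    # thresholds consistent with the label h*(0) = -1

def predict(t, x):                  # threshold classifier h_t
    return 1 if x >= t else -1

def lam(x):                         # lambda(x) = |V+(x)| / |V-(x)|
    pos = sum(predict(t, x) == 1 for t in V)
    return pos / (len(V) - pos)     # denominator is nonzero for x in POINTS

def disagrees(sub, x):              # does x fall into DIS(sub)?
    return len({predict(t, x) for t in sub}) == 2

n_event = n_balanced = 0
for _ in range(20000):
    subsets = [random.sample(list(V), 4) for _ in range(3)]   # V_1, V_2, V_3
    x = random.choice(list(POINTS))
    if all(disagrees(sub, x) for sub in subsets):             # event E_x
        n_event += 1
        n_balanced += lam(x) == 1
p_prior = sum(lam(x) == 1 for x in POINTS) / 10               # P(lambda(x) = 1)
p_cond = n_balanced / n_event                                 # P(lambda(x) = 1 | E_x)
assert p_cond > p_prior
```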
181
+
182
+ Following this principle, we would like to query the examples located in the joint disagreement region of $\mathcal{C}_i, \forall i = 1,2,\dots,k$ . However, since we have multiple target models, the target concept $h^*$ might not be included in every $\mathcal{C}_i$ in practice, which turns the learning problem into the agnostic setting. Inspired by the RobustCAL method [2], a disagreement-based AL algorithm for the agnostic setting, we propose the DIAM (DIsagreement-based AL for Multi-models) query strategy for the multiple target models problem. Note that we define a new form of $V_i$ , denoted $\hat{V}_i$ , to tackle the noisy setting, i.e., $\hat{V}_i = \{h\in \mathcal{C}_i\mid \operatorname {er}_{\mathcal{L}}(h) - \min_{g\in \mathcal{C}_i}\operatorname {er}_{\mathcal{L}}(g)\leq \sigma_i\}$ , where $\sigma_{i}$ is a constant. To simplify the theoretical analysis, we first propose an online version of DIAM; we then define the DIAM method for the pool-based setting and empirically validate its effectiveness. They are summarized in Algorithms 1 and 2, respectively. The hyperparameter $q$ controls the conservativeness of the algorithm: with a larger $q$ , more of the less informative unlabeled data is rejected in the online setting.
183
+
184
+ Now let us analyze the DIAM method. Since we are considering the agnostic setting, it is necessary to model the noise. Here we employ the commonly used Tsybakov noise condition [33].
185
+
186
+ Condition 1. [33, Tsybakov noise] For some $a \in [1,\infty)$ and $\alpha \in [0,1]$ , assume that $f^{\star}$ achieves $\inf_{h\in \mathcal{C}}\operatorname {er}(h)$ , for every $h\in \mathcal{C}$ ,
187
+
188
+ $$
189
+ \mathbb {P} \left(\boldsymbol {x}: h (\boldsymbol {x}) \neq f ^ {\star} (\boldsymbol {x})\right) \leq a \left(\operatorname {e r} (h) - \operatorname {e r} \left(f ^ {\star}\right)\right) ^ {\alpha}.
190
+ $$
191
+
192
+ We assume that there exists a pair of $a_i$ and $\alpha_{i}$ for each target model $\mathcal{C}_i$ . Consider the conservative situation where the hyperparameter $q = 1$ . By further taking the constants $\sigma_{i}$ in the DIAM-online algorithm to be of the same form as in the RobustCAL method [16], which relates to the properties of the noise, the hypothesis space, and the disagreement coefficient, we have the following result. The proof is deferred to the appendix.
193
+
194
+ Theorem 2. Consider the agnostic setting and binary classification tasks. Given a set of target models $\mathcal{T} = \{\mathcal{C}_i|i = 1,2,\dots ,k\}$ , in which each $\mathcal{C}_i$ has VC dimension $d_{i} < \infty$ and meets Condition 1, let $h_i^* = \arg \min_{h_i\in \mathcal{C}_i}\operatorname {er}(h_i)$ . For any $\varepsilon, \delta \in (0,1)$ , if $q = 1$ , the DIAM-online algorithm achieves a label complexity $\tilde{\Lambda} (\varepsilon ,\delta ,\mathcal{D}_{XY},\mathcal{T})$ for multiple target models such that, for $a_i$ and $\alpha_{i}$ as in Condition 1 and any $\mathcal{D}_{XY}$ , $\tilde{\Lambda} (\varepsilon ,\delta ,\mathcal{D}_{XY},\mathcal{T})$ is no larger than
195
+
196
+ $$
197
+ \sum_ {i = 1} ^ {k} a _ {i} ^ {2} \theta_ {h _ {i} ^ {*}} ^ {\mathcal {C} _ {i}} \left(a _ {i} \varepsilon^ {\alpha_ {i}}\right) \varepsilon^ {2 \alpha_ {i} - 2} \left(d _ {i} \log \left(\theta_ {h _ {i} ^ {*}} ^ {\mathcal {C} _ {i}} \left(a _ {i} \varepsilon^ {\alpha_ {i}}\right)\right) + \log \left(\frac {\log \left(a _ {i} / \varepsilon\right)}{\delta}\right)\right) \log \left(\frac {1}{\varepsilon}\right), \tag {9}
198
+ $$
199
+
200
+ and no larger than,
201
+
202
+ $$
203
+ \sum_ {i = 1} ^ {k} \theta_ {h _ {i} ^ {*}} ^ {\mathcal {C} _ {i}} (\nu_ {i} + \varepsilon) \left(\frac {\nu_ {i} ^ {2}}{\varepsilon^ {2}} + \operatorname {L o g} \left(\frac {1}{\varepsilon}\right)\right) \left(d _ {i} \operatorname {L o g} \left(\theta_ {h _ {i} ^ {*}} ^ {\mathcal {C} _ {i}} (\nu_ {i} + \varepsilon)\right) + \operatorname {L o g} \left(\frac {\log (1 / \varepsilon)}{\delta}\right)\right). \tag {10}
204
+ $$
205
+
206
+ Theorem 2 provides an upper bound on the label complexity of the DIAM-online method when $q = 1$ . It covers a general situation with arbitrary target models and data distributions, even when the unlabeled data never falls into the joint disagreement region. However, one may be more interested in the situation where we can always query an $\mathbf{x}$ such that $\sum_{i}\mathbb{I}[\mathbf{x}\in \mathrm{DIS}(\hat{V}_i)] = k$ . Next, we prove that if such an ideal situation exists, DIAM-online achieves a better label complexity than applying CAL to the multiple target models setting, even under the realizable setting.
207
+
208
+ Theorem 3. Consider binary classification tasks and the realizable case. Given a set of target models $\mathcal{T} = \{\mathcal{C}_i|i = 1,2,\dots ,k\}$ , in which each $\mathcal{C}_i$ has VC dimension $d_{i} < \infty$ and meets Condition 1, and $\tilde{\mathcal{T}}$ has VC dimension $d < \infty$ . Assume that, if a data point falls into the disagreement region of any $\mathcal{C}_i$ , it also falls into the disagreement regions of the others $\{\mathcal{C}_j|j\neq i,j = 1,2,\ldots ,k\}$ . Assume the $m$ -th target model achieves the highest label complexity. Let $h_m^* = \arg \min_{h_m\in \mathcal{C}_m}\mathrm{er}(h_m)$ and $\nu_{m} = \mathrm{er}(h_{m}^{*})$ . For any $\delta \in (0,1)$ , $\varepsilon \in (0,1 / e)$ , and $h^{*}\in \tilde{\mathcal{T}}$ , if $\nu_{m}\leq \frac{\ln 2}{2}\varepsilon$ , DIAM-online achieves a better upper bound on $\tilde{\Lambda}$ than that of applying the CAL method to $\tilde{\mathcal{T}}$ .
209
+
210
+ The key to the proof is comparing the disagreement coefficients defined on different functions and hypothesis spaces, i.e., $\theta_{h_m^*}^{\mathcal{C}_m}$ and $\theta_{h^*}^{\tilde{\mathcal{T}}}$ . We defer the proof to the appendix. Although Theorem 3 holds under somewhat strict conditions, we note that Theorem 1 only works in the realizable case, while DIAM does not require this condition. Next, we discuss how to implement DIAM in real applications with deep models.
211
+
212
+ It is generally believed that finding a disagreeing pair of classifiers in a set of hypotheses for a given $\pmb{x}$ is non-trivial. Most existing methods randomly sample functions from the hypothesis space for validation, or turn to selecting the data close to the decision boundary (e.g., by uncertainty), which can be expensive or inaccurate. This problem becomes even more prohibitive for deep models.
213
+
214
+ To efficiently estimate the disagreement regions for neural networks, we propose to exploit the predictions on the unlabeled data during the later epochs of the training phase, typically after the network has converged. Recalling the definition of the disagreement region $\mathrm{DIS}(\hat{V}_i)$ , we should first find the hypotheses that are basically consistent with the labeled data, then validate whether there exists a pair of hypotheses that disagree on the given unlabeled data. For the first requirement, the models from the later epochs, i.e., those with smaller training errors, can represent the well-learned hypotheses. For the second, if the model trained at some later epoch has a prediction inconsistent with the model trained at another later epoch on the unlabeled data $\mathbf{x}$ , we can say that the example $\mathbf{x}$ falls into $\mathrm{DIS}(\hat{V}_i)$ .
215
+
216
+ More concretely, according to the training loss curve, we empirically take the models from the latter half of the training epochs as the well-performing hypothesis set, and estimate the disagreement region with them. Since multiple examples may have the same value of $\sum_{i}\mathbb{I}[\pmb {x}\in \mathrm{DIS}(\hat{V}_i)]$ , we further calculate the vote entropy [10] of the well-performing hypotheses on each example, and take it as the secondary sort key in the data selection phase. We also note that the query batch size in deep learning is usually large. To avoid excessive information redundancy, we heuristically keep the top-rated unlabeled data up to 5 times the batch size, and randomly sample $20\%$ of it for querying. We leave more advanced diversity measurement for future work.
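This selection heuristic can be sketched as below; a minimal illustration under our own naming, where `preds_per_model[m][i]` is assumed to hold the predicted classes of model `m`'s later-epoch checkpoints on unlabeled example `i`, and the keep factor of 5 and the 20% subsample come from the text.

```python
import math
import random

def vote_entropy(votes):
    # vote entropy [10] of one example's predictions across checkpoints
    n = len(votes)
    return -sum((c / n) * math.log(c / n)
                for c in (votes.count(v) for v in set(votes)))

def select_batch(preds_per_model, batch_size, keep_factor=5, sample_frac=0.2, seed=0):
    n = len(preds_per_model[0])                       # number of unlabeled examples
    scores = []
    for i in range(n):
        # primary key: number of models whose checkpoints disagree on example i
        dis = sum(len(set(preds[i])) > 1 for preds in preds_per_model)
        # secondary key: total vote entropy across models
        ent = sum(vote_entropy(preds[i]) for preds in preds_per_model)
        scores.append((dis, ent, i))
    # keep the top-rated pool (5x batch size), then subsample 20% for querying
    top = [i for _, _, i in sorted(scores, reverse=True)[: keep_factor * batch_size]]
    return random.Random(seed).sample(top, max(1, int(sample_frac * len(top))))
```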
217
+
218
+ With the above heuristics applied to deep models, the DIAM method is quite efficient. It evaluates the unlabeled data with the models trained in the later epochs, so data selection roughly takes the size of the well-performing hypothesis set times the cost of the entropy method. However, we also note that the DIAM method has some limitations. It only considers the informativeness of the unlabeled data, which may be less effective for batch-mode selection. As for potential negative social impact, DIAM may reduce the cost of training multiple malicious machine learning models. Nevertheless, we believe the positive contribution is more significant.
219
+
220
+ # 6 Experiment
221
+
222
+ # 6.1 Empirical Settings
223
+
224
+ To construct the multiple target models scenario, we introduce the results of a recent NAS method OFA [6], which tries to efficiently search model architectures for different devices by training only one super-net. They report the searched effective architectures that meet the hardware constraints
225
+
226
+ ![](images/a227b362b0ce820a0a0f95031f2ce9bb3ea5451ebfd960b2289ecf9359dd5e36.jpg)
227
+ (a) MNIST
228
+
229
+ ![](images/70e8c9d0acc5d08018f2063a9af6933f68120920a8c33c3673423f8f93d20f98.jpg)
230
+ (b) Kuzushiji-MNIST
231
+ Figure 1: The learning curves with the mean accuracy of the target models of the compared methods. The error bars indicate the standard deviation of the performances of target models.
232
+
233
+ of various machines on GitHub $^{2}$ , which is well suited to our problem setting. Specifically, we take 12 specialized model architectures with different prediction accuracies and speeds that target the Samsung S7 Edge, Samsung Note8, and Samsung Note10 as our target models. They are pruned from a MobileNetV3 (the super-net), but have very different prediction times and accuracies. Their Multiply-Accumulate Operations (MACs) range from 66M to 237M, which indicates the diversity of the architectures. The model specifications are listed in the appendix.
234
+
235
+ We compare the following query strategies in our experiments.
236
+
237
+ - DIAM: The proposed method of this paper, which queries the data located in the joint disagreement regions of multiple target models.
238
+ - CAL [9]: Query the data that falls into the disagreement region of any target model. It has a bounded label complexity for the multiple target models setting according to Theorem 1.
239
+ - Entropy [20]: Query the data with the highest prediction entropies. We take the mean entropies calculated by all target models to support the novel problem setting.
240
+ - Least Confidence [29]: Query the data with the least prediction confidence. We take the mean values calculated by all target models to support the novel problem setting.
241
+ - Margin [26]: Query the data with the minimum prediction margin. We take the mean margin values calculated by all target models to support the novel problem setting.
242
+ - Coreset [27]: Query the most representative data. The distance is calculated by the features extracted by a pretrained MobileNetV3, which is the super-net in OFA [6].
243
+ - Random: Query data randomly. Note that this is a highly competitive baseline.
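For the uncertainty baselines, extending the single-model scores to multiple targets amounts to averaging each score over all models' predictive distributions. A short illustrative sketch (the name `probs[m][i]`, model `m`'s softmax output on example `i`, is our own convention):

```python
import math

def entropy(p):
    # Shannon entropy of a predictive distribution
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mean_uncertainty(probs):
    # probs[m][i]: predictive distribution of model m on example i
    n_models, n = len(probs), len(probs[0])
    ent, conf, margin = [], [], []
    for i in range(n):
        ent.append(sum(entropy(probs[m][i]) for m in range(n_models)) / n_models)
        conf.append(sum(max(probs[m][i]) for m in range(n_models)) / n_models)
        top2 = [sorted(probs[m][i], reverse=True)[:2] for m in range(n_models)]
        margin.append(sum(a - b for a, b in top2) / n_models)
    # Entropy queries the largest ent; Least Confidence the smallest conf;
    # Margin the smallest margin.
    return ent, conf, margin
```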
244
+
245
+ Since Optical Character Recognition (OCR) is a representative machine learning system that must be deployed on diverse devices, two commonly used handwritten character classification benchmarks are employed in our experiments, i.e., the MNIST [19] and Kuzushiji-MNIST [8] datasets. They are under the CC BY-SA 3.0 and CC BY-SA 4.0 licenses, respectively. Here we consider the prevalent pool-based active learning setting. Specifically, we randomly take 3,000 training examples as our initially labeled data, and the rest as the unlabeled pool. At each iteration, the compared sampling methods select 1,500 unlabeled examples for querying, and the models are then re-trained. The mean and standard deviation of the accuracies of the multiple target models are reported. More results can be found in the appendix.
246
+
247
+ For model training, we mainly follow the training configuration of OFA. Specifically, the hyperparameters are set to the default values in the project: for example, the learning rate is $7.5\times 10^{-3}$ , the batch size is 128, and the SGD optimizer is employed with momentum 0.9. Since the initially labeled data is limited, a small number of training epochs is used to avoid over-fitting. Specifically, we employ the
248
+
249
+ Table 1: The mean of the learning curves and the mean of standard deviation values with different numbers of target models on the OCR benchmarks achieved by the compared methods (mean accuracy $\pm$ mean standard deviation). The best performance is highlighted in boldface.
250
+
251
+ <table><tr><td rowspan="2">Methods</td><td colspan="5">Number of Target Models</td></tr><tr><td>2</td><td>4</td><td>6</td><td>8</td><td>12</td></tr><tr><td colspan="6">MNIST</td></tr><tr><td>DIAM</td><td>98.16 ± 0.13</td><td>97.29 ± 0.99</td><td>97.55 ± 0.85</td><td>97.34 ± 1.09</td><td>97.34 ± 1.04</td></tr><tr><td>CAL</td><td>97.79 ± 0.14</td><td>97.04 ± 0.92</td><td>97.24 ± 0.89</td><td>96.95 ± 1.07</td><td>96.98 ± 1.10</td></tr><tr><td>Entropy</td><td>97.83 ± 0.10</td><td>96.94 ± 1.01</td><td>97.15 ± 0.98</td><td>96.92 ± 1.06</td><td>96.98 ± 1.00</td></tr><tr><td>Margin</td><td>97.79 ± 0.13</td><td>96.94 ± 1.02</td><td>97.19 ± 0.96</td><td>96.81 ± 1.19</td><td>97.00 ± 1.02</td></tr><tr><td>Least conf.</td><td>97.84 ± 0.11</td><td>96.89 ± 1.02</td><td>97.23 ± 0.92</td><td>96.88 ± 1.05</td><td>96.96 ± 1.07</td></tr><tr><td>Coreset</td><td>97.64 ± 0.13</td><td>96.69 ± 1.07</td><td>97.03 ± 0.97</td><td>96.36 ± 1.82</td><td>96.56 ± 1.40</td></tr><tr><td>Random</td><td>97.81 ± 0.12</td><td>96.93 ± 0.97</td><td>97.21 ± 0.94</td><td>96.83 ± 1.12</td><td>97.03 ± 0.99</td></tr><tr><td colspan="6">Kuzushiji-MNIST</td></tr><tr><td>DIAM</td><td>90.38 ± 0.21</td><td>85.76 ± 4.69</td><td>86.91 ± 4.38</td><td>86.23 ± 4.68</td><td>86.85 ± 4.25</td></tr><tr><td>CAL</td><td>87.06 ± 0.34</td><td>83.61 ± 4.29</td><td>84.70 ± 3.88</td><td>83.40 ± 4.32</td><td>83.31 ± 4.53</td></tr><tr><td>Entropy</td><td>87.09 ± 0.34</td><td>83.22 ± 4.16</td><td>84.39 ± 3.85</td><td>83.28 ± 4.28</td><td>83.33 ± 4.33</td></tr><tr><td>Margin</td><td>86.91 ± 0.35</td><td>83.20 ± 4.10</td><td>84.31 ± 4.03</td><td>83.11 ± 4.37</td><td>83.16 ± 4.29</td></tr><tr><td>Least conf.</td><td>86.71 ± 0.26</td><td>83.38 ± 4.25</td><td>84.36 ± 3.71</td><td>83.20 ± 4.42</td><td>83.04 ± 4.33</td></tr><tr><td>Coreset</td><td>87.49 ± 0.36</td><td>82.97 ± 5.16</td><td>84.80 ± 4.58</td><td>83.00 ± 4.93</td><td>82.91 ± 5.03</td></tr><tr><td>Random</td><td>87.34 ± 0.31</td><td>82.97 ± 4.38</td><td>84.22 ± 3.98</td><td>83.02 ± 4.19</td><td>83.26 ± 4.36</td></tr></table>
252
+
253
+ pretrained weights on the ImageNet dataset for initialization, then finetune for 20 epochs on the labeled data.
254
+
255
+ # 6.2 Results
256
+
257
+ We report the trend of the mean accuracy of the multiple target models as the number of queries increases in Fig. 1. The error bars indicate the standard deviation of the performances of the multiple target models. First of all, the high deviation of the performances at the initial point shows the diversity of the target models, which reflects the practicability and difficulty of the experimental setting. It is conceivable that different target models will have different preferences for training data due to their diverse architectures. Under this challenging setting, it can be observed from the figure that our method significantly outperforms the traditional active and passive learning methods. It shows great potential for improvement over random sampling, which is a very competitive baseline in this novel setting. This result clearly reveals the effectiveness of DIAM and the necessity of designing active query methods under this practical setting. The uncertainty-based methods, i.e., entropy, least confidence and margin, achieve performances comparable with random sampling. These results meet our expectations, because traditional AL methods are usually model-dependent, i.e., the data queried by one model may be less effective for training another model. By taking the mean uncertainty scores of diverse target models, the data selection may tend to be non-informative. The coreset method is less stable than random. We note that coreset is still a model-based selection method in deep learning: because the features of the data are optimized along with the training procedure, it may also suffer from the model dependence problem.
258
+
259
+ # 6.3 Study on Different Numbers of Target Models
260
+
261
+ We further explore the influence of the number of target models on the data selection methods. Due to space limitations, we report the mean of the learning curves and the mean of the standard deviations in Table 1, and defer the full learning curves to the appendix. The results show that our method consistently outperforms the other compared methods, which demonstrates its robustness to the number of models. This property also indicates that the DIAM method has the potential to tackle more challenging situations, i.e., improving a large number of target models simultaneously, which is essential for machine learning systems with a wide range of applications. The performances of the other compared methods show similar trends as the number of target models increases. This again verifies that the traditional AL methods are usually model-dependent, and emphasizes the necessity of designing novel selection approaches under this practical setting.
262
+
263
+ # 7 Conclusion
264
+
265
+ In this paper, we propose to study active learning in a novel setting, where the task is to select and label the most useful examples that are beneficial to multiple target models. We first analyze the label complexity of active and passive learning to reveal the potential improvement of AL under this novel setting. Based on this insight, we further propose an active selection criterion, DIAM, that prefers the data located in the joint disagreement regions of different target models. Empirical studies on OCR benchmarks, a representative application required to accommodate different devices, show the effectiveness of the proposed method. In the future, we will tackle more complex and important learning tasks (e.g., face recognition, object detection), and design effective query strategies that incorporate both informativeness and representativeness under the multiple target models setting.
266
+
267
+ # Acknowledgments and Disclosure of Funding
268
+
269
+ This research was supported by the National Key R&D Program of China (2020AAA0107000), NSFC (62222605, 62076128), and Natural Science Foundation of Jiangsu Province of China (BK20211517, BK2022050029).
270
+
271
+ # References
272
+
273
+ [1] Alnur Ali, Rich Caruana, and Ashish Kapoor. Active learning with model selection. In AAAI Conference on Artificial Intelligence, pages 1673-1679, 2014.
274
+ [2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78-89, 2009.
275
+ [3] Maria-Florina Balcan, Steve Hanneke, and Jennifer Wortman Vaughan. The true sample complexity of active learning. Machine Learning, 80(2):111-139, 2010.
276
+ [4] Alina Beygelzimer, Daniel J. Hsu, John Langford, and Chicheng Zhang. Search improves label for active learning. In Advances in Neural Information Processing Systems, pages 3342-3350, 2016.
277
+ [5] Alina Beygelzimer, Daniel J. Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems, pages 199-207, 2010.
278
+ [6] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In International Conference on Learning Representations, 2020.
279
+ [7] Xiaofeng Cao and Ivor W Tsang. Shattering distribution for active learning. IEEE Transactions on Neural Networks and Learning Systems, 33(1):215-228, 2022.
280
+ [8] Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical japanese literature, 2018.
281
+ [9] David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201-221, 1994.
282
+ [10] Ido Dagan and Sean P. Engelson. Committee-based sampling for training probabilistic classifiers. In International Conference on Machine Learning, pages 150-157, 1995.
283
+ [11] Bo Du, Zengmao Wang, Lefei Zhang, Liangpei Zhang, Wei Liu, Jialie Shen, and Dacheng Tao. Exploring representativeness and informativeness for active learning. IEEE Transactions on Cybernetics, 47(1):14-26, 2015.
284
+ [12] Yonatan Geifman and Ran El-Yaniv. Deep active learning with a neural architecture search. In Advances in Neural Information Processing Systems, pages 5976-5986, 2019.
285
+
286
+ [13] Ran Gilad-Bachrach, Amir Navot, and Naftali Tishby. Query by committee made real. In Advances in Neural Information Processing Systems, pages 443-450, 2005.
287
+ [14] Steve Hanneke. A bound on the label complexity of agnostic active learning. In International Conference on Machine Learning, pages 353-360, 2007.
288
+ [15] Steve Hanneke. Activized learning: Transforming passive to active with improved label complexity. The Journal of Machine Learning Research, 13(1):1469-1587, 2012.
289
+ [16] Steve Hanneke. Theory of active learning. Foundations and Trends in Machine Learning, 7(2-3), 2014.
290
+ [17] Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. In Advances in Neural Information Processing Systems, pages 7026-7037, 2019.
291
+ [18] Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. Learning active learning from data. In Advances in Neural Information Processing Systems, pages 4225-4235, 2017.
292
+ [19] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
293
+ [20] David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning: Proceedings of the 11th International Conference, pages 148-156. Elsevier, 1994.
294
+ [21] Changsheng Li, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, and Guoren Wang. On deep unsupervised active learning. In International Joint Conferences on Artificial Intelligence, pages 2626-2632, 2020.
295
+ [22] David Lowell, Zachary C. Lipton, and Byron C. Wallace. Practical obstacles to deploying active learning. In EMNLP-IJCNLP, pages 21-30, 2019.
296
+ [23] Kunkun Pang, Mingzhi Dong, Yang Wu, and Timothy Hospedales. Meta-learning transferable active learning policies by deep reinforcement learning. arXiv preprint arXiv:1806.04798, 2018.
297
+ [24] Davi Pereira-Santos, Ricardo Bastos Cavalcante Prudência, and André CPLF de Carvalho. Empirical investigation of active learning strategies. Neurocomputing, 326:15-27, 2019.
298
+ [25] Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. A survey of deep active learning. arXiv preprint arXiv:2009.00236, 2020.
299
+ [26] Tobias Scheffer, Christian Decomain, and Stefan Wrobel. Active hidden markov models for information extraction. In International Symposium on Intelligent Data Analysis, pages 309-318. Springer, 2001.
300
+ [27] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.
301
+ [28] Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison, 2009.
302
+ [29] Burr Settles and Mark Craven. An analysis of active learning strategies for sequence labeling tasks. In Conference on Empirical Methods in Natural Language Processing, pages 1070-1079, 2008.
303
+ [30] Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational adversarial active learning. In International Conference on Computer Vision, pages 5972-5981, 2019.
304
+ [31] Ying-Peng Tang and Sheng-Jun Huang. Self-paced active learning: Query the right thing at the right time. In AAAI Conference on Artificial Intelligence, volume 33, pages 5117-5124, 2019.
305
+ [32] Ying-Peng Tang and Sheng-Jun Huang. Dual active learning for both model and data selection. In International Joint Conference on Artificial Intelligence, pages 3052-3058, 2021.
306
+
307
+ [33] Alexander B Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166, 2004.
308
+ [34] VN Vapnik and A Ya Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264, 1971.
309
+ [35] Thuy-Trang Vu, Ming Liu, Dinh Phung, and Gholamreza Haffari. Learning how to active learn by dreaming. In Annual Meeting of the Association for Computational Linguistics, pages 4091-4101, 2019.
310
+ [36] Yifan Yan and Sheng-Jun Huang. Cost-effective active learning for hierarchical multi-label classification. In International Joint Conferences on Artificial Intelligence, pages 2962-2968, 2018.
311
+ [37] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113-127, 2015.
312
+ [38] Xueying Zhan, Huan Liu, Qing Li, and Antoni B. Chan. A comparative survey: Benchmarking for pool-based active learning. In International Joint Conferences on Artificial Intelligence, pages 4679-4686, 2021.
313
+ [39] Yilun Zhou, Adithya Renduchintala, Xian Li, Sida Wang, Yashar Mehdad, and Asish Ghoshal. Towards understanding the behaviors of optimal deep active learning algorithms. In International Conference on Artificial Intelligence and Statistics, pages 1486-1494. PMLR, 2021.
activelearningformultipletargetmodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b4a50bca7edfb79c4197b4004e53283616a5d9c73526f5c7ffdc9c3151f0a3c9
3
+ size 284304
activelearningformultipletargetmodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85edcb12acfea9f36eed14b2bdc39a682f46ad3982012cb93e3fad0045264ef3
3
+ size 548402
activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a88cf0d24eefc5ea261ef251b30fbf69df2041ec28d3189dcbcb851a1654edc
3
+ size 75783
activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34538381ee2131f7c74b38046fc80146202a0bed08c4c20952af9aee98aaff9b
3
+ size 99218
activelearninghelpspretrainedmodelslearntheintendedtask/3d6384fb-d348-4cec-a9c8-bc16eb1fa578_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6f111d3c7aeedaff291019e77b0c90a227f504bc1523dfd91d561831a8a1f4f
3
+ size 612758
activelearninghelpspretrainedmodelslearntheintendedtask/full.md ADDED
@@ -0,0 +1,305 @@
 
 
 
 
1
+ # Active Learning Helps Pretrained Models Learn the Intended Task
2
+
3
+ Alex Tamkin* Dat Nguyen* Salil Deshpande* Jesse Mu Noah Goodman
4
+
5
+ Stanford University
6
+
7
+ # Abstract
8
+
9
+ Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data. An example is an object classifier trained on red squares and blue circles: when encountering blue squares, the intended behavior (classifying shape vs. color) is ambiguous. We investigate whether pretrained models are better active learners, capable of choosing examples that improve robustness to such spurious correlations and domain shifts. Intriguingly, we find that better active learning is an emergent property of the pretraining process: pretrained models require up to $5 \times$ fewer labels when using uncertainty-based active learning, while non-pretrained models see no or even negative benefit. We find these gains come from an ability to select examples with attributes that disambiguate the intended behavior, such as rare product categories or atypical backgrounds. These attributes are far more linearly separable in pretrained models' representation spaces than in non-pretrained models', suggesting a possible mechanism for this behavior. Code and training scripts are available at: https://github.com/alextamkin/active-learning-pretrained-models.
10
+
11
+ ![](images/9af24bfb5d38840d0afdd8a1a65121f92b07cdf382042bc5837818960d76366a.jpg)
12
+ Figure 1: Active learning can resolve task ambiguity in datasets. Here, the provided training data leaves the model unsure of the intended task: is it to predict the shape or the color of the object? Pretraining enables models to identify and weigh various rich features, eliciting labels from informative examples (e.g. blue squares) that clarify the user's intention.
13
+
14
+ # 1 Introduction
15
+
16
+ Modern pretrained models can be adapted to new tasks with remarkably little data, enabling downstream applications for tasks with only tens or hundreds of examples [8, 46]. However, these low-data applications magnify a fundamental problem in machine learning—namely, that datasets are often incomplete proxies for the desired behavior. For example, a sentiment classifier may behave
17
+
18
+ unpredictably during the holidays if its training data lacks examples of toy reviews. Likewise, an object classifier may struggle to classify objects in atypical environments (e.g. camels in the Arctic) if the training data does not make it adequately clear which features are salient for the task. While these dataset flaws can be solved by labeling better examples, identifying such examples can be challenging, especially as the nature of these flaws may not be known in advance.
19
+
20
+ One way to describe these kinds of challenges is through the lens of task ambiguity: the failure of the training data to fully specify the user's intended behavior for all possible inputs. We consider whether pretrained models can automatically resolve their own task ambiguity through active learning (AL). In principle, AL allows models to resolve task ambiguity by identifying examples whose labels would be informative; for example, in Figure 1, the provided training data only contains red squares and blue circles, leaving the intended behavior for blue squares ambiguous. Asking for labels of blue squares resolves this ambiguity. This kind of interaction could clarify the intended behavior for different kinds of examples without the expectation that model developers must anticipate all potential gaps in the model's abilities.
21
+
22
+ However, AL has often seen limited success in practice. In traditional settings with smaller, non-pretrained models, several problems limit the effectiveness of AL, including label noise, examples that are challenging for models to learn, and a lack of generalizability of AL heuristics [38, 30]. But pretrained models possess several strengths which may enable them to overcome these challenges: they extract high-level semantic features, such as shape and color [44, 22], which can be used to identify informative examples, and they produce more calibrated uncertainty estimates which can be used for selecting ambiguous inputs [24, 14]. Moreover, in many real-world problems some data points are much more informative than others [21], suggesting the potential utility of AL in practical few-shot learning applications.
23
+
24
+ We consider the use of active learning (AL) on a range of real-world image and text datasets where task ambiguity arises. We compare several AL acquisition functions against a random-sampling baseline, and compare the difference in performance with and without the use of pretrained models. Our contributions demonstrate that:
25
+
26
+ 1. Pretrained models trained with AL can select examples that resolve task ambiguity in the finetuning data.
27
+ 2. The resulting accuracy gains can be quite large in practice: up to $5 \times$ reduction in labeled data points for the same performance, or $+11\%$ absolute gain for the same labeling budget.
28
+ 3. This ability to actively learn is an emergent property of pretraining: AL has a neutral or even harmful effect on non-pretrained models.
29
+
30
+ # 2 Method
31
+
32
+ We study the pool-based active learning (AL) setup common in the literature [52], where we have a (possibly pretrained) model $\mathcal{M}$ , a small seed set of training data $\mathcal{S} = \{(x_i, y_i)\}$ , and a larger pool of unlabeled data $\mathcal{P} = \{x_i\}$ . The AL procedure proceeds as follows: first, finetune $\mathcal{M}$ on $\mathcal{S}$ until convergence; then, select points $x_i$ from $\mathcal{P}$ that are deemed most informative according to an acquisition function $a(x; \mathcal{M})$ , obtaining the corresponding labels $y_i$ , until some budget $k$ of data points is exhausted. This newly labeled batch $\mathcal{B} = \{(x_i, y_i)\}$ is then added to the existing data $\mathcal{S}$ , and the original model is retrained on $\mathcal{S}$ for the next acquisition step. This process is repeated for a fixed number of acquisition steps.
33
+
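The acquisition loop just described can be sketched in a few lines; `fit` and `acquire` here are stand-in callables for the paper's finetuning and acquisition routines, not its actual implementation:

```python
def active_learning_loop(model, seed_set, pool, acquire, fit, budget_k, n_steps):
    """Pool-based AL sketch.

    seed_set: list of (x, y) labeled examples (the seed set S)
    pool:     dict mapping unlabeled x -> its (hidden) oracle label y
    acquire:  a(x; M) -> score, higher = more informative
    fit:      fit(model, labeled_data) -> trained model
    """
    labeled = list(seed_set)
    for _ in range(n_steps):
        model = fit(model, labeled)  # finetune M on S until convergence
        # Rank remaining pool points by informativeness; take the top-k batch B.
        ranked = sorted(pool, key=lambda x: acquire(x, model), reverse=True)
        batch = ranked[:budget_k]
        # Query labels for B and fold it into S for the next acquisition step.
        labeled += [(x, pool.pop(x)) for x in batch]
    return fit(model, labeled), labeled
```

With a toy 1-D pool and an uncertainty-style score like `lambda x, m: -abs(x)`, the loop acquires the points nearest the decision boundary first.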
34
+ For our acquisition function, we adopt the classic uncertainty sampling approach to AL, in particular the least confidence heuristic, where we acquire points for which our model is least confident in its predicted label [52]. Specifically, treating the outputs of the model $\mathcal{M}$ as a probability distribution over possible labels $p(y\mid x;\mathcal{M})$ , we define the acquisition function to be
35
+
36
+ $$
37
+ a(x; \mathcal{M}) = -\max_i \, p(y_i \mid x; \mathcal{M}) \tag{1}
38
+ $$
39
+
40
+ This least confidence heuristic has shown to be simple and effective in a variety of settings [52, 23, 40], and we similarly find good results here (we refer to least confidence sampling as "uncertainty
41
+
42
+ sampling" except in Section 5.2 where we explore other uncertainty-based acquisition functions including entropy and margin sampling).
43
+
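Equation 1 translates directly into a vectorized scoring function (a sketch; the probability matrix is assumed to come from the finetuned model's softmax outputs):

```python
import numpy as np

def least_confidence(probs: np.ndarray) -> np.ndarray:
    """Least-confidence acquisition (Eq. 1): a(x; M) = -max_i p(y_i | x; M).

    probs: shape (n_examples, n_classes), predicted class probabilities.
    Returns one score per example; higher = the model is less confident.
    """
    return -probs.max(axis=1)
```

Points are then acquired in descending score order until the budget $k$ is exhausted.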
44
+ On top of this standard AL pipeline, we propose the following change to improve the practical applicability of pretrained models in data-scarce settings:
45
+
46
+ Removing the need for a separate validation set The AL cycle begins by finetuning the pretrained model on the seed set. If the size of the seed set is large enough, the seed set may be partitioned into a training set and a validation set, and early stopping may be performed on the validation set. However, in few-shot settings, labeling costs may be high, and the seed set may be too small to meaningfully partition.
47
+
48
+ Instead of an arbitrary fixed number of finetuning steps, we propose an alternative method to terminate finetuning in the absence of a validation set. Specifically, we find that a simple but effective heuristic is to stop finetuning when the training loss decreases to $0.1\%$ of the original training loss at the start of finetuning. In our experiments, this heuristic performs as well as early stopping on an actual validation set (see Appendix D for more details). We also leverage standard learning rates and other hyperparameters recommended by model developers (see Appendix B).
49
+
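The stopping rule amounts to a single comparison inside the finetuning loop (a minimal sketch; the $0.1\%$ threshold is the value reported above):

```python
def should_stop_finetuning(initial_loss: float, current_loss: float,
                           ratio: float = 0.001) -> bool:
    """Validation-free early stopping: halt once the training loss has
    fallen to 0.1% of its value at the start of finetuning."""
    return current_loss <= ratio * initial_loss
```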
50
+ By using a standardized recipe across tasks and removing the need for a separate validation set, our AL pipeline is more robust to the real-world difficulties of deploying AL where use of a validation set is impractical [38, 45], although further work is needed to capture the full extent of this recipe's generalizability.
51
+
52
+ # 3 Datasets
53
+
54
+ We consider a variety of datasets where task ambiguity manifests through a scarcity of particular kinds of examples. We consider two such kinds of examples: those defined by combinations of causal and spurious features (typical vs atypical backgrounds) as well as those defined by unseen attributes that shift during deployment (product categories and camera trap locations). These datasets provide an empirical testbed for the ability of pretrained models to choose disambiguating examples using active learning (AL).
55
+
56
+ # 3.1 Distinguishing causal from spurious features
57
+
58
+ Spurious correlations arise when multiple features are predictive of the label in a training dataset, yet it is ambiguous which ones are causally linked to the task label [21]. We consider two such datasets, and see whether AL can choose the disambiguating examples where the spurious features do not co-occur with the causal features:
59
+
60
+ Waterbirds The Waterbirds dataset [49] consists of photographs of landbirds or waterbirds digitally edited onto land or water backgrounds. The task is to classify whether the bird is a landbird or a waterbird. In the train set, $77\%$ of the pictures feature landbirds and $23\%$ waterbirds. $95\%$ of landbirds appear on land backgrounds, and $95\%$ of waterbirds appear on water backgrounds. In the validation and test sets, this percentage is instead decreased to $50\%$. Thus, the image background is a spurious feature the model may come to rely on when making the prediction.
61
+
62
+ Treeperson As the Waterbirds dataset was synthetically generated, we also consider a dataset where we perform classification over real, unedited images with spuriously correlated objects. We use the object annotations in Visual Genome [33] to create a new dataset of 8,638 images called Treeperson, for which the task is to predict whether a person is in a given image. While $50\%$ of the images contain a person in this dataset, each image also contains either a tree or a building, and the presence of these objects is spuriously correlated with the presence of people. At train time, $90\%$ of training images with people contain a building, while $90\%$ of training images without people contain a tree. Thus, a model may be incentivized to form representations that classify according to the presence of trees and buildings, rather than the presence of the actual causal variable of interest (people). These values are changed to $50\%$ at test time, removing this correlation to evaluate how well the model learned the actual task of interest. For more details on this dataset, see Appendix C.
63
+
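The Treeperson correlation structure can be illustrated with a small sampling sketch (a hypothetical helper, not the released construction pipeline; the field names are made up):

```python
import random

def assign_spurious_context(images, p_correlated=0.9, seed=0):
    """Pair the causal label (person present) with a spurious object so that
    90% of person images get a building and 90% of no-person images get a
    tree, mirroring Treeperson's training split.

    images: list of dicts with a boolean 'person' field; adds a 'context' field.
    """
    rng = random.Random(seed)
    for img in images:
        correlated = rng.random() < p_correlated
        if img["person"]:
            img["context"] = "building" if correlated else "tree"
        else:
            img["context"] = "tree" if correlated else "building"
    return images
```

Setting `p_correlated=0.5` instead reproduces the decorrelated test-time condition.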
64
+ # 3.2 Measuring robustness to distribution shift
65
+
66
+ Distribution shifts occur when algorithms are evaluated on different data distributions than the ones they were trained on. Examples include changing the location or time of day that photos were taken, or changing the topic or author of a particular textual source. These shifts can reduce performance, and we consider whether AL can help choose diverse, informative examples that clarify how the model should behave over a range of natural distribution shifts.
67
+
68
+ iWildCam2020-WILDS This dataset considers the task of species classification from a database of photos taken from wildlife camera traps [5, 31]. The dataset is unbalanced, with most images containing no animal, and the distribution of camera locations and species changes between the in-domain (ID) and out-of-domain (OOD) subsets.
69
+
70
+ Amazon-WILDS This dataset considers the task of predicting the number of stars associated with the text of a given Amazon review [43, 31]. The reviewers are different in the training set versus the test set, and the task is to perform as well as possible on this set of new reviewers. In addition to the number of stars, we also consider model performance stratified by different product types, which highlights minority subgroups whose categorization is not visible to the model.
71
+
72
+ # 4 Experimental Setup
73
+
74
+ # 4.1 Models and Training
75
+
76
+ Vision For computer vision datasets, we finetune BiT [32], a recently-proposed family of vision models which have achieved state-of-the-art performance on several vision tasks. We primarily consider the BiT-M-R50x1 model, pretrained on ImageNet-21k [13]. To explore the effectiveness of larger architectures and pretraining sources, in Section 6.2 we also consider performance achieved by the same-size BiT-S-R50x1, trained on ImageNet-1k, and the deeper BiT-M-R101x1 model, also trained on ImageNet-21k. These models have been shown to have emergent few-shot learning abilities, where strong classifiers for new tasks can be obtained by simply finetuning on tens or hundreds of examples with typical gradient descent techniques (rather than meta-learning techniques, for example).
77
+
78
+ Text For the text dataset (Amazon), we use RoBERTa-Large [37], another pretrained model with similar properties as BiT, and a representative of the BERT [15] family of models which together have obtained state-of-the-art scores on modern NLP benchmarks [60].
79
+
80
+ Other details, including hyperparameters and seed set/acquisition sizes are deferred to Appendix B.
81
+
82
+ Random acquisition baseline As a running baseline, we compare to the same model finetuned with a random acquisition function (equivalent to not doing AL). That is, $a(x;\mathcal{M}) = \mathrm{rand}(0,1)$ , so we simply sample a random batch of data from the pool at each acquisition step.
83
+
84
+ Comparison with non-pretrained models To examine whether effective AL is a result of the pretraining process, we also compare to the performance observed when applying AL to a randomly initialized, instead of pretrained, BiT-M-R50x1.
85
+
86
+ # 5 Results
87
+
88
+ # 5.1 Accuracy per acquisition
89
+
90
+ For a general measure of success, in Figure 2 we plot the accuracy of AL versus random sampling on the validation datasets as a function of the number of samples acquired during training.
91
+
92
+ Waterbirds Waterbirds is evaluated on a balanced dataset where the foreground and background are not correlated. In this setting, uncertainty sampling achieves a $+11\%$ improvement in average validation accuracy over random sampling (Figure 2a). This comes primarily from a $+25\%$ average increase across the landbird-on-water and waterbird-on-land images (i.e. those without the spurious
93
+
94
+ ![](images/4cd9ef1156bd00f3d148ea70dfe4a29af036aa52bc24eb0bc60ae00980d132ce.jpg)
95
+ (a) Overall accuracy
96
+
97
+ ![](images/fd0b1ad35476366b4e20f5ec97a067d953e5cd1abedf997c4b8593f9cdfaf6e4.jpg)
98
+ (b) Mismatched accuracy
99
+
100
+ ![](images/2a12380951fe47ad31703861ed48508bc0dd5cdc58e3d459b53c3b9041885a40.jpg)
101
+ (c) Overall accuracy
102
+
103
+ ![](images/e9e7fe6a2ba943f6126506ed3ca5ef2c3be0f926f12d1a63168f6336c07e76d2.jpg)
104
+ (d) Mismatched accuracy
105
+
106
+ ![](images/7160b8ac0b7365661a45188d194cee020e2e258e465deef830ceffbb3f2ccf1e.jpg)
107
+ (e) Overall accuracy
108
+
109
+ ![](images/5f2c8b9ef531a3ac200914de210c58aea938e0a973d5c7dad6ff65a3261504c7.jpg)
110
+ (f) Minority class accuracy
111
+
112
+ ![](images/7f97f22c9f84e7c0b83daff3aa567b313f00f6cfdb5707836f8aee506e071bd1.jpg)
113
+ (g) Overall accuracy
114
+
115
+ ![](images/07dc5d06747ee8aed164f33ecb4af14bd9a22ad50c8612eec5965656e72bc548.jpg)
116
+ (h) Worst 10th percentile
117
+
118
+ ![](images/e2ae3e306eb6742d53cc8b6641c0ed7cbaa98e0599c71c4f9395e84d72859a08.jpg)
119
+ Figure 2: Uncertainty sampling outperforms random sampling on all datasets, especially on minority classes. Class-balanced accuracies displayed for Figure 2f. Shaded regions represent $95\%$ CIs (Gaussian approx.).
120
+ Figure 3: All types of uncertainty sampling outperform random sampling on iWildCam. Class 0 represents the majority class in iWildCam (no animal present).
121
+
122
+ correlation; Figure 2b). Uncertainty sampling required $5\times$ fewer labels than random sampling to achieve random sampling's final accuracy.
123
+
124
+ Amazon In the Amazon dataset, we also see gains from AL, including $+1\%$ on average across reviewers, and $+2.5\%$ on the worst 10th percentile (Figure 2g). This suggests that our AL recipe may be of use outside of BiT or vision settings more broadly. While the final difference between uncertainty and random sampling is not large, it is statistically significant. Uncertainty sampling required 1.3x fewer labels than random sampling to achieve the same final accuracy. This smaller effect size may also be due to the less dramatic distribution shift in the Amazon dataset. It is perhaps noteworthy that AL still succeeds in such a setting.
125
+
126
+ iWildCam With the iWildCam dataset, uncertainty sampling achieved a $+9\%$ improvement upon random sampling. Uncertainty sampling also required 1.8x fewer labels than random sampling to achieve random sampling's final accuracy (Figure 2e).
127
+
128
+ Treeperson In the Treeperson dataset, uncertainty sampling is $+2\%$ improved over random sampling by the end of training (Figure 2c). Uncertainty sampling required 1.6x fewer labels than random sampling to achieve random sampling's final accuracy.
129
+
130
+ ![](images/9c057ff152b4105b9ac31251797f8e15ffaed7beecee2c9d0e59bc3e1ce359f7.jpg)
131
+ (a) Waterbirds
132
+
133
+ ![](images/525960aa75a7e43683437e84ee909fb1841ed972c70f58007eff625aa63ed0e0.jpg)
134
+ (b) Treeperson
135
+ Figure 4: Uncertainty sampling identifies and upsamples disambiguating examples. For both Waterbirds and Treeperson, uncertainty sampling selectively acquires examples where the spurious and core features disagree. Y-axis: frequency of class in acquisitions. Oversampling is visible for subgroups where uncertainty sampling acquires examples above random chance.
136
+
137
+ # 5.2 Additional AL methods
138
+
139
+ We consider two AL methods in addition to least-confidence sampling: 1) entropy sampling, which chooses the example that maximizes the entropy of the model's predictive distribution, and 2) margin sampling, which chooses the example with the smallest difference between the first and second most probable classes [51, 52]. We run experiments with all methods on the 182-class iWildCam dataset. All methods significantly outperform random sampling (Figure 3). Furthermore, margin sampling appears to slightly outperform the other two AL methods we consider, suggesting that it may be a superior AL approach in multiclass settings.
140
+
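Both heuristics can be sketched alongside least confidence as functions of the model's predicted probabilities (illustrative NumPy versions; sign conventions are chosen so that higher always means more informative):

```python
import numpy as np

def entropy_score(probs: np.ndarray) -> np.ndarray:
    """Entropy sampling: prefer examples maximizing H[p(y | x)]."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def margin_score(probs: np.ndarray) -> np.ndarray:
    """Margin sampling: prefer the smallest gap between the two most
    probable classes (negated so higher = more informative)."""
    top2 = np.sort(probs, axis=1)[:, -2:]  # second-largest, largest
    return top2[:, 0] - top2[:, 1]
```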
141
+ # 6 Analysis
142
+
143
+ # 6.1 AL with pretrained models selects examples that resolve task ambiguity
144
+
145
+ Overall, we attribute improved performance to pretrained models' ability to identify and preferentially sample examples that resolve task ambiguity in the data. For example, for Waterbirds and Treeperson, we see the model select examples with atypical background combinations, as one might intuitively hope. Similarly, for Amazon and iWildCam we see the model upsample rare types of examples, even when they are not explicitly marked in the data.
146
+
147
+ Waterbirds Figure 4a depicts the rate at which uncertainty sampling acquires examples of each subgroup compared to the expected rate at which random sampling would acquire examples from those same subgroups. Examples where the bird and background are mismatched are heavily oversampled. We emphasize that these minority examples are not simply members of the minority class (waterbirds). Instead, the model identifies and preferentially upsamples disambiguating examples where the spurious feature (background) and the causal feature (bird type) disagree.
148
+
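The oversampling measurement underlying Figure 4 can be sketched as a frequency ratio between the acquired batches and the unlabeled pool (a hypothetical helper, not the paper's analysis code):

```python
from collections import Counter

def upsampling_ratio(acquired_groups, pool_groups):
    """Ratio of each subgroup's frequency among acquired examples to its
    base rate in the unlabeled pool; values > 1 indicate oversampling."""
    acq, pool = Counter(acquired_groups), Counter(pool_groups)
    n_acq, n_pool = len(acquired_groups), len(pool_groups)
    return {g: (acq[g] / n_acq) / (pool[g] / n_pool) for g in pool}
```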
149
+ Treeperson For Treeperson we see the same pattern as in Waterbirds: the model identifies and upsamples examples where only one of the spurious or causal features is present, despite the spurious feature being latent (Figure 4b).
150
+
151
+ Amazon We also see similar behavior in the Amazon dataset, indicating our method's applicability to multiple modalities and pretrained models. Not only does the model upsample lower star ratings, which are less common, it can also upsample rarer product categories—a latent attribute (Figure 5).
152
+
153
+ # 6.2 Pretraining is the key ingredient in our experiments
154
+
155
+ What drives the success of AL in our experiments? We hypothesize that better AL is an emergent property of the pretraining process, and evaluate this hypothesis by comparing pretrained models
156
+
157
+ ![](images/3a6275f54beeb99cf1f0a81f5d1c0fec754c07b5807aa903e8b60e6c03d7e63e.jpg)
158
+ (a) Acquisitions by star rating
159
+
160
+ ![](images/0a224d007e2a0766c1dd76a51b37ca14e52769d6408b5a325cd78112c14eab79.jpg)
161
+ (b) Acquisitions by product category
162
+
163
+ ![](images/bf9d0a10bc819f128939d9b6f54e0c7c31eb855b27ba38880aeed698d32ffbb7.jpg)
164
+ Figure 5: Uncertainty sampling upsamples both visible and latent minority subgroups. Fraction of Amazon examples acquired by random and uncertainty sampling, stratified by star rating and product category. Upsampling is visible when the bar for uncertainty sampling is greater than the base prevalence in the unlabeled dataset available during training. Uncertainty sampling preferentially acquires examples with lower star ratings and rarer product categories, despite the latter attribute not being visible to the model. Note the separate y-axis for product categories 1 and 2 in (b).
165
+ Figure 6: Uncertainty sampling only provides gains when using pretrained models. S-R50, M-R50, and M-R101 correspond to the BiT-S-R50x1, BiT-M-R50x1, and BiT-M-R101x1 pretrained models, respectively, while R50-NP and R101-NP correspond to ResNet models which are not pretrained. Error bars represent $95\%$ CIs (Gaussian approx.).
166
+ (a) Waterbirds
167
+
168
+ ![](images/a6769e48988aab68c10f32946b3781c18e5b4d2ea9bc1edec424be77879a3dda.jpg)
169
+ (b) iWildCam
170
+
171
+ ![](images/0d4467b3760cc13bc73f1947c981ee86c83535b588421c2bb085db69e0ce14f1.jpg)
172
+ (c) Treeperson
173
+
174
+ to their corresponding non-pretrained counterparts. We also examine the effect of model scale on success at AL: if a model has been trained on more data (and has presumably learned to extract more semantically-relevant features), does this enable more effective AL?
175
+
176
+ Concretely, we conduct AL experiments with 3 pretrained models: BiT-S-R50x1, BiT-M-R50x1, BiT-M-R101x1, and their corresponding non-pretrained versions. We find that for our experiments with pretrained models, uncertainty acquisition outperformed random acquisition (Figures 6a, 6b and 6c). Importantly, AL on the non-pretrained models provided no or even negative benefit, even after exploring a range of different hyperparameter configurations. These controlled experiments provide strong evidence that pretraining is indeed crucial in our setting. That said, we do not claim that pretraining is the only way to enable good AL in settings of task ambiguity; other methods of addressing the so-called "cold start" problem (e.g. [20, 63]) may also prove fruitful or complementary.
177
+
178
+ Effect of scale In one case, we also see a demonstrable effect of scale on the efficacy of the AL process: the BiT-S-R50x1 model, which was pretrained on a smaller dataset than the BiT-M models (ImageNet-1k vs ImageNet-21k), fails to outperform random sampling on iWildCam, in contrast to the two other models pretrained on more data. This suggests a potential scaling trend for AL, where gains from AL may continue to grow as pretrained models are trained for longer on more data. However, we did not see a difference between BiT-M-R50x1 and BiT-M-R101x1, which were trained on the same dataset but have different numbers of parameters. This may be because dataset size must be increased jointly with parameter count to see continued gains from scaling [29, 26].
179
+
180
+ Impact of pretraining on acquisition patterns As an additional cross-check, we also observe that pretrained models acquire disambiguating subgroups much more efficiently than their non-pretrained counterparts. See Appendix G for additional figures and results.
181
+
182
+ <table><tr><td></td><td>No Finetuning</td><td>Finetune On Seed Set (40)</td><td>Finetune On Seed Set (40) + 20</td></tr><tr><td>Average</td><td>0.402</td><td>0.42</td><td>0.424</td></tr><tr><td>Landbird /LandBG</td><td>0.389</td><td>0.416</td><td>0.42</td></tr><tr><td>Waterbird /LandBG</td><td>0.63</td><td>0.653</td><td>0.733</td></tr><tr><td>Landbird /WaterBG</td><td>0.426</td><td>0.467</td><td>0.447</td></tr><tr><td>Waterbird /WaterBG</td><td>0.435</td><td>0.418</td><td>0.419</td></tr></table>
183
+
184
+ (a) Group accuracies for linear classifier on Waterbirds image embeddings attained from a pretrained BiT model after various degrees of finetuning
185
+
186
+ <table><tr><td></td><td>No Finetuning</td><td>Finetune On Seed Set (40)</td><td>Finetune On Seed Set (40) + 20</td></tr><tr><td>Average</td><td>0.311</td><td>0.32</td><td>0.34</td></tr><tr><td>Landbird /LandBG</td><td>0.306</td><td>0.321</td><td>0.369</td></tr><tr><td>Waterbird /LandBG</td><td>0.194</td><td>0.316</td><td>0.325</td></tr><tr><td>Landbird /WaterBG</td><td>0.286</td><td>0.248</td><td>0.262</td></tr><tr><td>Waterbird /WaterBG</td><td>0.337</td><td>0.33</td><td>0.255</td></tr></table>
187
+
188
+ (b) Group accuracies for linear classifier on Waterbirds image embeddings attained from a non-pretrained BiT model after various degrees of finetuning
189
+
190
+ # 6.3 Pretraining yields a better feature space for AL
191
+
192
+ While pretraining clearly improves the AL process, the mechanisms behind this improvement remain unclear. Given the strong theoretical results AL enjoys in the linear setting [2, 3, 41], we hypothesize that pretraining may aid AL by linearizing the features salient for task ambiguity. This hypothesis is further inspired by recent studies finding that a wide range of features are linearly separable in the feature spaces of large pretrained models [10].
193
+
194
+ To quantify this, we train linear classifiers on the second to last layer of the BiT models. The classifiers are trained to predict each image's bird type and background type (4 classes total, rebalanced to comprise $25\%$ of the data). As shown in Figure 7, these classes are indeed far more linearly separable in pretrained models, providing evidence for this hypothesis.
195
+
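A minimal version of such a probe, sketched here with a closed-form ridge classifier rather than whatever solver the authors used, measures separability as the accuracy of a linear map fit on frozen embeddings:

```python
import numpy as np

def linear_probe_accuracy(train_x, train_y, test_x, test_y, lam=1e-3):
    """Ridge probe on frozen embeddings: fit W minimizing
    ||XW - Y||^2 + lam ||W||^2 with one-hot targets Y, then report
    accuracy on held-out embeddings. Higher accuracy means the attribute
    is more linearly separable in the embedding space."""
    classes = np.unique(train_y)
    Y = (train_y[:, None] == classes[None, :]).astype(float)  # one-hot targets
    d = train_x.shape[1]
    W = np.linalg.solve(train_x.T @ train_x + lam * np.eye(d), train_x.T @ Y)
    preds = classes[(test_x @ W).argmax(axis=1)]
    return float((preds == test_y).mean())
```

On two well-separated Gaussian clusters the probe reaches near-perfect accuracy, while overlapping clusters drive it toward chance.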
196
+ We present additional investigations of t-SNE plots for pretrained and non-pretrained models in Appendix H, which demonstrates increased separation of latent classes for pretrained models, as well as how the acquired examples are closer to the class boundaries.
197
+
198
+ ![](images/f4de194776da912cbf431f52ad055ff4d05eb4bf5eeb859e67c37205e047d45f.jpg)
199
+ Figure 7: Both causal and spurious features are more linearly separable in pretrained models.
200
+ Figure 8: Task ambiguity is the key factor driving the success of AL with pretrained models.
201
+
202
+ Accuracy on Waterbirds out-of-domain validation set for pretrained BiT-M models finetuned on datasets with different fractions of matched backgrounds. As disambiguating examples become more scarce, uncertainty sampling experiences a far smaller accuracy drop than random sampling.
203
+
204
+ # 6.4 How does the degree of task ambiguity impact AL?
205
+
206
+ Finally, we measure the impact of the strength of task ambiguity on AL. To do so, we construct variants of the Waterbirds dataset where the percentage of matched examples ranges from $50\%$ to $95\%$ but the marginal class probabilities remain fixed. We then proceed with AL and report results in Figure 8. We observe a clear trend where the gains from AL gradually increase as the percentage of matched examples increases to $95\%$. We find similarly clear trends in the upsampling ratio of mismatched backgrounds, shown in Figure 9. These results provide further evidence that task ambiguity is the key driving factor behind the success of AL in this setting.
207
+
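This construction can be sketched as stratified sampling with a matched-fraction knob (the `landbird`/`water` names are illustrative, and this is not the paper's released script):

```python
import random

def sample_ambiguous_split(pool, p_matched, n_per_class, seed=0):
    """Draw a training set where each class keeps the same size (fixed
    marginals) but a fraction `p_matched` of its examples has the
    class-typical background.

    pool: dict mapping (bird, background) -> list of examples.
    """
    rng = random.Random(seed)
    split = []
    for bird in ("landbird", "waterbird"):
        typical = "land" if bird == "landbird" else "water"
        atypical = "water" if typical == "land" else "land"
        n_match = round(p_matched * n_per_class)
        split += rng.sample(pool[(bird, typical)], n_match)
        split += rng.sample(pool[(bird, atypical)], n_per_class - n_match)
    return split
```

Sweeping `p_matched` from 0.5 to 0.95 reproduces the range of ambiguity levels studied here.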
208
+ # 6.5 Failure Cases
209
+
210
+ We also encounter some failure cases when training on datasets far from the distribution of the pretrained model. We perform preliminary experiments on Camelyon17-WILDS [4, 31], which considers tumor identification from tissue patches, and FMoW-WILDS [11, 31], which considers land-use classification from satellite images. AL performs comparably or worse than random sampling on these datasets, even when using a pretrained BiT model, suggesting that specialized models may be necessary to see gains in domains very different from ImageNet-21k.<sup>5</sup>
211
+
212
+ # 7 Related work
213
+
214
+ Task ambiguity and specification Several works address ambiguity or poor specification in machine learning problems. [58] describe the problem of "inductive ambiguity identification," and identify AL as a promising potential solution that has failed to see practical success. [12] describe the problem of underspecification, where high variance, instability, and poor model performance result from training overparameterized models that are underconstrained by their training datasets. [21] describes how task ambiguity can arise when both desirable and undesirable features are predictive of the training labels, a problem which several works seek to better characterize and address [42, 49, 55, 50]. Finally, [18] address task ambiguity in few-shot settings via a probabilistic meta-learning algorithm, and perform an AL experiment in a 1D regression setting. We build on these works by demonstrating that simple uncertainty sampling with pretrained models can be an effective approach to the task ambiguity problem across a wide variety of high-dimensional classification settings—including when the sources of task ambiguity are not known.
215
+
216
+ Uncertainty and distribution shift In the face of these challenges, several works have tried to quantify how much pretrained models know about problems or their own uncertainty about them. [47] propose a question answering dataset with unanswerable questions, where a model must abstain rather than proceeding with an answer. Pretraining can also improve the calibration of model uncertainty [24] and pretrained features can be used for out-of-distribution detection [48, 62]—observations that align with our findings that uncertainty sampling can identify minority subgroups in datasets. A related stream of work seeks to identify high-confidence examples that are predicted incorrectly [1, 34]; by contrast, our focus is on improving model behavior across the full range of examples. Finally, our observation that upsampling latent minority groups results in better performance aligns well with [50, 28, 36], which explore various upweighting or upsampling strategies. Importantly, however, our approach does not require these groups to be known in advance.
217
+
218
+ AL and example selection Active learning (AL) [35, 53, 52, 27] is a well-studied field that investigates how machine learning algorithms might automatically select helpful additional data points to maximize their performance. Such strategies are especially helpful in imbalanced settings [17, 40] and have been fruitfully applied to deep models [19, 6], including pretrained models [63, 39, 54]. Past work has also considered AL for few-shot learning [61]. We extend these works by considering AL for resolving task ambiguity, showing that pretrained models successfully choose examples based on their high-level semantics, such as atypical backgrounds or rare latent attributes. Also in contrast to prior work, we investigate the role of pretraining itself by performing equivalent experiments with non-pretrained models, and providing a potential mechanism for the difference.
219
+
220
+ Pretrained models and their emergent properties Our work contributes to a broader literature on how pretraining enables new kinds of model capabilities [7, 56], especially those holding across multiple domains [57]. For example, [8] identify the phenomenon of in-context learning, where tasks can be specified for models through a language modeling prompt, while [9] discover that a self-supervised vision model implicitly learns high-quality segmentation maps visible through attention scores. [29, 25] conduct scaling laws experiments which chart how capabilities emerge with scale. We identify a new model capability that is significantly improved by pretraining: the capacity to actively learn and resolve task ambiguity in high-dimensional settings.
221
+
222
+ # 8 Discussion and Limitations
223
+
224
+ We show that pretrained models can preemptively resolve task ambiguity through active learning (AL), without requiring humans to anticipate these possible failure modes in advance. We find that AL helps across a variety of settings where data is spuriously correlated, undergoes domain shift, or contains unlabeled subgroups. These behaviors emerge most clearly as a result of large-scale pretraining, suggesting that AL may be an underappreciated tool for increasing the reliability of systems in real-world settings.
225
+
226
+ Of course, AL is no cure-all for resolving task ambiguity. First, it requires a human in the loop, which increases the time required to train a model compared to random sampling. Second, it requires the labeling method to be relatively free of noise—this may be acceptable if annotators are domain experts or are well-trained, but may also increase the cost per acquired example. Third, it is limited by the range of examples present in the unlabeled dataset—a model cannot elicit labels for examples that do not exist.
229
+
230
+ Finally, we note the opportunity for an exciting array of future work, including broader investigation of these methods across domains such as medical, scientific, or industrial settings, as well as better understanding how pretraining shapes AL as models continue to scale.
231
+
232
+ # Acknowledgments and Disclosure of Funding
233
+
234
+ We would like to thank Shyamal Buch, Shreya Shankar, Megha Srivastava, and Ethan Perez for helpful comments. AT is supported by an Open Phil AI Fellowship.
235
+
236
+ # References
237
+
238
+ [1] Josh Attenberg, Panagiotis G. Ipeirotis, and Foster J. Provost. Beat the machine: Challenging workers to find the unknown unknowns. In Human Computation, 2011.
239
+ [2] Maria-Florina Balcan, Andrei Z. Broder, and Tong Zhang. Margin based active learning. In COLT, 2007.
240
+ [3] Maria-Florina Balcan and Philip M. Long. Active and passive learning of linear separators under log-concave distributions. ArXiv, abs/1211.1082, 2013.
241
+ [4] Péter Bándi, Oscar G. F. Geessink, Quirine F Manson, Marcory Crf van Dijk, Maschenka C. A. Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, Quanzheng Li, Farhad Ghazvinian Zanjani, Svitlana Zinger, Keisuke Fukuta, Daisuke Komura, Vlado Ovtcharov, Shenghua Cheng, Shaoqun Zeng, Jeppe Thagaard, Anders Bjorholm Dahl, Huangjing Lin, Hao Chen, Ludwig Jacobsson, Martin Hedlund, Melih Çetin, Eren Halçı, Hunter Jackson, Richard Chen, Fabian Both, Jörg K.H. Franke, Heidi V. N. Küsters-Vandevelde, W. Vreuls, Peter Bult, Bram van Ginneken, Jeroen A. van der Laak, and Geert J. S. Litjens. From detection of individual metastases to classification of lymph node status at the patient level: The camelyon17 challenge. IEEE Transactions on Medical Imaging, 38:550-560, 2019.
242
+ [5] Sara Beery, Elijah Cole, and Arvi Gjoka. The iwildcam 2020 competition dataset. ArXiv, abs/2004.10340, 2020.
243
+ [6] William H. Beluch, Tim Genewein, Andreas Nürnberger, and Jan M. Köhler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9368-9377, 2018.
244
+ [7] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, D. Card, Rodrigo Castellon, Niladri S. Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, J.C. Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jackson K. Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. ArXiv, abs/2108.07258, 2021.
247
+ [8] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020.
248
+ [9] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. ArXiv, abs/2104.14294, 2021.
249
+ [10] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. ArXiv, abs/2002.05709, 2020.
250
+ [11] Gordon A. Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6172-6180, 2018.
251
+ [12] Alexander D'Amour, Katherine A. Heller, Dan I. Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yi-An Ma, Cory Y. McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin G. Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vlademyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, and D. Sculley. Underspecification presents challenges for credibility in modern machine learning. ArXiv, abs/2011.03395, 2020.
252
+ [13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
253
+ [14] Shrey Desai and Greg Durrett. Calibration of pre-trained transformers. In EMNLP, 2020.
254
+ [15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019.
255
+ [16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv, abs/2010.11929, 2021.
256
+ [17] Seyda Ertekin, Jian Huang, Leon Bottou, and Lee Giles. Learning on the border: active learning in imbalanced data classification. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 127-136, 2007.
257
+ [18] Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In NeurIPS, 2018.
258
+ [19] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192, 2017.
259
+ [20] Mingfei Gao, Zizhao Zhang, Guo Yu, Sercan Ö Arik, Larry S Davis, and Tomas Pfister. Consistency-based semi-supervised active learning: Towards minimizing labeling cost. In European Conference on Computer Vision, pages 510-526. Springer, 2020.
260
+ [21] Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix Wichmann. Shortcut learning in deep neural networks. ArXiv, abs/2004.07780, 2020.
261
+
262
+ [22] Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Christopher Olah. Multimodal neurons in artificial neural networks. Distill, 2021.
263
+ [23] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.
264
+ [24] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In ICML, 2019.
265
+ [25] T. J. Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. Scaling laws for autoregressive generative modeling. ArXiv, abs/2010.14701, 2020.
266
+ [26] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
267
+ [27] Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Mate Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
268
+ [28] Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. ArXiv, abs/2110.14503, 2021.
269
+ [29] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. ArXiv, abs/2001.08361, 2020.
270
+ [30] Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher D. Manning. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. ArXiv, abs/2107.02331, 2021.
271
+ [31] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard L. Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In ICML, 2021.
272
+ [32] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In ECCV, 2020.
273
+ [33] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73, 2016.
274
+ [34] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Eric Horvitz. Identifying unknown unknowns in the open world: Representations and policies for guided exploration. In AAAI, 2017.
275
+ [35] David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine learning proceedings 1994, pages 148-156. Elsevier, 1994.
276
+ [36] Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, 2021.
277
+ [37] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
278
+ [38] David Lowell, Zachary Chase Lipton, and Byron C. Wallace. Practical obstacles to deploying active learning. In EMNLP, 2019.
279
+
280
+ [39] Katerina Margatina, Loïc Barrault, and Nikolaos Aletras. Bayesian active learning with pretrained language models. ArXiv, abs/2104.08320, 2021.
281
+ [40] Stephen Mussmann, Robin Jia, and Percy Liang. On the importance of adaptive data collection for extremely imbalanced pairwise tasks. In Findings of EMNLP, 2020.
282
+ [41] Stephen Mussmann and Percy Liang. On the relationship between data efficiency and error for uncertainty sampling. In ICML, 2018.
283
+ [42] Vaishnavh Nagarajan, Anders Johan Andreassen, and Behnam Neyshabur. Understanding the failure modes of out-of-distribution generalization. ArXiv, abs/2010.15775, 2021.
284
+ [43] Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP, 2019.
285
+ [44] Christopher Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Q. Ye, and A. Mordvintsev. The building blocks of interpretability. Distill, 2018.
286
+ [45] Ethan Perez, Douwe Kiela, and Kyunghyun Cho. True few-shot learning with language models. ArXiv, abs/2105.11447, 2021.
287
+ [46] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021.
288
+ [47] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. In ACL, 2018.
289
+ [48] Tal Reiss, Niv Cohen, Liron Bergman, and Yedid Hoshen. Panda: Adapting pretrained features for anomaly detection and segmentation. In CVPR, 2021.
290
+ [49] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ArXiv, abs/1911.08731, 2019.
291
+ [50] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. ArXiv, abs/2005.04345, 2020.
292
+ [51] Marten Scheffer, Stephen R. Carpenter, Jonathan A. Foley, Carl Folke, and Brian H. Walker. Catastrophic shifts in ecosystems. Nature, 413:591-596, 2001.
293
+ [52] Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
294
+ [53] Burr Settles and Mark Craven. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1070-1079, 2008.
295
+ [54] Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis I. Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, E. Artemova, Dmitry V. Dylov, and Alexander Panchenko. Active learning for sequence tagging with deep pre-trained models and bayesian uncertainty estimates. In EACL, 2021.
296
+ [55] Megha Srivastava, Tatsunori B. Hashimoto, and Percy Liang. Robustness to spurious correlations via human annotations. In ICML, 2020.
297
+ [56] Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli. Understanding the capabilities, limitations, and societal impact of large language models. ArXiv, abs/2102.02503, 2021.
298
+ [57] Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel E Fein, Colin Schultz, and Noah D. Goodman. DABS: A domain-agnostic benchmark for self-supervised learning. ArXiv, abs/2111.12062, 2021.
299
+
300
+ [58] Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch. Alignment for advanced machine learning systems. 2020.
301
+ [59] Laurens van der Maaten and Geoffrey E. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
302
+ [60] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2018.
303
+ [61] Mark P. Woodward and Chelsea Finn. Active one-shot learning. ArXiv, abs/1702.06559, 2017.
304
+ [62] Mike Wu and Noah D. Goodman. A simple framework for uncertainty in contrastive learning. ArXiv, abs/2010.02038, 2020.
305
+ [63] Michelle Yuan, Hsuan-Tien Lin, and Jordan L. Boyd-Graber. Cold-start active learning through self-supervised language modeling. In EMNLP, 2020.
activelearninghelpspretrainedmodelslearntheintendedtask/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d5659f27de45c4fb03cdae906b27fe62b807e4e0c6eb904651429a90e029035
3
+ size 288243
activelearninghelpspretrainedmodelslearntheintendedtask/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e0205a3fc485f6e56f8680507e82a0d91bb021d394600c5433418e3454995ea0
3
+ size 370264
activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe4dcb453f161027adc39c3efc7f466ab2c9d5fbea16b4edc640140ef158f552
3
+ size 82407
activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e9f91ce96fdbe20d9aa7cc23ff8c7ca3fe92bd48fdd9f2cd9c39569eddb9da0f
3
+ size 101623
activelearningofclassifierswithlabelandseedqueries/f9840103-2625-43db-a3c2-be46606225f0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d15712f7c3de4b668c72f962ceef0dad4a6fc375d91bced119835dca9d7b47c4
3
+ size 390406
activelearningofclassifierswithlabelandseedqueries/full.md ADDED
@@ -0,0 +1,319 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Active Learning of Classifiers with Label and Seed Queries
2
+
3
+ Marco Bressan
4
+
5
+ Dept. of CS, Univ. of Milan, Italy marco.bressan@unimi.it
6
+
7
+ Nicolò Cesa-Bianchi
8
+
9
+ DSRC & Dept. of CS, Univ. of Milan, Italy nicolo.cesa-bianchi@unimi.it
10
+
11
+ Silvio Lattanzi
12
+
13
+ Google
14
+
15
+ silviol@google.com
16
+
17
+ Andrea Paudice
18
+
19
+ Dept. of CS, Univ. of Milan, Italy & Istituto Italiano di Tecnologia, Italy andrea.paudice@unimi.it
20
+
21
+ Maximilian Thiessen
22
+
23
+ Research Unit ML, TU Wien, Austria
24
+
25
+ maximilian.thiessen@tuwien.ac.at
26
+
27
+ # Abstract
28
+
29
+ We study exact active learning of binary and multiclass classifiers with margin. Given an $n$ -point set $X \subset \mathbb{R}^m$ , we want to learn an unknown classifier on $X$ whose classes have finite strong convex hull margin, a new notion extending the SVM margin. In the standard active learning setting, where only label queries are allowed, learning a classifier with strong convex hull margin $\gamma$ requires in the worst case $\Omega \left(1 + \frac{1}{\gamma}\right)^{\frac{m - 1}{2}}$ queries. On the other hand, using the more powerful seed queries (a variant of equivalence queries), the target classifier could be learned in $\mathcal{O}(m \log n)$ queries via Littlestone's Halving algorithm; however, Halving is computationally inefficient. In this work we show that, by carefully combining the two types of queries, a binary classifier can be learned in time poly $(n + m)$ using only $\mathcal{O}(m^2 \log n)$ label queries and $\mathcal{O}\left(m \log \frac{m}{\gamma}\right)$ seed queries; the result extends to $k$ -class classifiers at the price of a $k! k^2$ multiplicative overhead. Similar results hold when the input points have bounded bit complexity, or when only one class has strong convex hull margin against the rest. We complement the upper bounds by showing that in the worst case any algorithm needs $\Omega(km \log \frac{1}{\gamma})$ seed and label queries to learn a $k$ -class classifier with strong convex hull margin $\gamma$ .
30
+
31
+ # 1 Introduction
32
+
33
+ This work investigates efficient algorithms for exact active learning of binary and multiclass classifiers in the transductive setting. Given a set $X$ of $n$ points in $\mathbb{R}^m$ , our goal is to learn a function $h: X \to [k]$ belonging to some class $\mathcal{H}$ . In the classic active learning framework, $h$ identifies a subset of $X$ , and the algorithm learns $h$ via queries $\text{LABEL}(x)$ that return $h(x)$ for any given $x \in X$ . In that case, it is well-known that $h$ can be learned with $\mathcal{O}(\log n)$ label queries if the star number of $\mathcal{H}$ is finite [Hanneke and Yang, 2015]. Unfortunately, even simple families such as linear classifiers have unbounded star number, in which case $\Omega(n)$ label queries are needed in the worst case. To bypass this lower bound, it has become increasingly common to introduce enriched queries, that reveal additional information on $h$ and are plausible in practice. One notable example is that of comparison queries for linear separators in $\mathbb{R}^m$ which, given any pair of points $x, y \in X$ , reveal which one is closer to the decision boundary. As proven by Kane et al. [2017], under some margin assumptions the combination of LABEL and comparisons yields exponential savings, allowing one to learn linear separators with only $\mathcal{O}(\log n)$ queries.
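For intuition on the $\mathcal{O}(\log n)$ bound, the textbook example of a class with finite star number is 1-D thresholds, where binary search over a sorted pool recovers the target labeling with logarithmically many label queries. A self-contained sketch (the `label` callable is a stand-in for the LABEL oracle):

```python
def learn_threshold(points, label):
    """Learn a 1-D threshold classifier with O(log n) LABEL queries.

    `points` is sorted; `label(x)` returns 0 left of the threshold and
    1 at or right of it. Returns the index of the first point labeled 1
    (len(points) if there is none) and the number of queries used.
    """
    lo, hi = 0, len(points)          # invariant: the first 1 lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label(points[mid]) == 1:
            hi = mid                 # first 1 is at mid or earlier
        else:
            lo = mid + 1             # first 1 is strictly after mid
    return lo, queries

pts = list(range(100))
boundary, used = learn_threshold(pts, lambda x: int(x >= 37))
# boundary == 37, using at most ceil(log2(100)) queries
```

Linear separators in higher dimension do not admit such a search, which is exactly why the enriched queries discussed here are needed.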
36
+
37
+ In this work we combine LABEL queries with seed queries. For any $U \subset X$ and any $i \in [k]$ , a query $\mathrm{SEED}(U, i)$ returns an arbitrary point $x$ in $U \cap C_i$ , where $C_i = h^{-1}(i)$ , or NIL if no such $x$ exists. SEED queries are natural in certain settings like crowdsourcing—e.g., finding the image of a car, see also Beygelzimer et al. [2016]—and have been used implicitly or explicitly in several works [Hanneke, 2009, Balcan and Hanneke, 2012, Attenberg and Provost, 2010, Tong and Chang, 2001, Doyle et al., 2011, Bressan et al., 2021b]. It is not hard to see that, using SEED alone, one can implement Littlestone's Halving algorithm and learn any $h \in \mathcal{H}$ with $\mathcal{O}(\log |\mathcal{H}|)$ queries<sup>1</sup>. For instance, linear separators in $\mathbb{R}^m$ can be learned with $\mathcal{O}(m \log n)$ SEED queries. The catch is that, save for special cases, it is not known how to run the Halving algorithm in polynomial time. Therefore, using SEED to obtain a computationally efficient active learning algorithm is less trivial than it seems at first glance.
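To make the Halving-via-SEED observation concrete, here is a toy simulation (not the paper's algorithm; the oracle is simulated and all names are illustrative): maintain a version space over a finite hypothesis class, form its plurality-vote labeling, and use SEED to fetch a counterexample, which for binary classes eliminates at least half of the surviving hypotheses:

```python
from collections import Counter

def halving_with_seed(X, hypotheses, seed):
    """Learn the target labeling over X using only SEED queries.

    Each hypothesis is a dict mapping points to class labels. In every
    round we form the plurality-vote labeling of the surviving version
    space and ask SEED for a counterexample: a point whose true class i
    differs from the vote. For binary classes each counterexample removes
    at least half of the version space, as in Littlestone's Halving.
    """
    version_space = list(hypotheses)
    queries = 0
    while True:
        vote = {x: Counter(h[x] for h in version_space).most_common(1)[0][0]
                for x in X}
        classes = sorted({h[x] for h in version_space for x in X})
        counterexample = None
        for i in classes:
            U = [x for x in X if vote[x] != i]
            queries += 1
            x = seed(U, i)              # a class-i point mislabeled by the vote
            if x is not None:
                counterexample = (x, i)
                break
        if counterexample is None:
            return vote, queries        # the vote labeling is the target
        x, i = counterexample
        version_space = [h for h in version_space if h[x] == i]

# Demo: 1-D thresholds on 8 points; the simulated oracle knows the target.
X = list(range(8))
hyps = [{x: int(x >= t) for x in X} for t in range(9)]
target = {x: int(x >= 5) for x in X}
def seed(U, i):
    return next((x for x in U if target[x] == i), None)

learned, q = halving_with_seed(X, hyps, seed)
```

The catch mentioned above is visible here: the loop enumerates the version space explicitly, which is only feasible for small finite classes; for linear separators the version space is infinite and no polynomial-time implementation of this scheme is known in general.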
38
+
39
+ The goal of this work is understanding whether one can actively learn binary and multiclass classifiers efficiently by using LABEL and SEED queries together. In line with Kane et al. [2017] and other previous works, we make assumptions on $\mathcal{H}$. Our main assumption is that every class $C_i$ has strong convex hull margin $\gamma > 0$. This means that, for any $j \neq i$, $C_i$ and $C_j$ are linearly separable with a margin that is at least $\frac{\gamma}{2}$ times the diameter of $C_i$. Moreover, it is sufficient that this hold under some pseudometric $d_i$, unknown to the learner, that is homogeneous and invariant under translation (i.e., induced by a seminorm). This gives every class its own personalized notion of distance that can be sensitive to the "scale" of the class. This assumption strictly generalizes the classical SVM margin; and, when suitably generalized, it captures stability properties of center-based clusterings [Awasthi et al., 2012, Bilu and Linial, 2012].
40
+
41
+ Using LABEL alone, Bressan et al. [2021a] showed that learning a multiclass classifier with (strong) convex hull margin $\gamma > 0$ requires between $\Omega \left(1 + \frac{1}{\gamma}\right)^{(m - 1) / 2}$ and $\tilde{\mathcal{O}}\left(k^3 m^5\left(1 + \frac{1}{\gamma}\right)^m\log n\right)$ queries. This exponential dependence on $m$ implies that, unless $m \ll \log n / \log \frac{1}{\gamma}$, one needs $\Theta(n)$ LABEL queries in the worst case. On the other hand, our margin implies linear separability and thus, as noted above, an $\mathcal{O}(m\log n)$ SEED query bound for the binary case, but with a running time that can be superpolynomial. This leaves open the following problem, which is the subject of this work:
42
+
43
+ Can one learn a multiclass classifier $h$ with strong convex hull margin $\gamma > 0$ on $X \subset \mathbb{R}^m$ in time poly $(n + m)$ using a number of queries that grows polynomially with $m$ ?
44
+
45
+ We solve the above question in the affirmative by proving that, with a careful combination of LABEL and SEED queries, one can do much better than using either query in isolation. For binary classification $(k = 2)$ , we show:
46
+
47
+ Theorem 1. Any binary classifier $h$ with strong convex hull margin $\gamma > 0$ over $X \subset \mathbb{R}^m$ can be learned in time $\mathrm{poly}(n + m)$ using in expectation $\mathcal{O}(m^2\log n)$ LABEL queries and $\mathcal{O}\big(m\log \frac{m}{\gamma}\big)$ SEED queries.
48
+
49
+ Note that, unless $\gamma$ is exceedingly small, Theorem 1 uses far fewer SEED than LABEL queries, which is a strength since SEED is arguably more expensive to implement. For instance, if $\gamma = \Omega(1/\mathrm{poly}(m))$ then we use $\mathcal{O}(m^2\log n)$ LABEL queries but only $\mathcal{O}(m\log m)$ SEED queries. To prove Theorem 1 we design a novel algorithm that works in two phases. The first phase learns what we call an $\alpha$-rounding of $X$ w.r.t. $h$. Loosely speaking, this is a partition $(X_1,X_2)$ of $X$ such that each $X_i$ lies inside $\alpha\mathrm{conv}(C_i)$ where $\mathrm{conv}(C_i)$ is the convex hull of $C_i$ (see below for the formal definition). We show that, in polynomial time and using $\mathcal{O}(m^2\log n)$ LABEL queries, one can compute an $\alpha$-rounding of $X$ for $\alpha = \mathcal{O}(m^3)$. This allows us to put each $X_i$ in near-isotropic position, so that $X_i$ has radius 1, and to separate $C_1\cap X_i$ from $C_2\cap X_i$ with margin $\eta = \Omega (\gamma /m^3)$. In the second phase, the algorithm uses SEED to implement a cutting plane algorithm that learns $C_1\cap X_i$ and $C_2\cap X_i$ using $\mathcal{O}\big(m\log \frac{1}{\eta}\big) = \mathcal{O}\big(m\log \frac{m}{\gamma}\big)$ queries in time $\mathrm{poly}(n + m)$.
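The cutting-plane phase is not reproduced here; as a hedged, much simpler stand-in that consumes the same kind of counterexample oracle, a Perceptron driven by SEED queries also terminates with a consistent halfspace on separable data, though it needs $\mathcal{O}(1/\eta^2)$ updates rather than the $\mathcal{O}(m \log \frac{1}{\eta})$ of a true cutting-plane method (all names and the toy data below are illustrative):

```python
import numpy as np

def learn_halfspace_with_seed(X, seed, max_updates=10000):
    """Perceptron driven by SEED counterexamples.

    Labels are +1/-1; seed(U, y) returns an index in U of a class-y
    point, or None if there is none. Returns a weight vector w (with a
    bias coordinate) that classifies every point consistently.
    """
    n, m = X.shape
    Xh = np.hstack([X, np.ones((n, 1))])        # homogeneous coords (bias term)
    w = np.zeros(m + 1)
    for _ in range(max_updates):
        pred = np.sign(Xh @ w)
        update = None
        for y in (1, -1):
            U = [i for i in range(n) if pred[i] != y]   # not predicted class y
            i = seed(U, y)                      # one SEED query per class
            if i is not None:
                update = (i, y)
                break
        if update is None:
            return w                            # consistent with both classes
        i, y = update
        w = w + y * Xh[i]                       # classic Perceptron step
    raise RuntimeError("update budget exceeded")

# Toy data: a linearly separable sample with a comfortable margin.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])
X = rng.normal(size=(200, 2))
X = X[np.abs(X @ w_true) > 0.5]                 # enforce a margin
labels = np.sign(X @ w_true)
def seed(U, y):                                 # simulated SEED oracle
    return next((i for i in U if labels[i] == y), None)

w = learn_halfspace_with_seed(X, seed)
preds = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
```

Termination is exactly the SEED-consistency condition: when both classes return NIL on the current disagreement regions, the hypothesis agrees with the target on every pool point.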
50
+
51
+ Using a recursive approach, Theorem 1 can be extended to $k > 2$ at the price of a $k!k^2$ multiplicative overhead:
52
+
53
+ Theorem 2. Any $k$ -class classifier $h$ with strong convex hull margin $\gamma > 0$ over $X \subset \mathbb{R}^m$ can be learned in time $\mathrm{poly}(n + m)$ using in expectation $\mathcal{O}(k! k^2 m^2 \log n)$ LABEL queries and $\mathcal{O}\big(k! k^2 m \log \frac{m}{\gamma}\big)$ SEED queries.
54
+
55
+ We also consider the case where only one class has strong convex hull margin against the rest of the points w.r.t. a metric $d$ induced by a norm $\| \cdot \| _d$ . In this case we obtain a bound parameterized by the distortion $\kappa_{d}$ of $d$ (see Section 1.1):
56
+
57
+ Theorem 3. Suppose $C \subset X$ has strong convex hull margin $\gamma \in (0,1]$ w.r.t. a metric $d$ with distortion $\kappa_d < \infty$ . Given only $X$ , one can learn $C$ in time $\mathrm{poly}(n + m)$ using $\mathcal{O}(\log n)$ LABEL queries and $\mathcal{O}\left(m\log \frac{\kappa_d}{\gamma}\right)$ SEED queries in expectation.
58
+
59
+ As an application of our cutting-plane algorithm we also show that one can learn a $k$ -class classifier whose classes are pairwise linearly separable in time $\mathrm{poly}(n + m)$ using, in expectation, $\mathcal{O}(k^2m^3B)$ SEED queries if every $x \in X$ has rational coordinates that can be encoded in $B$ bits, and $\mathcal{O}(k^2m(B + m\log m))$ SEED queries if every $x \in X$ lies on the grid over $[-1,1]^m$ with stepsize $2^{-B/m}$. It should be noted that, unlike most previous algorithms, ours do not need knowledge of $\gamma$. Moreover, all the bounds above can be turned from expectation to high probability.
60
+
61
+ Finally, we show that the algorithms of Theorem 1 and 2 are nearly optimal:
62
+
63
+ Theorem 4. For all $m \geq 2$ , all $k \geq 2$ , and all $\gamma \leq m^{-3/2}/16$ there exists a distribution of instances with $k$ classes in $\mathbb{R}^m$ with strong convex hull margin $\gamma$ where any randomized algorithm using SEED and LABEL queries that returns $\mathcal{C}$ with probability at least $\frac{1}{2}$ makes at least $\left\lfloor \frac{k}{2} \right\rfloor \frac{m}{24} \log \frac{1}{2\gamma}$ total queries in expectation.
64
+
65
+ # 1.1 Preliminaries and notation
66
+
67
+ The input to our problem is a pair $(X, k)$ , where $X \subset \mathbb{R}^m$ and $k \in \mathbb{N}$ with $2 \leq k \leq n = |X|$ . The algorithm has access to oracles $O_{\mathrm{LABEL}}$ and $O_{\mathrm{SEED}}$ which provide $\mathrm{LABEL}$ and $\mathrm{SEED}$ queries, respectively. The oracles $O_{\mathrm{LABEL}}, O_{\mathrm{SEED}}$ behave consistently with some target classifier $h: X \to [k]$ . For any $x \in X$ , $\mathrm{LABEL}(x)$ returns $h(x)$ . For any $U \subseteq X$ and any $i \in [k]$ , $\mathrm{SEED}(U, i)$ returns an arbitrary element $x \in U \cap C_i$ if $U \cap C_i \neq \emptyset$ , and NIL otherwise, where $C_i = h^{-1}(i)$ . We often think of $h$ as the partition $\mathcal{C} = (C_1, \ldots, C_k)$ and we call each $C_i$ a class or cluster.
68
+
69
+ A pseudometric is a symmetric and subadditive function $d: \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}_{\geq 0}$ such that $d(x, x) = 0$ for all $x \in \mathbb{R}^m$ ; unlike a metric, $d(x, y)$ can be 0 for $x \neq y$ . In this work $d$ is always induced by a seminorm and thus homogeneous and invariant under translation: $d(u + ax, u + ay) = |a| d(x, y)$ for all $x, y, u \in \mathbb{R}^m$ and all $a \in \mathbb{R}$ . For a pseudometric $d$ and a set $A \subset \mathbb{R}^m$ , we let $\phi_d(A) = \sup \{d(x, y): x, y \in A\}$ denote the diameter of $A$ under $d$ . For $x \in \mathbb{R}^m$ and $r \geq 0$ we denote by $B_d^m(x, r)$ and $S_d^{m-1}(x, r)$ respectively the closed ball and the hypersphere with center $x$ and radius $r$ in $\mathbb{R}^m$ under $d$ . When $d$ is omitted we assume $d = d_{\mathrm{euc}}$ where $d_{\mathrm{euc}}$ is the Euclidean metric. We may also omit the superscript if clear from the context. The distortion of a (pseudo)metric $d$ is $\kappa_d = \sup_{u, v \in S^{m-1}(0,1)} \| u \|_d / \| v \|_d$ .
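For the common special case of a seminorm of the form $\| x \|_d = \| W x \|_2$ (the matrix $W$ below is purely illustrative), the distortion reduces to the ratio of the extreme singular values of $W$, which is easy to compute:

```python
import numpy as np

def distortion(W):
    """Distortion of the seminorm ||x||_d = ||W x||_2: the ratio of the
    largest to the smallest value of ||W u||_2 over unit vectors u, i.e.
    sigma_max(W) / sigma_min(W). It is infinite for a proper seminorm,
    whose matrix W has a nontrivial kernel."""
    s = np.linalg.svd(W, compute_uv=False)      # singular values, descending
    return np.inf if s[-1] == 0 else s[0] / s[-1]

W = np.diag([3.0, 1.0])     # stretches the first axis by a factor of 3
kappa = distortion(W)       # 3.0; the Euclidean metric (W = I) has distortion 1
```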
70
+
71
+ For any set $A \subset \mathbb{R}^m$ , any $\mu \in \mathbb{R}^m$ , and any $\lambda > 0$ , let $\sigma(A, \mu, \lambda) = \mu + \lambda (A - \mu)$ be the scaling of $A$ about $\mu$ by a factor of $\lambda$ . For two sets $A, B \subset \mathbb{R}^m$ , we write $A \leq \lambda B$ if $A \subseteq \sigma(B, z, \lambda)$ for some $z \in \mathbb{R}^m$ . We may use $x$ in place of $A$ if $A = \{x\}$ . If $A$ is bounded, then $\mathrm{MVE}(A)$ denotes the minimum-volume enclosing ellipsoid (MVEE, or Löwner–John ellipsoid) of $A$ . Our proofs repeatedly use John's theorem; that is, $\sigma(E, \mu, 1/m) \subseteq \mathrm{conv}(A)$ where $\mu$ is the center of $E = \mathrm{MVE}(A)$ and $\mathrm{conv}(A)$ is the convex hull of $A$ . Given $A, B \subseteq \mathbb{R}^m$ , we say that $A$ and $B$ are linearly separable with margin $r$ if there exist $u \in S^{m-1}(0,1)$ and $b \in \mathbb{R}$ such that $\langle u, x\rangle + b \leq -r$ for all $x \in A$ and $\langle u, x\rangle + b \geq r$ for all $x \in B$ .
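The MVEE can be approximated numerically with Khachiyan's first-order algorithm; the numpy sketch below is a standard textbook implementation, with a loose tolerance chosen for speed, and is not anything used in the paper:

```python
import numpy as np

def mvee(points, tol=1e-3):
    """Approximate minimum-volume enclosing ellipsoid via Khachiyan's
    algorithm. Returns (A, c) such that {x : (x - c)^T A (x - c) <= 1}
    covers `points` up to the tolerance."""
    n, m = points.shape
    Q = np.column_stack([points, np.ones(n)]).T   # lift points to R^{m+1}
    u = np.full(n, 1.0 / n)                       # barycentric weights
    err = tol + 1.0
    while err > tol:
        X = (Q * u) @ Q.T                         # weighted moment matrix
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = int(np.argmax(M))                     # worst-covered point
        step = (M[j] - m - 1.0) / ((m + 1) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step                          # shift weight toward it
        err = float(np.linalg.norm(new_u - u))
        u = new_u
    c = points.T @ u                              # ellipsoid center
    A = np.linalg.inv(points.T @ (points * u[:, None]) - np.outer(c, c)) / m
    return A, c

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 2))
A, c = mvee(P)
# every row x of P satisfies (x - c)^T A (x - c) <= 1 up to the tolerance
```

Shrinking the resulting ellipsoid about its center by a factor $1/m$, as in John's theorem above, then gives an ellipsoid contained in the convex hull of the points.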
72
+
73
+ We consider classifiers satisfying the following property:
74
+
75
+ Definition 5. A class $C_i$ has strong convex hull margin $\gamma > 0$ if there exists a pseudometric $d_i$ induced by a seminorm over $\mathbb{R}^m$ such that $d_i(\mathrm{conv}(C_j),\mathrm{conv}(C_i)) > \gamma \phi_{d_i}(C_i)$ for all $j\in [k]\setminus \{i\}$ . If this holds for all $i\in [k]$ then we say $\mathcal{C}$ has strong convex hull margin $\gamma$ .
+
+ Remarks. The margin of Definition 5 captures natural scenarios that the SVM margin does not. For instance, suppose we are clustering fruits on the basis of weight and colour. First, a fruit weighing more than, say, 1.5 times the typical weight of a species probably does not belong to it; but the typical weight varies greatly across species. Our margin captures this scenario, as it is expressed as a fraction of the class's diameter. Second, different fruit species have different separating features; for instance, weight does not separate oranges from bananas well, but colour does. Our margin captures this aspect too, by allowing the metric that determines the margin to be a function of the class. It is also known that the SVM margin $\gamma_{\mathrm{SVM}}$ can be arbitrarily smaller than $\gamma$; for instance, there are simple cases with $\gamma > 1$ but $\gamma_{\mathrm{SVM}} < e^{-n}$ (see Bressan et al. [2021a]). Hence a large $\gamma$ does not imply good bounds for standard algorithms based on the SVM margin (e.g., the Perceptron).
+
+ # 2 Related work
+
+ It is well known that active learning may achieve exponential savings in label complexity. That is, there are natural concept classes that can be learned with a number of LABEL queries exponentially smaller than that of passive learning. Hanneke and Yang [2015] characterize the label complexity of concept classes in terms of their star number. However, the star number of many natural classes such as linear classifiers is unbounded, implying a strong lower bound of $\Omega(n)$ LABEL queries.
+
+ This and other negative results motivated research on enriched queries. Kane et al. [2017] prove that active learnability is characterized by the inference dimension of the concept class $\mathcal{H}$ under the set of allowed queries $\mathcal{Q}$ , as long as those queries are local (i.e., are a function of a constant number of instances). This yields exponential savings when $\mathcal{H}$ is the class of linear separators and $\mathcal{Q}$ contains label queries and comparison queries (which, given two points, reveal which one is closer to the decision boundary), provided the classes have SVM margin or bounded bit complexity. Hopkins et al. [2020] give similar results under distributional assumptions. Unfortunately, bounded inference dimension does not automatically yield efficient algorithms, although it implies active learning algorithms with bounded memory [Hopkins et al., 2021].
+
+ SEED queries and their variants are motivated and used by Hanneke [2009] as positive example queries, by Balcan and Hanneke [2012] as conditional class queries, and by Beygelzimer et al. [2016] and Attenberg and Provost [2010] as search queries. They are also used implicitly by Tong and Chang [2001], Doyle et al. [2011], and Vikram and Dasgupta [2016]. SEED queries have been used in cluster recovery [Bressan et al., 2021b] and yield exponential savings in non-realizable learning settings [Balcan and Hanneke, 2012]. It is also easy to see that SEED queries are equivalent to the partial equivalence queries of Maass and Turán [1992] and to the subset plus superset queries of Angluin [1988]. To the best of our knowledge, no work combines LABEL and SEED as we do here.
+
+ Little is known about the SEED complexity of learning a concept class $\mathcal{H}$ actively in polynomial time. On the one hand, the inference dimension lower bounds of Kane et al. [2017] are inapplicable, as SEED queries are not local. On the other hand, the Littlestone dimension of $\mathcal{H}$ yields an upper bound, but not necessarily an efficient algorithm; in fact, it is well known that (some sub-problem solved by) Halving is hard in general, see Gonen et al. [2013]. For $k = 2$, we can use SEED to emulate equivalence queries, for which polynomial-time algorithms are known in some special cases. In particular, the algorithm of Maass and Turán [1994] could replace our cutting-planes subroutine under an implicit discretization of the space through a grid with step size $\mathcal{O}(\gamma / m^4)$. However, this gives a polynomial-time algorithm that uses $\mathcal{O}(m^2 \log m / \gamma)$ SEED queries, which is $\mathcal{O}(m)$ times our bound. Moreover, Maass and Turán [1994] use proper equivalence queries (i.e., the queried concept must be in the class), for which they show a lower bound of $\Omega(m^2 \log m / \gamma)$. Finally, these techniques do not seem to extend to the case $k > 2$.
+
+ Our notion of margin strengthens the convex hull margin of Bressan et al. [2021a] by requiring $d(\mathrm{conv}(C_j),\mathrm{conv}(C_i)) > \gamma \phi (C_i)$ rather than $d(C_{j},C_{i}) > \gamma \phi (C_{i})$. It is not hard to see that our strong convex hull margin can be arbitrarily smaller than the convex hull margin, so the new requirement is strictly more demanding. Finally, the polytope margin of Gottlieb et al. [2018] assumes that each class is in the intersection of a finite number of halfspaces with margin. It is easy to see that this condition is strictly stronger than ours.
+
+ # 3 Upper Bounds
+
+ This section gives the proofs of Theorem 1 and Theorem 2. The algorithm behind both theorems has two phases which are described in the next subsections. The case $k > 2$ is essentially the same as for $k = 2$ , except for an adaptation in the second phase.
+
+ # 3.1 The First Phase: Rounding the Classes
+
+ The first phase of our algorithms learns what we call an $\alpha$ -rounding of $X$ .
+
+ Definition 6. An $\alpha$ -rounding of $X$ (w.r.t. $h$ ) is a sequence of pairs $((X_i, E_i))_{i \in [k]}$ where $(X_i)_{i \in [k]}$ is a partition of $X$ , and where $E_i$ for $i \in [k]$ is an ellipsoid such that $X_i \subseteq E_i$ and $E_i \leq \alpha \operatorname{conv}(C_i)$ .
+
+ The idea is that, if $((X_i,E_i))_{i\in [k]}$ is an $\alpha$ -rounding of $X$ , then $E_{i}$ gives an approximation of the pseudometric $d_{i}$ witnessing the strong convex hull margin of $C_i$ . Indeed, let $p_i$ be the pseudometric induced by $E_{i}$ , the one such that $E_{i} = B_{p_{i}}(\mu_{i},1)$ where $\mu_{i}$ is the center of $E_{i}$ ; we prove:
+
+ Lemma 7. If $((X_i, E_i))_{i \in [k]}$ is an $\alpha$ -rounding of $X$ then $p_i(\operatorname{conv}(X_i \cap C_i), \operatorname{conv}(X_i \cap C_j)) \geq \frac{\gamma}{\alpha}$ for all distinct $i, j \in [k]$ .
+
+ Proof. If $\mu_{i}$ is the center of $E_{i}$ , then $E_{i} = B_{p_{i}}(\mu_{i},1)$ . Let $d_{i}$ be any pseudometric witnessing that $C_i$ has strong convex hull margin $\gamma >0$ . As the margin is invariant under scaling, we can assume $\phi_{d_i}(C_i) = 1$ and $\mathrm{conv}(C_i)\subseteq B_{d_i}(z_i,1)$ for some $z_{i}\in \mathbb{R}^{m}$ . Therefore:
+
+ $$
+ B_{p_i}(\mu_i, 1) = E_i \leq \alpha \operatorname{conv}(C_i) \subseteq \alpha B_{d_i}(z_i, 1) \tag{1}
+ $$
+
+ As $p_i$ and $d_i$ are homogeneous and invariant under translation, this implies $p_i \geq \frac{d_i}{\alpha}$ and thus $p_i(\operatorname{conv}(X_i \cap C_j), \operatorname{conv}(X_i \cap C_i)) \geq \frac{1}{\alpha} d_i(\operatorname{conv}(X_i \cap C_j), \operatorname{conv}(X_i \cap C_i))$. Moreover, by monotonicity under taking subsets and by the margin assumption, $d_i(\operatorname{conv}(X_i \cap C_j), \operatorname{conv}(X_i \cap C_i)) \geq d_i(\operatorname{conv}(C_j), \operatorname{conv}(C_i)) \geq \gamma \phi_{d_i}(C_i) = \gamma$. Combining the two inequalities yields the claim.
+
+ We will use Lemma 7 in the second phase. First, we show how to compute an $\alpha$ -rounding of $X$ efficiently. We sample points independently and uniformly at random from $X$ until we find $\Theta(m^2)$ points $S_i$ with the same label $i$ . As the VC dimension of ellipsoids in $\mathbb{R}^m$ is $\mathcal{O}(m^2)$ , by standard generalization error bounds with constant probability the MVE of $S_i$ contains at least half of $C_i$ . We then store that MVE together with the index $i$ , remove $S_i$ from $X$ , and repeat until $X$ becomes empty. At that point for each $i \in [k]$ we "merge" together all points in the MVEs that were computed for class $i$ , and compute the MVE of this merged set. We show that this produces an $\alpha$ -rounding of $X$ after $\mathcal{O}(k \log n)$ rounds in expectation. The resulting algorithm Round is listed below; Figure 1 depicts its behaviour on a toy example.
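The sampling-and-peeling loop above can be sketched in a few lines. Two simplifications below are ours, not the paper's: the LABEL oracle is simulated by a ground-truth label array, and a crude enclosing ball stands in for the minimum-volume enclosing ellipsoid $\mathrm{MVE}(S)$.

```python
import random
import numpy as np

# Sketch of the first phase (Round): repeatedly sample labels until some class
# has c*m^2 samples, enclose them, peel off all enclosed points, and repeat.
def enclosing_ball(points):
    center = np.mean(points, axis=0)
    radius = max(np.linalg.norm(p - center) for p in points)
    return center, radius

def round_phase(points, labels, k, m, c=1, seed=0):
    rng = random.Random(seed)
    X = dict(enumerate(points))        # id -> point; removal is then easy
    parts = {i: [] for i in range(k)}  # accumulates X_i across iterations
    while X:
        samples = {i: [] for i in range(k)}
        while True:                    # draw u.a.r. (with replacement), "LABEL"
            idx = rng.choice(list(X))
            i = labels[idx]
            samples[i].append(idx)
            if len(samples[i]) == c * m * m:
                break                  # some class reached c*m^2 samples
        center, radius = enclosing_ball([X[j] for j in samples[i]])
        grabbed = [j for j in X if np.linalg.norm(X[j] - center) <= radius]
        parts[i].extend(grabbed)       # X_i^{h_i} = X ∩ ball, then peel it off
        for j in grabbed:
            del X[j]
    return parts

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.2]])
labs = [0, 0, 0, 1, 1, 1]
parts = round_phase(pts, labs, k=2, m=2, c=1)  # c*m^2 = 4 samples per round
```

With well-separated clusters each ball contains only points of one class, so `parts` recovers the classes exactly; in general, Round only guarantees the enclosing-ellipsoid property of Definition 6.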
+
+ Lemma 8. Round $(X,k)$ returns an $m^2 (m + 1)$ -rounding of $X$ in time $\mathrm{poly}(n + m)$ using $\mathcal{O}(k^2 m^2\log n)$ LABEL queries in expectation.
+
+ Proof sketch. First we show that $E_{i} \leq m^{2}(m + 1) \operatorname{conv}(C_{i})$ for all $i \in [k]$ . This is trivial if $E_{i} = \emptyset$ , so let $E_{i} \neq \emptyset$ and let $\ell_{i} \geq 1$ be the value of $h_{i}$ at return time. For every $h = 1, \ldots, \ell_{i}$ let $E_{i}^{h} = \mathrm{MVE}(S_{i}^{h})$ and let $\mu_{i}^{h}$ be the center of $E_{i}^{h}$ . Using John's theorem one can show that $\sigma(E_{i}, \mu_{i}, \frac{1}{m}) \subseteq \operatorname{conv}\bigcup_{h=1}^{\ell_{i}} \sigma\big(\operatorname{conv}(S_{i}^{h}), \mu_{i}^{h}, m\big)$ and $\sigma\big(\operatorname{conv}(S_{i}^{h}), \mu_{i}^{h}, m\big) \subseteq \sigma(\operatorname{conv}(C_{i}), \mu, m(m + 1))$ . By taking the union over all $h \in [\ell_{i}]$ we conclude that $\sigma\big(E_{i}, \mu_{i}, \frac{1}{m}\big) \subseteq \sigma(\operatorname{conv}(C_{i}), \mu, m(m + 1))$ , that is, $E_{i} \leq m^{2}(m + 1) \operatorname{conv}(C_{i})$ . It is also easy to see that $(X_{i})_{i \in [k]}$ is a partition of $X$ , hence $((X_{i}, E_{i}))_{i \in [k]}$ is an $m^{2}(m + 1)$ -rounding of $X$ .
+
+ For the running time, the for loops perform $k \leq n$ iterations, and the while loop performs at most $n$ iterations as each iteration strictly decreases the size of $X$ . The running time of any iteration is dominated by the computation of $\mathrm{MVE}(S_i)$ or $\mathrm{MVE}(X_i)$ , which takes time $\mathrm{poly}(n + m)$ , see above. Hence $\operatorname{Round}(X, k)$ runs in time $\mathrm{poly}(n + m)$ . For the query bounds, the while loop makes $\mathcal{O}(m^2 k)$ LABEL queries per iteration. By standard generalization bounds, since the VC dimension of
+
+ Algorithm 1: Round $(X,k)$
+ for $i\in [k]$ do $h_i\gets 0$
+ while $X\neq \emptyset$ do
+     draw points independently u.a.r. from $X$ and LABEL them until, for some $i\in [k]$, we have drawn a (multi)set of $cm^2$ points from $C_i$
+     $h_i\gets h_i + 1$
+     $S_{i}^{h_{i}}\gets$ the sample of $cm^2$ points from $C_i$
+     $X_{i}^{h_{i}}\gets X\cap \mathrm{MVE}(S_{i}^{h_{i}})$
+     $X\leftarrow X\setminus X_i^{h_i}$
+ for $i\in [k]$ do
+     $X_{i}\gets X_{i}^{1}\cup \ldots \cup X_{i}^{h_{i}}$ (set to $\varnothing$ if $h_i = 0$)
+     $E_{i}\gets \operatorname{MVE}(X_{i})$ (set to $\varnothing$ if $X_{i} = \varnothing$)
+ return $((X_i,E_i))_{i\in [k]}$
+
+ ![](images/5f881e8fec1fdfe23fe6b1e8bb08885c276e766b0f1568cd35da7f785ee0afe7.jpg)
+ Figure 1: A toy example in $\mathbb{R}^2$ with $k = 2$ ; black points are in $C_1$ , blue points in $C_2$ . Round $(X,2)$ computes first the ellipsoids $E_2^1,E_2^2$ (dotted black, from left to right), and then the ellipsoids $E_1^1,E_1^2,E_1^3$ (dotted blue, from left to right). Finally it computes $E_{1}$ (solid blue) and $E_{2}$ (solid black). $X_{1}$ and $X_{2}$ consist of the points in the blue and white areas respectively. Note that $X_{2}$ contains a point of $C_1$ .
+
+ ellipsoids in $\mathbb{R}^m$ is $\mathcal{O}(m^2)$ , $E_i^h$ contains at least half of $X \cap C_i$ with probability at least $\frac{1}{2}$ , and thus the expected number of rounds before $X$ becomes empty is in $\mathcal{O}(k \lg n)$ , see Bressan et al. [2021a]. We conclude that $\operatorname{Round}(X, k)$ uses $\mathcal{O}(m^2 k^2 \lg n)$ LABEL queries in expectation.
+
+ # 3.2 The Second Phase: Finding a Separator via Cutting Planes
+
+ Let $((X_i, E_i))_{i \in [k]}$ be the output of $\operatorname{Round}(X, k)$ , and fix $i \in [k]$ . For each $j \in [k] \setminus \{i\}$ , we want to separate $X_i \cap C_i$ from $X_i \cap C_j$ . To this end, first we use $E_i$ to perform a change of coordinates; this puts $X_i$ inside the unit ball and ensures that $X_i \cap C_i$ and $X_i \cap C_j$ are linearly separated with margin $\gamma_{\mathrm{SVM}} = \Omega(\gamma m^{-3})$ . Next, by calling $C_i$ the positive class $(+1)$ and $C_j$ the negative class $(-1)$ , and letting $X = X_i$ for simplicity, one can reduce the task to the following problem. Consider a partial classifier $h: X \to \{+1, -1, *\}$ . The algorithm has access to an oracle answering queries $\operatorname{SEED}(U, y)$ where $U \subseteq X$ and $y \in \{+1, -1\}$ , and its goal is to compute a separator of $X$ :
+
+ Definition 9. Let $X \subset \mathbb{R}^m$ and $h: X \to \{+1, -1, *\}$ . A separator of $X$ (w.r.t. $h$ ) is a partition $(X_+, X_-)$ of $X$ such that, for every $x \in X$ , if $h(x) = +1$ then $x \in X_+$ and if $h(x) = -1$ then $x \in X_-$ .
+
+ A separator of $X$ can be learned, for instance, by the Perceptron (using SEED to find counterexamples). However, this would yield a query and running time bound of $\mathcal{O}(1 / \gamma_{\mathrm{SVM}}^2) = \mathcal{O}(m^6 / \gamma^2)$ . We provide CPLearn, a cutting-plane algorithm based on SEED that is much more query-efficient (in fact, near-optimal):
+
+ Theorem 10. Let $X \subset \mathbb{R}^m$ and $h: X \to \{+1, -1, *\}$, and suppose $h^{-1}(+1)$ and $h^{-1}(-1)$ are linearly separable with margin $r$. Given $X$ and access to SEED for labels $\{+1, -1\}$, CPLearn($X$) computes a separator of $X$ w.r.t. $h$ using $\mathcal{O}(m\log \frac{R}{r})$ SEED queries in expectation, where $R = \max_{x\in X}\| x\| _2$, and, with high probability, runs in time $\mathrm{poly}(m + |X|)$.
+
+ Proof. (Sketch) First, we lift $X$ to $\mathbb{R}^{m + 1}$ . This reduces the problem to finding a homogeneous linear separator. To this end we let $X' = \{x': x \in X\}$ where $x'$ is obtained by appending to $x$ an $(m + 1)$ -th coordinate that is equal to $R$ , and we extend $h$ to $X'$ in the obvious way. It is easy to prove that $X'$ has radius at most $2R$ and that in $X'$ the two classes are linearly separable with margin $\frac{r}{2}$ .
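The two claimed properties of the lift can be verified numerically on toy data (the points, $u$, $b$, and $r$ below are illustrative): appending an $(m+1)$-th coordinate equal to $R$ keeps the radius within $2R$, and an affine separator with margin $r$ becomes a homogeneous one with margin at least $r/2$.

```python
import numpy as np

# Numerical check of the lifting step: x' = (x, R).
pts = [np.array(p) for p in [(-2.0, 0.5), (-1.5, -1.0), (1.5, 0.0), (2.0, 1.0)]]
u, b, r = np.array([1.0, 0.0]), 0.0, 1.5   # <u, x> + b separates with margin r
R = max(np.linalg.norm(x) for x in pts)
lifted = [np.append(x, R) for x in pts]

radius_ok = all(np.linalg.norm(x) <= 2 * R for x in lifted)
w = np.append(u, b / R)                    # homogeneous separator for the lift
w = w / np.linalg.norm(w)                  # back on the unit sphere
margins = [abs(np.dot(w, x)) for x in lifted]
```

Here `radius_ok` holds since $\|x'\|^2 = \|x\|^2 + R^2 \leq 2R^2$, and every lifted margin stays above $r/2$.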
+
+ Next, we learn a separator of $X'$ w.r.t. $h$ via cutting planes—see, e.g., Mitchell [2003]. Let $V_0 = B^{m+1}(0,1)$ . Every point $u \in V_0$ identifies the halfspace $H(u) = \{z \in \mathbb{R}^{m+1} : \langle u, z \rangle \geq 0\}$ . For $i = 1, 2, \ldots, V_i$ will be our version space, and we compute $V_{i+1}$ from $V_i$ as follows. Let $\mu_i$ be the center of mass of $V_i$ , and let $X'_i = X' \cap H(\mu_i)$ . By issuing $\mathrm{SEED}(X'_i, -1)$ and $\mathrm{SEED}(X' \setminus X'_i, +1)$ we learn whether $(X'_i, X' \setminus X'_i)$ is a separator of $X'$ w.r.t. $h$ , in which case we return the corresponding partition of $X$ , or we obtain a point $u_i$ . In the second case, we let $V_{i+1} = V_i \cap U_i$ where $U_i = \{x \in \mathbb{R}^{m+1} : h(u_i) \cdot \langle u_i, x \rangle \geq 0\}$ . By [Gilad-Bachrach et al., 2004, Theorem 2] this procedure returns a separator of $X'$ w.r.t. $h$ using at most $\frac{2m}{\log\frac{e}{e-1}} \log\frac{4R}{r/2} = O\left(m\log\frac{R}{r}\right)$ queries.
+
+ Unfortunately, computing $\mu_{i}$ is hard in general [Rademacher, 2007]. We instead compute an estimate $\hat{\mu}_i$ that, used in place of $\mu_{i}$, ensures $\frac{\mathrm{vol}(V_{i + 1})}{\mathrm{vol}(V_i)}$ is bounded away from 1 with high probability; the expected query bound follows by adapting the proof of [Gilad-Bachrach et al., 2004]. Assume for the moment that $V_{i}$ is well-rounded, that is, it contains a ball of radius $1/\mathrm{poly}(m)$ and is contained in a ball of radius 1. To compute $\hat{\mu}_i$ we average over $\mathrm{poly}(n + m)$ independent uniform points from $V_{i}$, which can be drawn efficiently thanks to the rounding condition. At this point we use $\hat{\mu}_i$ in place of $\mu_{i}$ to invoke SEED and obtain a violated constraint $U_{i}$. However, setting $V_{i + 1} = V_{i}\cap U_{i}$ could make $V_{i + 1}$ far from rounded (too "thin"), making sampling inefficient at the next round. Therefore we rotate $U_{i}$ so as to obtain a weaker constraint $U_{i}^{*}$, one that still contains $V_{i}\cap U_{i}$ but has $\hat{\mu}_i$ on its boundary, and let $V_{i + 1} = V_{i}\cap U_{i}^{*}$. By the assumption on $\hat{\mu}_i$ this implies that $\mathrm{vol}(V_{i + 1})\geq \frac{1}{3}\mathrm{vol}(V_i)$; therefore, by sampling uniform points from $V_{i}$ we can obtain a large sample in $V_{i + 1}$, from which we can bring $V_{i + 1}$ back into a well-rounded position. See the full proof for all the details.
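The loop above can be sketched end to end on a toy instance. Three simplifications below are ours, not the paper's: the SEED oracle is simulated from hidden ground-truth labels, the center of mass is estimated by naive rejection sampling over the unit ball, and the shallow-cut and re-rounding machinery is omitted.

```python
import numpy as np

# Toy rendition of the CPLearn loop: lift, estimate the version-space center,
# query SEED on both sides of the induced partition, and cut on a counterexample.
def seed_query(U, y, truth):
    for idx in U:
        if truth[idx] == y:
            return idx                 # a point of U with label y
    return None                        # NIL

def cp_learn(points, truth, rng, n_samples=5000, max_rounds=100):
    R = max(np.linalg.norm(x) for x in points)
    lifted = [np.append(x, R) for x in points]   # homogeneous separator suffices
    cuts = []                                    # constraints <c, v> >= 0
    for _ in range(max_rounds):
        # estimate the center of mass by rejection sampling from the unit ball
        v = rng.uniform(-1, 1, size=(n_samples, lifted[0].size))
        ok = np.linalg.norm(v, axis=1) <= 1.0
        for c in cuts:
            ok &= v @ c >= 0.0
        if not ok.any():
            break
        mu = v[ok].mean(axis=0)
        pred_pos = [i for i in range(len(lifted)) if np.dot(mu, lifted[i]) >= 0]
        pred_neg = [i for i in range(len(lifted)) if np.dot(mu, lifted[i]) < 0]
        bad = seed_query(pred_pos, -1, truth)      # SEED(X'_i, -1)
        if bad is None:
            bad = seed_query(pred_neg, +1, truth)  # SEED(X' \ X'_i, +1)
        if bad is None:
            return pred_pos, pred_neg              # both NIL: a valid separator
        cuts.append(truth[bad] * lifted[bad])      # keep v with y * <x', v> >= 0
    return None

pts = [np.array(p) for p in [(1.0, 0.5), (1.5, -0.4), (2.0, 1.0),
                             (-1.0, 0.2), (-1.7, 0.8), (-2.0, -1.0)]]
truth = [+1, +1, +1, -1, -1, -1]
result = cp_learn(pts, truth, np.random.default_rng(0))
```

By construction, whenever the loop returns, both SEED queries came back NIL, so the returned partition agrees with the hidden labels everywhere.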
+
+ To the best of our knowledge, CPLearn is the first efficient algorithm that achieves the query upper bound of Theorem 10, even for the special case of SVM margin.
+
+ # 3.3 Wrap-Up
+
+ We wrap up our algorithms, starting with the case $k = 2$ ; the case $k \geq 2$ is slightly more involved.
+
+ Algorithm 2: BinLearn $(X)$
+ $((X_{1},E_{1}),(X_{2},E_{2}))\gets \mathrm{Round}(X)$
+ for $i\gets 1,2$ do
+     change system of coordinates so that $E_{i}$ becomes the unit ball
+     $(X_{i + },X_{i - })\gets \mathrm{CPLearn}(X_i)$ with $h:X_i\to \{1,2\}$
+ return $(X_{1 + }\cup X_{2 - },X_{2 + }\cup X_{1 - })$
+
+ Theorem 11. Suppose $k = 2$ . Then $\operatorname{BinLearn}(X)$ returns $\mathcal{C} = (C_1, C_2)$ in time $\mathrm{poly}(n + m)$ using in expectation $\mathcal{O}(m^2 \log n)$ LABEL queries and $\mathcal{O}(m \log \frac{m}{\gamma})$ SEED queries.
+
+ Proof. By Lemma 8, $\operatorname{Round}(X)$ runs in time $\mathrm{poly}(n + m)$, makes $\mathcal{O}(m^2\log n)$ LABEL queries in expectation, and returns an $\mathcal{O}(m^3)$-rounding of $X$. It is immediate to see that, after the change of coordinates, $X_{i}$ has radius $R\leq 1$, while $C_1\cap X_i$ and $C_2\cap X_i$ are linearly separable with margin $r = \Omega (\gamma m^{-3})$. By Theorem 10, then, $\mathrm{CPLearn}(X_i)$ returns the partition of $X_{i}$ induced by $h$ in time $\mathrm{poly}(|X_i| + m) = \mathrm{poly}(n + m)$ using $\mathcal{O}\big(m\log \frac{R}{r}\big) = \mathcal{O}\big(m\log \frac{m}{\gamma}\big)$ expected SEED queries.
+
+ For $k \geq 2$ we proceed as follows. Let $\mathbf{k} = [k]$ . We take $X_{i}$ for each $i \in \mathbf{k}$ in turn, and for each $j \in \mathbf{k} \setminus i$ , we use CPLearn to compute a separator for $i, j$ in $X_{i}$ . By intersecting the left side of all those separators we obtain $X_{i} \cap C_{i}$ . Then we recurse on $X_{i} \setminus C_{i}$ , updating $\mathbf{k}$ to $\mathbf{k} \setminus i$ . The resulting algorithm KClassLearn is listed below and yields:
+
+ Theorem 12. KClassLearn $(X,[k])$ returns $\mathcal{C}$ in time $\mathrm{poly}(n + m)$ using in expectation $\mathcal{O}(k!k^2 m^2\log n)$ LABEL queries and $\mathcal{O}\big(k!k^2 m\log \frac{m}{\gamma}\big)$ SEED queries.
+
+ Proof. We adapt the proof of Theorem 11. Observe that $\mathrm{KClassLearn}(X,[k])$ makes at most $\min (k!,n)$ recursive calls; the $n$ in the min comes from the fact that any given (recursive) call learns the label of at least one unlabeled point. Now, every (recursive) call makes one invocation to $\operatorname{Round}(X)$ , which by Lemma 8 uses time $\mathrm{poly}(n + m)$ and $\mathcal{O}(k^2 m^2\log n)$ LABEL queries, and $\mathcal{O}(k^2)$ invocations to $\mathrm{CPLearn}(X_i)$ , each of which by Theorem 10 uses $\mathrm{poly}(n + m)$ time and $\mathcal{O}\big(m\log \frac{m}{\gamma}\big)$ SEED queries.
+
+ Algorithm 3: KClassLearn $(X,\mathbf{k})$
+ $k\gets |\mathbf{k}|$
+ if $k = 1$ then query any point of $X$ and label all of $X$ accordingly
+ else
+     $((X_{i},E_{i}))_{i\in [k]}\gets \mathrm{Round}(X)$
+     for $i\in \mathbf{k}$ do
+         change system of coordinates so that $E_{i}$ becomes the unit ball
+         for $j\in \mathbf{k}\setminus i$ do
+             $(C_{ij},\overline{C_{ij}})\gets \mathrm{CPLearn}(X_i)$ with $h:X_i\to \{i,j\}$
+         $\widehat{C}_i\gets \bigcap_{j\in \mathbf{k}\setminus i}C_{ij}$
+         mark all of $\widehat{C}_i$ with label $i$
+         if $X_{i}\setminus \widehat{C}_{i}\neq \emptyset$ then KClassLearn $(X_{i}\setminus \widehat{C}_{i},\mathbf{k}\setminus i)$
+
+ # 4 Lower Bounds
+
+ This section gives a detailed sketch of the proof of Theorem 4, recalled here for convenience:
+
+ Theorem 4. For all $m \geq 2$ , all $k \geq 2$ , and all $\gamma \leq m^{-3/2}/16$ there exists a distribution of instances with $k$ classes in $\mathbb{R}^m$ with strong convex hull margin $\gamma$ where any randomized algorithm using SEED and LABEL queries that returns $\mathcal{C}$ with probability at least $\frac{1}{2}$ makes at least $\left\lfloor \frac{k}{2} \right\rfloor \frac{m}{24} \log \frac{1}{2\gamma}$ total queries in expectation.
+
+ We first give the sketch for $k = 2$, and then extend it to $k \geq 2$. For a full proof see Appendix B.
+
+ Setup. The construction is adapted from Proposition 2 of Thiessen and Gärtner [2021]. Let $e_1, \ldots, e_m$ be the canonical basis of $\mathbb{R}^m$ and let $\ell = \lfloor 1 / \sqrt{2\gamma\sqrt{m}} \rfloor$; note that $\gamma \leq \frac{m^{-3/2}}{16}$ and $m \geq 2$ ensure $\ell \geq 4$. Let $p = m - 1$, and for each $i \in [p]$ and $j \in [\ell]$ define $x_i^j = e_i + j \cdot e_m$. Finally, let $X = \{x_i^j : i \in [p], j \in [\ell]\}$ and define the concept class $\mathcal{H} = \left\{\bigcup_{i \in [p]} \{x_i^1, \ldots, x_i^{\ell_i}\} : (\ell_1, \ldots, \ell_p) \in [\ell]^p\right\}$. Let $\mathcal{C} = (C_1, C_2)$ be any partition of $X$ such that $C_1 \in \mathcal{H}$. One can easily verify that $\mathcal{C}$ has strong convex hull margin $\frac{1}{2\ell^2\sqrt{m}} \geq \gamma$. See Figure 2 for reference.
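The construction is small enough to build explicitly; the sketch below does so for illustrative values $m = 3$, $\ell = 4$ and checks that the concept class indeed has size $\ell^p$.

```python
import itertools
import numpy as np

# The lower-bound instance: p = m - 1 "columns" of ell points each, stacked
# along e_m. A concept C_1 takes a prefix of each column, so |H| = ell^p.
m, ell = 3, 4
p = m - 1
e = np.eye(m)
X = {(i, j): e[i - 1] + j * e[m - 1] for i in range(1, p + 1)
                                     for j in range(1, ell + 1)}

def concept(prefix_lengths):
    # C_1 for a choice (ell_1, ..., ell_p) in [ell]^p
    return frozenset((i, j) for i, L in enumerate(prefix_lengths, start=1)
                            for j in range(1, L + 1))

H = [concept(ls) for ls in itertools.product(range(1, ell + 1), repeat=p)]
```

Since each column contributes an independent choice of prefix length, the initial version space has size exactly $\ell^{m-1}$, which is what the entropy argument below exploits.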
+
+ Query bound. Let $V_0 = \{(C_1, C_2) : C_1 \in \mathcal{H}\}$ . This is the initial version space. We let the target concept $\mathcal{C} = (C_1, C_2)$ be drawn uniformly at random from $V_0$ . Note that for $k = 2$ , any lower bound on the number of SEED queries alone, also holds for any combination of SEED and LABEL queries, as $\mathrm{LABEL}(x)$ can be simulated by $\mathrm{SEED}(x, 1)$ . Thus, without loss of generality, we can assume that the algorithm is only using SEED queries. For all $t = 0, 1, \ldots$ , we denote by $V_t$ the version space after the first $t$ SEED queries made by the algorithm. Now fix any $t \geq 1$ and let $\mathrm{SEED}(U, y)$ be the $t$ -th such query. Without loss of generality we assume $y = 1$ ; a symmetric argument applies to $y = 2$ . If $U \cap C_1$ contains a point $x$ whose label can be inferred from the first $t - 1$ queries, then we return
+
+ ![](images/2454b18f89aacba2625851264c5ffe8b9f60707dd008560538b4b2f468cc5631.jpg)
+ Figure 2: $X$ for $p = 2$ and $\ell = 10$ . Filled points represent the agreement region. The maximum point of $S_{1} \cap C_{1}$ (resp. $S_{2} \cap C_{1}$ ) can be any point in $Z_{1}$ (resp. $Z_{2}$ ). $U$ is a possible query.
+
+ $x$. Therefore we can continue under the assumption that $U$ does not contain any such point (doing otherwise cannot reduce the probability that the algorithm learns nothing). The oracle answers so as to maximize $\frac{|V_t|}{|V_{t-1}|}$, as described below.
+
+ For each $i \in [p]$ let $S_i = \{x_i^j : j \in [\ell]\}$ . We consider $S_i$ as sorted by the index $j$ . Let $Z_i$ be the subset of $S_i$ in the disagreement region of $V_{t-1}$ together with the point in $S_i$ preceding this region; observe that this point always exists, as $x_i^1 \in C_1$ is in the agreement region. Note that $Z_i$ is necessarily an interval of $S_i$ . We let $U_i = Z_i \cap U$ for each $i \in [p]$ and $P(U) = \{i \in [p] : U_i \neq \emptyset\}$ . For every $i \in P(U)$ , we let $\alpha_i$ be the fraction of points of $Z_i$ that precede the first point in $U_i$ . Let $x_i^* = \arg \max \{j : x_i^j \in S_i \cap C_1\}$ . Observe that $|V_{t-1}| = \prod_{i \in [p]} |Z_i|$ . Indeed, $x_i^*$ is uniformly distributed over $Z_i$ ; either $x_i^*$ is a point in the disagreement region of $S_i$ , or the disagreement region of $S_i$ is fully contained in $C_2$ and $x_i^*$ is the point preceding the disagreement region of $S_i$ .
+
+ Now we show that $\mathbb{E}[|V_{t - 1}| / |V_t|]\leq 2$. Let $\mathcal{E}$ be the event that $\mathrm{SEED}(U,1) = \mathrm{NIL}$. Write:
+
+ $$
+ \mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|}\right] = \Pr(\mathcal{E})\,\mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|} \;\middle|\; \mathcal{E}\right] + \Pr(\overline{\mathcal{E}})\,\mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|} \;\middle|\; \overline{\mathcal{E}}\right] \tag{2}
+ $$
+
+ We bound each one of the two terms in the right-hand side.
+
+ For the first term, note that $\mathcal{E}$ holds if and only if $U_{i} \cap C_{1} = \emptyset$ for all $i \in P(U)$ . Since $x_{i}^{*}$ is uniformly distributed over $Z_{i}$ , for all $i \in P(U)$ we have $\operatorname{Pr}(C_{1} \cap U_{i} = \emptyset) = \alpha_{i}$ , and since the distributions of those points are independent, then $\operatorname{Pr}(\mathcal{E}) = \prod_{i \in P(U)} \alpha_{i}$ . If $\operatorname{Pr}(\mathcal{E}) > 0$ and $\mathcal{E}$ holds, then $x_{i}^{*}$ is uniformly distributed over the first $\alpha_{i}|Z_{i}|$ points of $Z_{i}$ , as the rest of $Z_{i}$ belongs to $C_{2}$ . This holds independently for all $i$ , thus:
+
+ $$
+ |V_t| = \left(\prod_{i \in P(U)} \alpha_i |Z_i|\right)\left(\prod_{i \in [p] \setminus P(U)} |Z_i|\right) = \left(\prod_{i \in P(U)} \alpha_i\right)\left(\prod_{i \in [p]} |Z_i|\right) = |V_{t-1}| \prod_{i \in P(U)} \alpha_i \tag{3}
+ $$
+
+ It follows that $\Pr(\mathcal{E})\,\mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|} \;\middle|\; \mathcal{E}\right] \leq 1$.
+
+ Let us turn to the second term. If $\mathcal{E}$ does not hold, then $\mathrm{SEED}(U,1)$ returns the smallest point $x\in U_i$ for any $i\in P(U)$ such that $C_1\cap U_i\neq \emptyset$ (note that necessarily $x\in C_1$ ). For any fixed $i\in P(U)$ , the probability of returning the smallest point of $U_{i}$ is bounded by $\operatorname *{Pr}(C_1\cap U_i\neq \emptyset)$ , which is $1 - \alpha_{i}$ ; and if this is the case, then we have $|V_{t}| = (1 - \alpha_{i})|V_{t - 1}|$ . Thus:
+
+ $$
+ \Pr(\overline{\mathcal{E}})\,\mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|} \;\middle|\; \overline{\mathcal{E}}\right] \leq \Pr(\overline{\mathcal{E}}) \max_{i \in P(U)} (1 - \alpha_i) \frac{1}{1 - \alpha_i} = \Pr(\overline{\mathcal{E}}) \leq 1 \tag{4}
+ $$
+
+ So the two terms of (2) are both bounded by 1; we conclude that $\mathbb{E}\left[\frac{|V_{t - 1}|}{|V_t|}\right]\leq 2$.
+
+ Next, fix any $\bar{t} \geq 1$ and let $\log = \log_2$ . By the concavity of $\log$ and by Jensen's inequality:
+
+ $$
+ \mathbb{E}\left[\log \frac{|V_0|}{|V_{\bar{t}}|}\right] = \mathbb{E}\left[\sum_{t=1}^{\bar{t}} \log \frac{|V_{t-1}|}{|V_t|}\right] = \sum_{t=1}^{\bar{t}} \mathbb{E}\left[\log \frac{|V_{t-1}|}{|V_t|}\right] \leq \sum_{t=1}^{\bar{t}} \log \mathbb{E}\left[\frac{|V_{t-1}|}{|V_t|}\right] \tag{5}
+ $$
+
+ Since $\mathbb{E}\left[\frac{|V_{t - 1}|}{|V_t|}\right] \leq 2$ , the right-hand side is at most $\bar{t}$ . Now, since $|V_0| = \ell^p = \ell^{m - 1}$ , by Markov's inequality, and since $(m - 1)\log \ell - \log 2 \geq \frac{(m - 1)\log\ell}{2} \geq \frac{m\log\ell}{4}$ :
+
+ $$
+ \Pr\left(|V_{\bar{t}}| \leq 2\right) = \Pr\left(\log \frac{|V_0|}{|V_{\bar{t}}|} \geq (m-1)\log \ell - \log 2\right) \leq \frac{4\,\mathbb{E}\left[\log \frac{|V_0|}{|V_{\bar{t}}|}\right]}{m \log \ell} \leq \frac{4\bar{t}}{m \log \ell} \tag{6}
+ $$
+
+ Now let $T$ be the random variable counting the number of queries spent by the algorithm, and let $V_{T}$ be the version space at return time. Since $\mathcal{C}$ is uniform over $V_{T}$ and $\mathcal{C}$ is returned with probability at least $\frac{1}{2}$ , then $\operatorname*{Pr}(|V_T|\leq 2)\geq \frac{1}{2}$ . By (6) and linearity of expectation,
+
+ $$
+ \frac{1}{2} \leq \Pr(|V_T| \leq 2) \leq \sum_{\bar{t} \geq 0} \Pr(T = \bar{t}) \cdot \frac{4\bar{t}}{m \log \ell} = \mathbb{E}[T]\, \frac{4}{m \log \ell} \tag{7}
+ $$
+
+ Therefore $\mathbb{E}[T] \geq \frac{m \log \ell}{4}$. Now, since $\ell \geq 4$, we have $\ell \geq \frac{4}{5\sqrt{2\gamma\sqrt{m}}}$, which since $m \leq (16\gamma)^{-2/3}$ yields, after calculations, $\ell \geq \sqrt[3]{1/\gamma} \cdot \frac{4^{4/3}}{5\sqrt{2}} > 0.89\sqrt[3]{1/\gamma}$. This shows that $\mathbb{E}[T] > \frac{m}{24} \log \frac{1}{2\gamma}$, concluding the proof for $k = 2$.
+
+ Extension to $\mathbf{k} \geq 2$. For each $s \in \left[\left\lfloor \frac{k}{2} \right\rfloor\right]$ and each pair of classes $C_{2s-1}, C_{2s}$, use the construction above shifted along the $m$-th dimension by $(s - 1)\ell$. One can easily verify that learning $\mathcal{C}$ is as hard as learning $\left\lfloor \frac{k}{2} \right\rfloor$ independent binary classifiers, for each of which the bound above holds.
+
+ # 5 Conclusions and Future Work
+
+ We have shown that, with a careful combination of LABEL and SEED queries, one can overcome the limitations of each query type alone and get the "best of both worlds": an algorithm that achieves exponential savings and, simultaneously, has running time polynomial in the dimension of the space. Our work leaves open a few problems. The first is to understand the tradeoff between the two query types: how many LABEL queries does one need if one is allowed only $Q$ SEED queries? The second is whether, for the one-sided case, one can achieve a query rate that is independent of the distortion $\kappa_{d}$, as we did for the multiclass case. The third is whether one can improve the dependence of our bounds on the number $k$ of classes, ideally bounding it by a polynomial.
+
+ # Acknowledgments and Disclosure of Funding
+
+ The authors gratefully acknowledge partial support by the Google Focused Award "Algorithms and Learning for AI" (ALL4AI). Nicolò Cesa-Bianchi is also supported by the MIUR PRIN grant Algorithms, Games, and Digital Markets (ALGADIMAR) and by the EU Horizon 2020 ICT-48 research and innovation action under grant agreement 951847, project ELISE (European Learning and Intelligent Systems Excellence).
+
+ # References
+
+ Dana Angluin. Queries and concept learning. Machine Learning, 2(4):319-342, 1988. doi: 10.1023/A:1022821128753.
+ Josh Attenberg and Foster Provost. Why label when you can search? Alternatives to active learning for applying human resources to build classification models under extreme class imbalance. In Proc. of ACM KDD, page 423-432, 2010. doi: 10.1145/1835804.1835859.
+ Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability. Information Processing Letters, 112(1):49-54, 2012. doi: 10.1016/j.ipl.2011.10.006.
+ Maria Florina Balcan and Steve Hanneke. Robust interactive learning. In Proc. of COLT, volume 23, pages 20.1-20.34, 2012.
+ Alina Beygelzimer, Daniel J Hsu, John Langford, and Chicheng Zhang. Search improves label for active learning. In Advances in Neural Information Processing Systems, volume 29, 2016.
256
+
257
+ Yonatan Bilu and Nathan Linial. Are stable instances easy? Comb. Probab. Comput., 21(5):643-660, September 2012. doi: 10.1017/S0963548312000193.
258
+ Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, and Andrea Paudice. On margin-based cluster recovery with oracle queries. In Advances in Neural Information Processing Systems, volume 34, 2021a.
259
+ Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, and Andrea Paudice. Exact recovery of clusters in finite metric spaces using oracle queries. In Proc. of COLT, volume 134, pages 775-803, 2021b.
260
+ Scott Doyle, James Monaco, Michael Feldman, John Tomaszewski, and Anant Madabhushi. An active learning based classification strategy for the minority class problem: application to histopathology annotation. BMC Bioinformatics, 12(1), 2011. doi: 10.1186/1471-2105-12-424.
261
+ Ran Gilad-Bachrach, Amir Navot, and Naftali Tishby. Bayes and Tukey meet at the center point. In Proc. of COLT, pages 549-563, 2004.
262
+ Alon Gonen, Sivan Sabato, and Shai Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. Journal of Machine Learning Research, 14(1):2583-2615, 2013.
263
+ Lee-Ad Gottlieb, Eran Kaufman, Aryeh Kontorovich, and Gabriel Nivasch. Learning convex polytopes with margin. In Advances in Neural Information Processing Systems, volume 31, 2018.
264
+ Steve Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009. AAI3362265.
265
+ Steve Hanneke and Liu Yang. Minimax analysis of active learning. Journal of Machine Learning Research, 16(12):3487-3602, 2015.
266
+ Max Hopkins, Daniel Kane, and Shachar Lovett. The power of comparisons for actively learning linear classifiers. In Advances in Neural Information Processing Systems, volume 33, pages 6342-6353, 2020.
267
+ Max Hopkins, Daniel Kane, Shachar Lovett, and Michal Moshkovitz. Bounded memory active learning through enriched queries. In Proc. of COLT, pages 2358-2387, 2021.
268
+ D. M. Kane, S. Lovett, S. Moran, and J. Zhang. Active classification with comparison queries. In Proc. of IEEE FOCS, pages 355-366, 2017. doi: 10.1109/FOCS.2017.40.
269
+ Leonid G Khachiyan. Rounding of polytopes in the real number model of computation. Mathematics of Operations Research, 21(2):307-320, 1996. doi: 10.1287/moor.21.2.307.
270
+ Bernhard Korte and Jens Vygen. Combinatorial Optimization. Springer, Berlin, Heidelberg, 2018. doi: 10.1007/978-3-662-56039-6.
271
+ Stephen Kwek and Leonard Pitt. PAC learning intersections of halfspaces with membership queries. Algorithmica, 22(1):53-75, 1998. doi: 10.1007/PL00013834.
272
+ László Lovász and Santosh Vempala. Hit-and-run from a corner. SIAM Journal on Computing, 35(4): 985-1005, 2006. doi: 10.1137/S009753970544727X.
273
+ Wolfgang Maass and György Turán. Lower bound methods and separation results for on-line learning models. Machine Learning, 9(2):107-145, 1992. doi: 10.1007/BF00992674.
274
+ Wolfgang Maass and György Turán. How fast can a threshold gate learn? In Proceedings of a workshop on Computational learning theory and natural learning systems (vol. 1): constraints and prospects, pages 381-414, 1994.
275
+ John E Mitchell. Polynomial interior point cutting plane methods. Optimization Methods and Software, 18(5):507-534, 2003. doi: 10.1080/10556780310001607956.
276
+ Luis A. Rademacher. Approximating the centroid is hard. In Proc. of ACM SoCG, pages 302-305, 2007. doi: 10.1145/1247069.1247123.
277
+ Maximilian Thiessen and Thomas Gärtner. Active learning of convex halfspaces on graphs. In Advances in Neural Information Processing Systems, volume 34, 2021.
278
+
279
+ Simon Tong and Edward Chang. Support vector machine active learning for image retrieval. In Proc. of ACM Multimedia, pages 107-118, 2001. doi: 10.1145/500141.500159.
280
+
281
+ Santosh S. Vempala. Recent progress and open problems in algorithmic convex geometry. In Proc. of FSTTCS, volume 8, pages 42-64, 2010. doi: 10.4230/LIPIcs.FSTTCS.2010.42.
282
+
283
+ Sharad Vikram and Sanjoy Dasgupta. Interactive Bayesian hierarchical clustering. In Proc. of ICML, volume 48, pages 2081-2090, 2016.
284
+
285
+ # Checklist
286
+
287
+ 1. For all authors...
288
+
289
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
290
+ (b) Did you describe the limitations of your work? [Yes] We clearly state the assumptions under which our results hold.
291
+ (c) Did you discuss any potential negative societal impacts of your work? [Yes] This is a purely theoretical work with no direct societal impact, neither positive nor negative.
292
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
293
+
294
+ 2. If you are including theoretical results...
295
+
296
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes]
297
+ (b) Did you include complete proofs of all theoretical results? [Yes]
298
+
299
+ 3. If you ran experiments...
300
+
301
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
302
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
303
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
304
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
305
+
306
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
307
+
308
+ (a) If your work uses existing assets, did you cite the creators? [N/A]
309
+ (b) Did you mention the license of the assets? [N/A]
310
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
311
+
312
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
313
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
314
+
315
+ 5. If you used crowdsourcing or conducted research with human subjects...
316
+
317
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
318
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
319
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
activelearningofclassifierswithlabelandseedqueries/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1824dc16ed9b6d5d58ea8f90bfb345d33ae669f807c05078bfcf4ed3a2aa0ce
3
+ size 112306
activelearningofclassifierswithlabelandseedqueries/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f73c55441285c8665c2d3d99da8a19b20d44efa788f674a6a7e338e802d3085c
3
+ size 930801
activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b472a472932a115046f4c2b81db3b6239a68b5034f999a15fdf2868dd14d2725
3
+ size 83505
activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:54b828dccbebb01348ba03701f69d0e6bf08a56b15c1963cf8510a371cf54e25
3
+ size 106826
activelearningpolynomialthresholdfunctions/cd9cbb09-1885-49c0-b011-29403e1bcaf6_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f77afb2bf3dd67a063dfc9dc5e81a23fa6e71ebe05a8dbd37f21939d5dcb0ca8
3
+ size 285591
activelearningpolynomialthresholdfunctions/full.md ADDED
@@ -0,0 +1,431 @@
 
 
 
 
1
+ # Active Learning Polynomial Threshold Functions
2
+
3
+ # Omri Ben-Eliezer
4
+
5
+ Department of Mathematics
6
+ Massachusetts Institute of Technology
7
+ omrib@mit.edu
8
+
9
+ # Max Hopkins
10
+
11
+ Department of Computer Science and Engineering
12
+ University of California, San Diego
13
+ nmhopkin@eng.ucsd.edu
14
+
15
+ # Chutong Yang
16
+
17
+ Department of Computer Science
18
+ Stanford University
19
+ yct1998@stanford.edu
20
+
21
+ # Hantao Yu
22
+
23
+ Department of Computer Science
+ Columbia University
+ hantao.yu@columbia.edu
24
+
25
+ # Abstract
26
+
27
+ We initiate the study of active learning polynomial threshold functions (PTFs). While traditional lower bounds imply that even univariate quadratics cannot be non-trivially actively learned, we show that allowing the learner basic access to the derivatives of the underlying classifier circumvents this issue and leads to a computationally efficient algorithm for active learning degree- $d$ univariate PTFs in $\tilde{O}(d^3\log(1/\varepsilon\delta))$ queries. We extend this result to the batch active setting, providing a smooth transition between query complexity and rounds of adaptivity, and also provide near-optimal algorithms for active learning PTFs in several average case settings. Finally, we prove that access to derivatives is insufficient for active learning multivariate PTFs, even those of just two variables.
28
+
29
+ # 1 Introduction
30
+
31
+ Today's deep neural networks perform incredible feats when provided sufficient training data. Sadly, annotating enough raw data to train your favorite classifier can often be prohibitively expensive, especially in important scenarios like computer-assisted medical diagnoses where labeling requires the advice of human experts. This issue has led to a surge of interest in active learning, a paradigm introduced to mitigate extravagant labeling costs. Active learning, originally studied by Angluin in 1988 [1], is in essence formed around two basic hypotheses: raw (unlabeled) data is cheap, and not all data is equally useful. The idea is that by adaptively selecting only the most informative data to label, we can get the same accuracy without the prohibitive cost. As a basic example, consider the class of thresholds in one dimension. Identifying the threshold within some $\varepsilon$ accuracy requires about $1 / \varepsilon$ labeled data points, but if we are allowed to adaptively select points we can use binary search to recover the same error in only $\log(1 / \varepsilon)$ labels, an exponential improvement!
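The binary-search strategy described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper; the pool, labeling oracle, and query counter are our own framing:

```python
def active_learn_threshold(pool, label):
    """Actively learn a 1D threshold over a finite unlabeled pool.

    `label(x)` returns -1 or +1; labels are assumed consistent with some
    threshold t (negative below, positive at or above). Binary search asks
    for labels only at the midpoints it probes, so it uses O(log n) queries
    instead of the ~n a passive learner would need.
    """
    pts = sorted(pool)
    lo, hi = 0, len(pts)          # invariant: boundary index lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label(pts[mid]) < 0:   # negative side: boundary is to the right
            lo = mid + 1
        else:
            hi = mid
    # pts[:lo] are inferred negative, pts[lo:] inferred positive
    return lo, queries
```

On a pool of 1000 points this makes at most 10 label queries, versus roughly 1000 for exhaustive labeling, matching the exponential gap noted in the text.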
32
+
33
+ Unfortunately, there's a well-known problem with this approach: it breaks down for most non-trivial classifiers beyond 1D-thresholds [2], providing no asymptotic benefit over standard non-adaptive methods. This has led researchers in recent years to develop a slew of new strategies overcoming this obstacle. We follow an approach pioneered by Kane, Lovett, Moran, and Zhang (KLMZ) [3]: asking more informative questions. KLMZ suggest that if we are modeling access to a human expert, there's no reason to restrict ourselves to asking only about the labels of raw data; rather, we should be allowed access to other natural application-dependent questions as well. They pay particular attention to learning halfspaces in this model via "comparison queries," which given $x, x' \in \mathbb{R}^d$ ask which point is closer to the bounding hyperplane (think of asking a doctor "which patient is more sick?"). Such queries had already shown promise in practice [4-6], and KLMZ proved they could be used
34
+
35
+ to efficiently active learn halfspaces in two dimensions, recovering the exponential improvement seen for 1D-thresholds via binary search. Beyond two dimensions, however, all known techniques either require strong structural assumptions [3, 7] or the introduction of complicated queries [8, 9] requiring infinite precision, a significant limitation in both theory and practice.
36
+
37
+ The study of active learning halfspaces can be naturally viewed as an attempt to extend the classical active learning of 1D-thresholds to higher dimensions. In this work, we take a different approach and instead study the generalization of this problem to higher degrees. In particular, we initiate the study of active learning polynomial threshold functions, classifiers of the form $\mathrm{sign}(p(x))$ for $x\in \mathbb{R}$ and $p$ some underlying univariate polynomial. When the degree of $p$ is 1, this reduces to the class of 1D-thresholds. Similar to halfspaces, standard arguments show that even degree-two univariate PTFs cannot be actively learned. To this end, we introduce derivative queries, a natural class-specific query-type that allows the learner weak access to the derivatives of the underlying PTF $p$ .
38
+
39
+ Derivative queries are well-motivated both in theory and practice. A simple example is the medical setting, where a first-order derivative might correspond to asking "Is patient $X$ recovering, or getting sicker?" Derivatives also play an essential role in our sensory perception of the world. Having two eyes grants us depth perception [10], allowing us to compute low-order derivatives across time-stamps to predict future object positions (e.g. for hunting, collision-avoidance). Multi-viewpoint settings also allow access to low order derivatives by comparing nearby points; one intriguing example is the remarkable sensory echolocation system of bats, which emit ultrasonic waves while moving to learn the structure of their environment [11]. While high order derivatives may be more difficult to compute for a human (or animal) oracle, they still have natural implications in settings such as experimental design where queries are measured mechanically (e.g. automated tests of a self-driving car system might reasonably measure higher order derivatives of positional data). Such techniques have already seen practical success with other query types typically considered too difficult for human annotators (see e.g. the survey of Sverchkov and Craven [12] on automated design in biology).
40
+
41
+ Our main result can be viewed as theoretical confirmation that this type of question is indeed useful: derivative queries are necessary and sufficient for active learning univariate PTFs. We prove that if the learner is allowed access to $\mathrm{sign}(p^{(i)}(x))$ (the $i$ -th order derivative of $p$ ), PTFs are learnable in $O(\log (1 / \varepsilon))$ queries, but require $\Omega (1 / \varepsilon)$ queries if the learner is missing access even to a single relevant derivative. We generalize this upper bound to the batch setting as well, giving a smooth interpolation between query complexity and rounds of communication with data annotators (which have costly overhead in practice).
42
+
43
+ We also study active learning PTFs beyond the worst-case setting. Specifically, we consider a setup in which the learner is promised that both points in $\mathbb{R}$ and the underlying polynomial are drawn from known distributions. We propose a general algorithm for active learning PTFs in this model based on coupon collecting and binary search, and analyze its query complexity across a few natural settings. Notably, our algorithm in this model avoids the use of derivatives altogether, making it better adapted to scenarios like learning natural imagery where we expect the underlying distributions to be nice, but may not have access to higher order information like derivatives. Finally, we note that all of our upper bounds (in both worst and average-case settings) actually hold for the stronger 'perfect' learning model in which the learner aims to query-efficiently label a fixed 'pool' of data with zero error. Perfect learning is equivalent to active learning in the worst-case model [13, 3], but is likely harder in the average-case and requires new insight over standard techniques in our setting.
44
+
45
+ We end our work with a preliminary analysis of active learning multivariate PTFs, where we prove a strong lower bound showing access to derivative information is actually insufficient to active learn even degree-two PTFs in two variables. We leave upper bounds in this challenging regime (e.g. through distributional assumptions, additional enriched queries) as a direction of future research.
46
+
47
+ # 1.1 Background
48
+
49
+ We briefly overview the basic theory of PAC-learning (in both the "passive" and "active" settings) and of the main model we study, perfect learning. We cover these topics in much greater detail in the supplementary materials. PAC-learning, originally introduced by Valiant [14] and Vapnik and Chervonenkis [15], provides a framework for studying the learnability of pairs $(X,H)$ where $X$ is
50
+
51
+ a set and $H = \{h : X \to \{-1, 1\}\}$ is a family of binary classifiers. A class $(X, H)$ is said to be PAC-learnable in $n = n(\varepsilon, \delta)$ samples if for all $\varepsilon, \delta > 0$ , there exists an algorithm $A$ which for all distributions $D$ over $X$ and classifiers $h \in H$ , intakes a labeled sample of size $n$ and outputs a good hypothesis with high probability:
52
+
53
+ $$
54
+ \mathbb{P}_{S \sim D^{n}}\left[\mathrm{err}_{D,h}(A(S, h(S))) \leq \varepsilon\right] \geq 1 - \delta,
55
+ $$
56
+
57
+ where $\mathrm{err}_{D,h}(A(S,h(S))) = \mathbb{P}_{x\sim D}[A(S,h(S))(x)\neq h(x)]$ . Active learning is a modification of the PAC-paradigm where the learner instead draws unlabeled samples, and may choose whether or not they wish to ask for the label of any given point. The goal is to minimize the query complexity $q(\varepsilon ,\delta)$ , which measures the number of queries required to attain the same accuracy guarantees as the standard "passive" PAC-model described above. In the batch setting, the learner may send points to the oracle in batches. This incurs the same query cost as in the standard setting (a batch of $m$ points costs $m$ queries), but allows for a finer-grained analysis of adaptivity through the round complexity $r(\varepsilon ,\delta)$ which measures the total number of batches sent to the oracle.
58
+
59
+ In this work, we study a challenging variant of active learning called perfect learning (variants of which go by many names in the literature, e.g. RPU-learning [16], perfect selective classification [13], and confident learning [3]). In this model, the learner is asked to label an adversarially selected size-$n$ sample from $X$. The query complexity $q(n)$ (respectively round complexity $r(n)$) is the expected number of queries (respectively rounds) required to infer the labels of all $n$ points in the sample. Perfect learning is well known to be equivalent to active learning up to small factors in query complexity in worst-case settings, and is at least as hard as the latter in the average-case [13, 3]. We discuss these connections in more depth in Section 2.
60
+
61
+ In this work, we study the learnability of $(\mathbb{R},H_d)$ , the class of degree (at most) $d$ univariate PTFs. In the worst-case setting, we allow the learner access to derivative queries: for any $x\in \mathbb{R}$ in the learner's sample, they may query $\mathrm{sign}(f^{(i)}(x))$ for any $i = 0,\ldots ,d - 1$ , where $f^{(i)}$ is the $i$ -th derivative of $f$ and $f^{(0)}$ is $f$ itself.
62
+
63
+ # 1.2 Results
64
+
65
+ Our main result is that univariate PTFs can be computationally and query-efficiently learned in the perfect model via derivative queries.
66
+
67
+ Theorem 1.1 (Perfect Learning PTFs). The query complexity of perfect learning $(\mathbb{R},H_d)$ with derivative queries is:
68
+
69
+ $$
70
+ \Omega (d \log n) \leq q (n) \leq O (d ^ {3} \log n).
71
+ $$
72
+
73
+ Furthermore, there is an algorithm achieving this upper bound that runs in time $\tilde{O}(nd)$ .
74
+
75
+ By standard connections with active learning, this implies PTFs are active learnable in query complexity $\Omega(d\log(1/\varepsilon)) \leq q(\varepsilon, \delta) \leq \tilde{O}\left(d^{3}\log\left(\frac{1}{\varepsilon\delta}\right)\right)$ when the learner has access to derivative queries.
76
+
77
+ Theorem 1.1 is based on a deterministic algorithm that iteratively learns each derivative given higher order information. This technique necessarily requires a large amount of adaptivity which can be costly in practice. To mitigate this issue, we give a randomized algorithm that extends Theorem 1.1 to the batch setting, providing a smooth trade-off between (expected) query-optimality and adaptivity.
78
+
79
+ Theorem 1.2 (Perfect Learning PTFs Batch Setting). For any $n \in \mathbb{N}$ and $\alpha \in (1 / \log(n), 1]$ , there exists a randomized algorithm perfectly learning size $n$ subsets of $(\mathbb{R}, H_d)$ in
80
+
81
+ $$
82
+ q (n) \leq O \left(\frac {d ^ {3} n ^ {\alpha}}{\alpha}\right)
83
+ $$
84
+
85
+ expected queries, and
86
+
87
+ $$
88
+ r (n) \leq 1 + \frac {2}{\alpha}
89
+ $$
90
+
91
+ expected rounds of adaptivity. Moreover, the algorithm can be implemented in $\tilde{O}(n)$ expected time.
92
+
93
+ When $\alpha = O(1 / \log (n))$ , this recovers the query complexity of Theorem 1.1 in expectation, but also gives a much broader range of options, e.g. sub-linear query algorithms in $O(1)$ rounds of communication. In fact it is worth noting that even in the former regime the algorithm uses only $O(\log (n))$ total rounds of communication, independent of the underlying PTF's degree. Finally, note that run-time is also near-optimal since $\Omega (n)$ time is required even to read the input.
94
+
95
+ To complement these upper bounds, we also show that PTFs cannot be actively learned at all if the learner is missing access to any derivative.
96
+
97
+ Theorem 1.3 (Perfect Learning PTFs Requires Derivatives). Any learner using label and derivative queries that is missing access to $f^{(i)}$ for some $1 \leq i \leq d - 1$ must make at least
98
+
99
+ $$
100
+ q (n) \geq \Omega (n)
101
+ $$
102
+
103
+ queries to perfectly learn $(\mathbb{R},H_d)$
104
+
105
+ This implies the query complexity of active learning PTFs with any missing derivative is $\Omega(1 / \varepsilon)$ .
106
+
107
+ In some practical scenarios, our worst-case assumption over the choice of distribution over $\mathbb{R}$ and PTF $h\in H_d$ may be unrealistically adversarial. To this end, we also study a natural average case model for perfect learning, where the sample $S\subset \mathbb{R}$ and PTF $h\in H_d$ are promised to come from known distributions. In Section 4, we show derivatives are often unnecessary in this regime by providing a generic label-only algorithm and proving query efficiency in basic scenarios. This is better suited than our worst-case analysis to practical scenarios like learning natural 3D-imagery, where we expect objects to come from nice distributions but don't necessarily have higher order information.
108
+
109
+ We start by considering the basic scenario where both the sample and roots of our PTF are drawn uniformly at random from the interval $[0,1]$ , a distribution we denote by $U_{[0,1]}$ .
110
+
111
+ Theorem 1.4 (Learning PTFs with Uniformly Random Roots). The query complexity of perfect learning $(\mathbb{R},H_d)$ when promised that the sample and roots are chosen from $U_{[0,1]}$ is:
112
+
113
+ $$
114
+ \Omega (d \log n) \leq q (n) \leq O (d ^ {2} \log d \log n).
115
+ $$
116
+
117
+ While studying the uniform distribution is appealing due to its simplicity, similar results can be proved for somewhat more realistic distributions. As an example, we study the case where the (intervals between) roots of our polynomial are drawn from a Dirichlet distribution $\mathrm{Dir}(\alpha)$ , which has pdf:
118
+
119
+ $$
120
+ f (x _ {1}, \dots , x _ {d + 1}) \propto \prod_ {i = 1} ^ {d + 1} x _ {i} ^ {\alpha - 1}
121
+ $$
122
+
123
+ where $x_{i} \geq 0$ and $\sum x_{i} = 1$ . This generalizes drawing a uniformly random point on the $d$ -simplex.
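One concrete way to realize this model is to sample the $d+1$ gaps from a symmetric Dirichlet and take partial sums as roots. This is a sketch under our reading of the setup; the sampler and helper name are our own, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ptf_roots(d, alpha):
    """Draw d roots in (0, 1) whose d+1 gaps (between consecutive roots
    and the interval endpoints) follow Dir(alpha, ..., alpha)."""
    gaps = rng.dirichlet([alpha] * (d + 1))   # nonnegative, sums to 1
    return np.cumsum(gaps)[:-1]               # partial sums give the d roots
```

With `alpha = 1` this draws a uniformly random point on the $d$-simplex, matching the special case mentioned above; larger `alpha` concentrates the gaps around equal spacing.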
124
+
125
+ Theorem 1.5 (Learning PTFs with Dirichlet Roots). The query complexity of perfect learning $(\mathbb{R},H_d)$ when the subsample $S\sim U_{[0,1]}$ and $h\sim \mathrm{Dir}(\alpha)$ is at most
126
+
127
+ $$
128
+ q(n) = O\left(d^{2} \log d \log n\right) \quad \text{for } \alpha = 1,
129
+ $$
130
+
131
+ $$
132
+ q(n) = O\left(d^{2} \log d + d \log n\right) \quad \text{for } \alpha \geq 2, \text{ and}
133
+ $$
134
+
135
+ $$
136
+ q(n) = O(d \log n) \quad \text{for } \alpha \geq \Omega(\log^{2} n).
137
+ $$
138
+
139
+ Moreover, this result is tight for constant $\alpha$ and sufficiently large $n$ .
140
+
141
+ So far we have only discussed univariate PTFs. One might reasonably wonder to what extent our results hold for multivariate PTFs. In fact, we show that derivative queries are insufficient (in the worst-case setting) for learning PTFs of even two variables.
142
+
143
+ Theorem 1.6 (Derivatives Can't Learn Multivariate PTFs). Let $(\mathbb{R}^2, H_2^2)$ denote the class of degree-two, two-variate PTFs. The query complexity of perfectly learning $(\mathbb{R}^2, H_2^2)$ is
144
+
145
+ $$
146
+ q (n) \geq \Omega (n),
147
+ $$
148
+
149
+ even when the learner may query the sign of the gradient and Hessian on any point in its sample.
150
+
151
+ In other words, multivariate PTFs cannot be actively learned via access to basic derivative queries in the worst-case. It remains an interesting open problem whether there exist natural query sets that can learn multivariate PTFs, or whether this issue can be avoided in average-case settings; we leave these questions to future work.
152
+
153
+ # 1.3 Related work
154
+
155
+ Active Learning Halfspaces: While to our knowledge active learning polynomial threshold functions has not been studied in the literature, the closely related problem of learning halfspaces is perhaps one of the best-studied problems in the field, and indeed in learning theory in general. It has long been known that halfspaces cannot be actively learned in the standard model [2], but several series of works have gotten around this fact either by restricting the adversary or empowering the learner. The first of these two methods generally involves forcing the learner to choose a nice marginal distribution over the data, e.g. over the unit sphere [18], unit ball [19], log-concave [20], or more generally $s$-concave distributions [21]. The second approach usually involves allowing the learner to ask some type of additional questions. This encompasses not only KLMZ's [3] notion of enriched queries, but also the original "Membership query" model of Angluin [22], who allowed the learner to query any point in the overall instance space $X$ rather than just on the subsample $S \subset X$. This model is also particularly well-studied for halfspaces, where it is called the point-location problem [17, 23-25, 7, 9], and was actually studied originally by Meyer auf der Heide [17] in the perfect learning model even before Angluin's introduction of active learning.
156
+
157
+ Bounded degree PTFs may be viewed as a special set of halfspaces via the natural embedding to $\{1, x, x^2, \ldots\}$ . Given this fact, it is reasonable to ask why our work is not superseded by these prior methods for learning halfspaces. The answer lies in the fact that the query types used in these works are generally very complicated and require infinite precision. For instance, many use arbitrary membership queries (which are known to behave poorly in practice [26]), and even those that sacrifice on query complexity for simpler queries still require arbitrary precision (e.g. the "generalized comparisons" of [8]). Indeed, learning halfspaces even in three dimensions with a simple query set remains an interesting open problem, and our work can be viewed as partial progress in this direction for sets of points that lie on an embedded low-degree univariate polynomial. For instance, one could learn the set $S = \{(x, 3x^5, 5x^7) : x \in [n]\} \subset \mathbb{R}^3$ with respect to any underlying halfspace $\mathrm{sign}(\langle v, \cdot \rangle + b)$ in $O(\log n)$ queries using access to standard labels and the derivatives of the underlying polynomial.
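The embedding mentioned above is easy to check numerically. The following is an illustrative sketch (the example polynomial and helper name are our own):

```python
import numpy as np

def veronese(x, d):
    """Map x to (1, x, x^2, ..., x^d): sign(p(x)) then equals the sign of
    the linear functional <c, veronese(x)>, where c holds p's coefficients
    (low degree first). A degree-d PTF thus becomes a halfspace in d+1 dims."""
    return np.array([x ** i for i in range(d + 1)])

# p(x) = (x - 1)(x - 2)(x - 3) = -6 + 11x - 6x^2 + x^3
coeffs = np.array([-6.0, 11.0, -6.0, 1.0])
for x in (0.5, 1.5, 2.5, 4.0):
    lhs = np.sign(coeffs @ veronese(x, 3))          # halfspace view
    rhs = np.sign((x - 1) * (x - 2) * (x - 3))      # PTF view
    assert lhs == rhs
```

This is exactly why query complexity, rather than expressiveness, separates PTFs from halfspaces: the embedded points are not in general position, so halfspace lower bounds do not directly transfer.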
158
+
159
+ Active Learning with Enriched Queries: Our work also fits into a long line of recent studies on learning with enriched queries in theory and in practice. As previously mentioned, Angluin's [22] original membership query model can in a sense be viewed as the seminal work in this direction, and many types of problem-specific enriched queries such as comparisons [27, 4-6, 3, 8, 7, 28, 29, 9, 30], cluster-queries [31-39], mistake queries [40], separation queries [41], and more have been studied since. Along with providing exponential improvements in query complexity in theory, many of these query types have also found use in practice [4, 5, 42, 43, 12]. Indeed even complicated queries such as Angluin's original model that cannot be accurately assessed by humans [26] have found significant use in application to automated experimental design, where the relevant oracle is given by precise scientific measurements rather than a human (see e.g. the seminal work of King et al. "The Automation of Science" [43]). While we view first or second order derivatives as reasonable query types for human experts, higher order derivatives are likely more useful in this latter setting, e.g. in application to dynamical systems where one tracks object movement with physical sensors.
160
+
161
+ Average Case Active Learning: The average-case model we study in this work is the 'perfect' or 'zero-error' variant of the average-case active learning model introduced by Dasgupta [2] (and implicitly in earlier work of Kosaraju, Przytycka, and Borgstrom [44]). These works gave a generic greedy algorithm for active learning finite concept classes $(X,H)$ over arbitrary prior distributions whose query complexity is optimal to within a factor of $O(\log(|H|))$ . The exact constants of this approximation were later optimized in the literature on submodular optimization [45], and more recently extended to the batch setting [46]. These works differ substantially from our setting as they focus on giving a generic algorithm for average-case active learning, rather than giving query complexity bounds for any specific class.
162
+
163
+ Perhaps more similar to our general approach are active learning methods based on Hanneke's disagreement coefficient [47], and Balcan, Hanneke, and Wortman's [48] work on active learning rates over fixed instead of worst-case hypotheses. Analysis based on these approaches typically takes advantage of the fact that for a fixed distribution and classifier, the minimum measure of any interval can be considered constant. Our average-case setting can be thought of as a strengthening of this approach in two ways: first we are only promised (weak) concentration bounds on the probability this
164
+
165
+ measure is small, and second we work in the harder perfect learning model. This latter fact is largely what separates our analysis, as naive attempts at combining prior techniques with concentration lead to 'imperfect' algorithms (ones with a small probability of error). Moving from the low-error to zero-error regime is in general a difficult problem,<sup>3</sup> but is important in high-risk applications like medical diagnoses.<sup>4</sup> Fixing this issue requires analysis of a new 'capped' variant of the coupon collector problem, and proving optimal query bounds requires further involved calculation that would be unnecessary in the low-error active regime.
166
+
167
+ # 2 Preliminaries
168
+
169
+ We now cover basic background on learning with enriched queries before sketching the proofs of our main results. Detailed information on all background and full versions of all proofs can be found in the supplementary materials.
170
+
171
+ Learning with Enriched Queries: Recall our learner is allowed to make derivative queries; that is, given $S \subset \mathbb{R}$ , the learner may query $\mathrm{sign}(f^{(i)}(x))$ for any $x \in S$ and $0 \leq i \leq d - 1$ . Such queries have a number of natural interpretations, e.g. the relative distance of objects in image recognition ("is the pedestrian getting closer, or further away?"). Given a PTF $f \in H_d$ and a point $x \in S$ , it is useful to consider the collection of all derivative queries on $x$ , which we call its sign pattern.
172
+
173
+ Definition 2.1 (Sign Pattern). The sign pattern of $x \in \mathbb{R}$ with respect to $f \in H_d$ is the vector in $\{-1, 1\}^{d+1}$ :
174
+
175
+ $$
176
+ \operatorname{SgnPat}(f, x) = \left[ \operatorname{sign}(f(x)), \operatorname{sign}\left(f^{(1)}(x)\right), \ldots, \operatorname{sign}\left(f^{(d)}(x)\right) \right].
177
+ $$
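To make Definition 2.1 concrete, here is a minimal Python sketch (our own illustration, not from the paper) that computes the sign pattern of a univariate polynomial given as a coefficient list; treating $\mathrm{sign}(0) = +1$ is an assumption made purely for simplicity.

```python
def derivative(coeffs):
    # coeffs[i] is the coefficient of x^i; returns the coefficients of f'
    return [i * c for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def sign(v):
    # convention for this sketch: treat 0 as +1
    return 1 if v >= 0 else -1

def sign_pattern(coeffs, x):
    """SgnPat(f, x): the signs of f, f', ..., f^(d) at x."""
    d = len(coeffs) - 1
    pattern = []
    for _ in range(d + 1):
        pattern.append(sign(evaluate(coeffs, x)))
        coeffs = derivative(coeffs)
    return pattern

# f(x) = x^2 - 1: at x = 0 we take the signs of (-1, 0, 2)
print(sign_pattern([-1, 0, 1], 0.0))  # [-1, 1, 1]
```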
178
+
179
+ More generally, given a family of binary queries $Q$ (e.g. labels and derivative queries), let $Q_{h}(T)$ denote the set of all possible query responses to $x \in T$ given $h$ (when $Q$ consists of derivative queries, $Q_{h}(T)$ is the set of sign patterns in $T$ ). Since we can rule out any hypotheses $h' \in H$ such that $Q_{h'}(T) \neq Q_{h}(T)$ , we will be interested in the set of consistent hypotheses, $H|_{Q_{h}(T)}$ , which satisfy $Q_{h'}(T) = Q_{h}(T)$ . We say that $Q_{h}(S)$ infers the label of a point $x \in X$ when $x$ only has one possible label under the set of consistent hypotheses, that is when for some $z \in \{-1, 1\}$ :
180
+
181
+ $$
182
+ \forall h' \in H|_{Q_h(S)}: \quad \operatorname{sign}\left(h'(x)\right) = z.
183
+ $$
184
+
185
+ Inference Dimension: In their seminal work on the enriched query model, KLMZ [3] introduced inference dimension, a combinatorial parameter that exactly characterizes the query complexity of both perfect and active learning under enriched queries.
186
+
187
+ Definition 2.2 (Inference Dimension). The inference dimension of $(X, H)$ with query set $Q$ is the smallest $k$ such that for any subset $S \subset X$ of size $k$ , $\forall h \in H$ , $\exists x \in S$ s.t. $Q_h(S \setminus \{x\})$ infers $x$ . If no such $k$ exists, then we say the inference dimension is $\infty$ .
188
+
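Definition 2.2 can be checked by brute force on small finite classes. The sketch below (our own illustration, with plain label queries standing in for $Q$) computes the inference dimension of thresholds on a line, which comes out to 3: for any hypothesis and any three points, some point is inferred by the labels of the other two.

```python
from itertools import combinations

def infers(known, x, hypotheses):
    """x is inferred when every hypothesis consistent with the known labels
    assigns x the same label."""
    consistent = [h for h in hypotheses
                  if all(h(p) == l for p, l in known.items())]
    return len({h(x) for h in consistent}) == 1

def inference_dimension(points, hypotheses):
    """Brute-force Definition 2.2 on a finite class: the smallest k such that
    for every size-k subset S and every hypothesis h, some x in S is inferred
    by the labels of S \\ {x}."""
    for k in range(1, len(points) + 1):
        if all(
            any(infers({p: h(p) for p in S if p != x}, x, hypotheses) for x in S)
            for S in combinations(points, k)
            for h in hypotheses
        ):
            return k
    return None  # no finite k works on this ground set

# Thresholds on a line with plain label queries.
pts = [1, 2, 3, 4, 5]
H = [lambda x, t=t: 1 if x >= t else -1 for t in (0.5, 1.5, 2.5, 3.5, 4.5, 5.5)]
print(inference_dimension(pts, H))  # 3
```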
189
+ KLMZ proved that query-efficient learning is possible if and only if inference dimension is finite.
190
+
191
+ Theorem 2.3 (Inference Dimension Characterizes Active Learning [3, Theorem 1.5]). Let $(X,H)$ be a class with inference dimension $k$ with respect to query set $Q$ . The expected query complexity of perfectly learning $(X,H)$ is<sup>5</sup>
192
+
193
+ $$
194
+ \Omega(\min(n, k)) \leq q(n) \leq O_k(\log n).
195
+ $$
196
+
197
+ Similarly, the query complexity of actively learning $(X,H)$ is at most
198
+
199
+ $$
200
+ q(\varepsilon, \delta) \leq O_k\left(\log\left(\frac{d}{\varepsilon}\right) + \log\left(\frac{1}{\delta}\right)\right),
201
+ $$
202
+
203
+ and if $k = \infty$ , active learning gives no asymptotic improvement over standard passive bounds:
204
+
205
+ $$
206
+ q (\varepsilon , \delta) \geq \Omega (1 / \varepsilon).
207
+ $$
208
+
209
+ # 3 Worst-Case Active Learning PTFs
210
+
211
+ # 3.1 Upper Bounds
212
+
213
+ In this section, we sketch the proofs of our upper bounds Theorem 1.1 and Theorem 1.2. At a high level, both results follow from the fact that a PTF $f$ can be query-efficiently broken into a small number of monotone segments which act like thresholds. This is done in two main steps. First, we observe that it is possible to break $f$ into a small number of segments sharing the same sign pattern.
214
+
215
+ Lemma 3.1. For any degree- $d$ polynomial $f\in H_d$ and set $S = \{s_1\leq \ldots \leq s_n\}$ , given $\mathrm{sign}(f^{(i)}(x))$ for all $1\leq i\leq d$ and $x\in S$ , it is possible to partition $S$ into $j\leq O(d^2)$ contiguous, disjoint segments
216
+
217
+ $$
218
+ I _ {1} = \left[ s _ {1}, s _ {i _ {1}} \right], I _ {2} = \left[ s _ {i _ {1} + 1}, s _ {i _ {2}} \right], \dots , I _ {j} = \left[ s _ {i _ {j - 1} + 1}, s _ {n} \right]
219
+ $$
220
+
221
+ such that each interval has a fixed sign pattern, i.e. for every $1 \leq \ell \leq j$ and $s, s' \in I_{\ell}$ :
222
+
223
+ $$
224
+ \operatorname{SgnPat}(f^{(1)}, s) = \operatorname{SgnPat}(f^{(1)}, s').
225
+ $$
226
+
227
+ Moreover, this can be done in $O(n(d + \log n))$ time.
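Once the derivative signs are in hand, the partition of Lemma 3.1 is a single left-to-right pass: group maximal runs of consecutive points sharing a sign pattern. A minimal sketch (our own illustration; `pattern_of` stands in for the queried sign pattern of a point):

```python
from itertools import groupby

def partition_by_pattern(points, pattern_of):
    """Split sorted points into maximal contiguous segments on which
    pattern_of is constant; returns the (first, last) endpoints of each."""
    segments = []
    for _, run in groupby(points, key=pattern_of):
        run = list(run)
        segments.append((run[0], run[-1]))
    return segments

# Toy pattern: the sign of x alone.
pts = [-2.0, -1.5, -0.5, 0.5, 1.5, 2.0]
print(partition_by_pattern(pts, lambda x: 1 if x >= 0 else -1))
# [(-2.0, -0.5), (0.5, 2.0)]
```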
228
+
229
+ Second, we observe that $f$ must be monotone on any interval with a fixed pattern.
230
+
231
+ Lemma 3.2. If $f \in H_d$ and $a < b \in \mathbb{R}$ satisfy $\operatorname{SgnPat}(f^{(1)}, a) = \operatorname{SgnPat}(f^{(1)}, b)$ , then $f$ is monotone on $[a, b]$ .
232
+
233
+ Based on these facts, it is not hard to see that Theorem 1.1 is realized by the following iterative algorithm that learns each derivative in a top-down fashion until reaching $f^{(0)}$ itself.
234
+
235
+ Algorithm 1: ITERATIVE-ALGORITHM(f,S)
236
+ Result: Label all points in $S$
237
+ Input: Polynomial $f \in H_d$ , Subset $S \subseteq \mathbb{R}$
238
+ Algorithm:
239
+ 1 Learn $f^{(d-1)}(x)$ by binary search
240
+ 2 $i \gets d - 2$
241
+ 3 while $i \geq 0$ do
242
+ 4 Apply Lemma 3.1 to $f^{(i)}$ , partitioning $S$ into $j \leq O((d - i)^2)$ monotone segments $\{I_\ell\}$
243
+ 5 Learn $f^{(i)}$ by running binary search separately on each $I_\ell$
244
+ 6 $i \gets i - 1$
245
+ 7 end
246
+
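Step 5 of the iterative algorithm reduces to labeling a segment on which the target changes sign at most once. A hedged sketch of that binary search (our own illustration; `query_sign` stands in for the label oracle):

```python
def label_monotone(points, query_sign):
    """Label every point on a segment where the sign flips at most once
    (f monotone there), using O(log n) calls to the sign oracle."""
    first, last = query_sign(points[0]), query_sign(points[-1])
    if first == last:
        return [first] * len(points)
    # Invariant: points[lo] has sign `first`, points[hi] has sign `last`.
    lo, hi = 0, len(points) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if query_sign(points[mid]) == first:
            lo = mid
        else:
            hi = mid
    return [first] * (lo + 1) + [last] * (len(points) - lo - 1)

print(label_monotone(list(range(10)), lambda x: -1 if x < 4 else 1))
# [-1, -1, -1, -1, 1, 1, 1, 1, 1, 1]
```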
247
+ The iterative approach gives a simple, deterministic technique for learning PTFs with derivative queries, but comes at the cost of a high level of adaptivity. We now give a simple randomized algorithm that smoothly interpolates between the query-efficient and low-adaptivity regimes.
248
+
249
+ Algorithm 2 is a batch variant of KLMZ's original algorithm underlying Theorem 2.3. The following batch variant of their upper bound follows from similar analysis (setting $m$ above to $2kn^{\alpha}$ ).
250
+
251
+ Theorem 3.3 (Inference Dimension $\rightarrow$ Batch Active Learning). Let $(X,H)$ be a class with inference dimension $k$ with respect to query set $Q$ . Then for any $n\in \mathbb{N}$ , $\alpha \in (1 / \log (n),1]$ , and size- $n$ subset $S\subset X$ , BATCH-KLMZ labels all of $S$ in
252
+
253
+ $$
254
+ q(n) \leq \frac{2\, Q_{tot}\left(2 k n^{\alpha}\right)}{\alpha}
255
+ $$
256
+
257
+ expected queries, and only
258
+
259
+ $$
260
+ r (n) \leq 1 + \frac {2}{\alpha}
261
+ $$
262
+
263
+ expected rounds of adaptivity, where $Q_{tot}(m)$ is the number of queries available on a set of $m$ points.
264
+
265
+ To prove Theorem 1.2, it therefore suffices to show that PTFs have bounded inference dimension with respect to derivative queries. In fact, this is essentially immediate from Lemma 3.1 and Lemma 3.2. Any $f \in H_d$ can be broken up into $O(d^2)$ monotonic regions based on sign pattern. Given any three points in such a region, the middle one can always be inferred, so by the pigeonhole principle we have:
266
+
267
+ Lemma 3.4. The inference dimension of $(\mathbb{R},H_d)$ with derivative queries is $O(d^{2})$ .
268
+
269
+ In the supplementary materials, we additionally prove an $\Omega(d)$ lower bound on inference dimension.
270
+
271
+ Algorithm 2: BATCH-KLMZ(S,m)
272
+ Result: Labels all points in $S$
273
+ Input: Class $(X,H)$ , Subset $S \subseteq X$ , Query set $Q$ , Query Oracle $O_Q$
274
+ Parameters: Inference dimension $k$ , Batch size $m$ , Iteration cutoff $t = \frac{\log(n)}{\log(\frac{m}{2k})}$
275
+ Algorithm:
276
+ 8 $S_0 \gets S$
277
+ 9 for i in range t do
278
+ 10 $T \gets \{\}$
+ 11 while $Q_h(T)$ infers less than a $\frac{m - 2k}{m}$ fraction of $S_i$ do
+ 12 Sample $T \sim S_i^m$
+ 13 Query $T$ : $Q_h(T) \gets O_Q(T)$
+ 14 end
+ 15 $S_{i + 1} \gets \{x \in S_i : Q_h(T) \text{ does not infer } x\}$
+ 16 if $|S_{i + 1}| \leq m$ then
+ 17 Query $O_Q(S_{i + 1})$
+ 18 Return
+ 19 end
+ 20 end
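In Python-flavored pseudocode, one outer round of this batch scheme looks as follows. This is our own illustrative sketch on a toy threshold class: `oracle` and `infer` are hypothetical stand-ins for $O_Q$ and the inference check, and we sample without replacement for simplicity.

```python
import random

def batch_klmz_round(pool, oracle, infer, m, k):
    """One outer iteration: sample a batch of m points, and resample until its
    responses infer at least a (m - 2k)/m fraction of the pool; return the
    uninferred survivors and the number of queries spent."""
    queries = 0
    while True:
        T = random.sample(pool, min(m, len(pool)))
        responses = oracle(T)
        queries += len(T)
        survivors = [x for x in pool if not infer(responses, x)]
        if len(survivors) <= (2 * k / m) * len(pool):
            return survivors, queries

# Toy instance: thresholds on a line, where a queried -1 infers everything to
# its left and a queried +1 everything to its right.
def oracle(T):
    return [(x, -1 if x < 500 else 1) for x in T]

def infer(responses, x):
    return any((l == -1 and x <= p) or (l == 1 and x >= p) for p, l in responses)

random.seed(3)
survivors, q = batch_klmz_round(list(range(1000)), oracle, infer, m=20, k=2)
print(len(survivors), q)  # a small gap of uninferred points around the threshold
```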
279
+
280
+ # 3.2 Lower Bounds
281
+
282
+ We briefly discuss the proof techniques behind our lower bounds in Theorem 1.1 and Theorem 1.3. The former follows from a standard information theoretic argument, noting that degree- $d$ PTFs induce $n^{\Omega(d)}$ possible labelings of any $n$ -point subset. For the latter, let $Q_{\hat{i}}$ denote the family of derivative queries without the $i$ th derivative. Theorem 2.3 shows it is enough to prove the inference dimension of $(\mathbb{R}, H_d)$ with respect to $Q_{\hat{i}}$ is infinite. By a simple induction, it is sufficient to consider $i = d - 1$ . This is done by exhibiting, for all $n \in \mathbb{N}$ , a family of PTFs $\{h, p_1, \ldots, p_n\}$ and points $S = \{s_1, \ldots, s_n\}$ such that $h$ and $p_j$ are indistinguishable on $S \setminus \{s_j\}$ with respect to $Q_{\widehat{d-1}}$ , but $h(s_j) \neq p_j(s_j)$ . We achieve such a construction by building a set of polynomials where each $p_j^{(d-1)}$ is sufficiently negative around $s_j$ to force the function to flip sign, but the points are sufficiently spread out that this cannot be detected at any other $s_{j'}$ for $j' \neq j$ .
283
+
284
+ # 4 Average-Case Active Learning PTFs
285
+
286
+ We now sketch the proof of our average-case results. We briefly recall the model: given distributions $D_X$ over $\mathbb{R}$ and $D_H$ over $H_{d}$ , we are interested in analyzing the expected number of queries needed to infer all labels of a sample $S\sim D_X$ with respect to $h\sim D_H$ . We now present a simple generic algorithm for this problem, which we call "Sample and Search." Given a sample $S\subset \mathbb{R}$ :
287
+
288
+ 1. Query the label (sign) of points from $S$ uniformly at random until either:
289
+
290
+ (a) We have queried all $n$ points in $S$ .
291
+ (b) We see $d$ sign flips in the queried points, i.e. we have queried $x_{1},\ldots ,x_{k}$ and there exist indices $i_1 < \dots < i_{d + 1}$ such that
292
+
293
+ $$
+ \operatorname{sign}\left(f\left(x_{i_j}\right)\right) \neq \operatorname{sign}\left(f\left(x_{i_{j+1}}\right)\right) \quad \text{for } j = 1, \ldots, d.
+ $$
300
+
301
+ 2. If (b) occurred in the previous step, perform binary search on the points in $S$ between each pair $(x_{i_j}, x_{i_{j+1}})$ to find the sign threshold (and thereby labels) in that interval.
302
+
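The two steps above can be sketched end to end in a few lines. This is our own illustration (not the paper's implementation); `sign_of` stands in for the label oracle, and we assume the signs along the sorted points flip at most $d$ times, as for a degree- $d$ PTF.

```python
import random

def sample_and_search(points, sign_of, d):
    """Sample and Search on sorted points: query random labels until d sign
    flips appear (or the pool is exhausted), then binary search each flip
    interval. Returns (all labels, number of label queries used)."""
    n = len(points)
    labels = {}
    queries = 0
    for idx in random.sample(range(n), n):  # uniform order, no repeats
        labels[idx] = sign_of(points[idx])
        queries += 1
        known = sorted(labels)
        if sum(labels[a] != labels[b] for a, b in zip(known, known[1:])) == d:
            break
    # Binary search each flip interval down to a pair of adjacent indices.
    snapshot = sorted(labels)
    for a, b in zip(snapshot, snapshot[1:]):
        if labels[a] != labels[b]:
            lo, hi = a, b
            while hi - lo > 1:
                mid = (lo + hi) // 2
                labels[mid] = sign_of(points[mid])
                queries += 1
                if labels[mid] == labels[lo]:
                    lo = mid
                else:
                    hi = mid
    # All flips are now localized, so every unqueried point shares the sign of
    # its nearest queried neighbor on the left (or of the leftmost query).
    known = sorted(labels)
    full = []
    for i in range(n):
        if i in labels:
            full.append(labels[i])
        else:
            left = max((p for p in known if p < i), default=known[0])
            full.append(labels[left])
    return full, queries

pts = list(range(40))
full, q = sample_and_search(pts, lambda x: -1 if x < 17 else 1, d=1)
print(full == [-1] * 17 + [1] * 23, q)
```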
303
+ It is not hard to see that Sample and Search infers all labels of points in $S$ by construction. The main challenge lies in analyzing its query complexity, and in particular step 1, which can be thought of as a variant of the classical coupon collector (CC) problem. In our setting, the "coupons" are the intervals between adjacent roots, and each coupon's probability is the mass of the marginal distribution on that interval. With this in mind, let $Y$ be the random variable measuring the number of samples required to hit each interval (coupon) at least once, and let $Z = \min(Y, n)$ .
304
+
305
+ Proposition 4.1. The expected query complexity of the Sample and Search Algorithm is at most:
306
+
307
+ $$
308
+ q (n) \leq \mathbb {E} _ {D _ {X}, D _ {H}} [ Z ] + d \log n.
309
+ $$
310
+
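To see the effect of the cap $Z = \min(Y, n)$ numerically, the following Monte Carlo sketch (our own illustration) simulates the $d = 1$ case of a single uniform root on $[0,1]$ , with $n$ uniform points and uniform queries with replacement.

```python
import random

def capped_hitting_time(n):
    """Simulate Z = min(Y, n) for d = 1: draw one uniform root r and a pool
    of n uniform points, then count uniform draws from the pool until both
    sides of r that are represented in the pool have been hit."""
    r = random.random()
    pool = [random.random() for _ in range(n)]
    need_left = any(p < r for p in pool)
    need_right = any(p >= r for p in pool)
    seen_left = seen_right = False
    for t in range(1, n + 1):
        p = random.choice(pool)
        seen_left = seen_left or p < r
        seen_right = seen_right or p >= r
        if (seen_left or not need_left) and (seen_right or not need_right):
            return t
    return n  # the cap: Y may be infinite, Z never exceeds n

random.seed(1)
n = 1000
avg = sum(capped_hitting_time(n) for _ in range(500)) / 500
print(avg)  # far below n, consistent with E[Z] = O(log n)
```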
311
+ It is worth noting that $\mathbb{E}_{D_X,D_H}[Z]$ and $\mathbb{E}_{D_X,D_H}[Y]$ can differ drastically. As a basic example, consider the case where $d = 1$ and we draw our $n$ points and one root uniformly at random from [0, 1]. It is a simple exercise to show that $\mathbb{E}_{D_X,D_H}[Y] = \infty$ , whereas $\mathbb{E}_{D_X,D_H}[Z] = O(\log n)$ . With this in mind, we finish the section by sketching the proof of our average-case results. We focus on the setting of the uniform distribution (Theorem 1.4). The Dirichlet case (Theorem 1.5) follows similar overall ideas, but requires more involved calculation and machinery to deal with dependence of the roots. In both cases, however, the first step is to observe the following standard bound on $Y$ conditional on the roots being well separated.
312
+
313
+ Lemma 4.2. For any $x \in \mathbb{R}_+$ , let $E_x$ denote the event that $f \sim D_H$ has measure at least $\frac{1}{x}$ over $D_X$ between any two adjacent roots, the leftmost root and 0, and the rightmost root and 1. Then:
314
+
315
+ $$
316
+ \mathbb {E} _ {D _ {X}, D _ {H}} \left[ Y \mid E _ {x} \right] \leq O (x \log d).
317
+ $$
318
+
319
+ To analyze the capped variable $Z = \min \{Y, n\}$ , we expand the expectation and cap the integral:
320
+
321
+ $$
322
+ \mathbb{E}_{D_X, D_H}[Z] = \int_0^{\infty} \left(1 - \mathbb{P}_{D_H}\left[\mathbb{E}_{D_X}[Z] \leq x\right]\right) dx \leq (d + 1) + \int_{d+1}^{n} \left(1 - \mathbb{P}_{D_H}\left[\mathbb{E}_{D_X}[Y] \leq x\right]\right) dx. \tag{1}
323
+ $$
324
+
325
+ By Lemma 4.2, the righthand probability is lower bounded by the probability the minimum interval measure (denoted M) is $\Omega\left(\frac{\log d}{x}\right)$ , which can be computed directly in the uniform setting:
326
+
327
+ $$
328
+ \mathbb{P}_{D_H}\left[\mathbb{E}_{D_X}[Y] \leq x\right] \geq \mathbb{P}\left[M \geq \frac{c \log d}{x}\right] = \left(1 - \frac{c(d - 1)\log d}{x}\right)^d. \tag{2}
329
+ $$
330
+
331
+ Plugging this back into Equation (1) and computing the integral gives $\mathbb{E}[Z] \leq O(d^2 \log(n))$ . To prove the corresponding lower bound for this problem (appearing in Theorem 1.4), we appeal to classic information theoretic techniques. In particular, the expected number of binary queries to reveal the labels of $S$ cannot be less than the entropy of the resulting distribution over labelings. In the uniform case, one can directly show the entropy is $\Omega(d \log(n))$ , so Sample and Search is off from the information theoretic optimum by at most a factor of $d$ .
332
+
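As a numerical sanity check (our own, with the arbitrary choice $c = 1$ for the unnamed constant of Lemma 4.2), one can evaluate the right-hand side of Equation (1) under the bound (2) and watch it grow polynomially in $d$ and only logarithmically in $n$ :

```python
import math

def rhs_bound(d, n, c=1.0):
    """Numerically evaluate (d + 1) plus the capped integral of Equation (1),
    using the uniform-case probability bound (2); c stands in for the unnamed
    constant of Lemma 4.2."""
    a = c * (d - 1) * max(math.log(d), 1.0)
    total, x, step = float(d + 1), float(d + 1), 0.5
    while x < n:
        total += (1.0 - max(1.0 - a / x, 0.0) ** d) * step
        x += step
    return total

n = 10 ** 5
vals = {d: rhs_bound(d, n) for d in (2, 4, 8)}
for d, v in vals.items():
    print(d, round(v, 1))
```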
333
+ We briefly remark on the challenges moving to the Dirichlet distribution. Due to the dependence of roots in this setting, bounding the minimum measure $M$ in Equation (2) becomes substantially more difficult. For small $\alpha$ , we union bound over each variable and directly analyze the marginal distributions. For large $\alpha$ , we appeal to strong Bernstein-type concentration bounds on Beta distributions of Skorski [49]. The lower bound follows from a similar appeal to information theoretic techniques. The trick in this case is to observe the induced label distribution is exactly the well-studied "Dirichlet-Multinomial" distribution whose asymptotic entropy is known (see e.g. [50, Theorem 2]).
334
+
335
+ # 5 Beyond Univariate PTFs
336
+
337
+ Finally, we sketch the proof of Theorem 1.6, our lower bound on learning two-variate quadratics with derivative queries. Recall in this setting the learner is allowed to make gradient queries of the form $\mathrm{sign}\left(\frac{\partial f}{\partial x}(x_1,y_1),\frac{\partial f}{\partial y}(x_1,y_1)\right)$ , and Hessian queries of the form $\mathrm{sign}\left(\frac{\partial^2f}{\partial x\partial x} (x_1,y_1),\frac{\partial^2f}{\partial x\partial y} (x_1,y_1),\frac{\partial^2f}{\partial y\partial x} (x_1,y_1),\frac{\partial^2f}{\partial y\partial y} (x_1,y_1)\right)$ for any $(x_{1},y_{1})\in \mathbb{R}^{2}$ in the learner's sample (along with standard label queries).
338
+
339
+ As in our worst-case lower bound for missing derivatives, we prove the inference dimension of this class is infinite. Our construction consists of $n$ points distributed evenly on the quarter circle of radius 1 in the first quadrant. Consider the functions $-x^{2} - y^{2} \pm \epsilon xy$ (with $\epsilon$ small enough), which have negative values for all label queries, gradient queries, and diagonal entries of the Hessian, and either all positive or all negative values for the off-diagonal. The goal is then to find functions $h_{i}$ which flip the label on the $i$ th point, but are indistinguishable from one of $-x^{2} - y^{2} \pm \epsilon xy$ elsewhere (at least half the points are therefore indistinguishable from one of these choices, which gives an inference dimension lower bound of $n / 2$ for any $n \in \mathbb{N}$ ).
340
+
341
+ The idea is to consider rotations of the function $f = xy - c_{1}y^{2}$ , which is only positive for a small sector dependent on $c_{1}$ . $h_i$ is chosen to be the rotation where this sector contains only the $i$ th point. To ensure the derivatives and diagonal Hessian values remain negative, we subtract $c_{2}(x^{2} + y^{2} - 1)$ which does not change the value on the unit circle. Since the Hessian is symmetric for these functions, they also agree with one of $-x^{2} - y^{2} \pm \epsilon xy$ on the off-diagonal giving the desired result.
342
+
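A quick numerical check of the key geometric fact (our own, not from the paper): on the unit circle, $f(x,y) = xy - c_1 y^2$ is positive exactly on the thin sector $0 < \theta < \arctan(1/c_1)$ , so a suitable $c_1$ isolates a single sample point on the quarter circle.

```python
import math

c1 = 10.0  # arctan(1/10) ~ 0.0997 rad: wider than the first sample angle, narrower than the second

def f_on_circle(theta):
    x, y = math.cos(theta), math.sin(theta)
    return x * y - c1 * y * y

n = 10
angles = [(math.pi / 2) * (i + 0.5) / n for i in range(n)]  # n points on the quarter circle
positive = [i for i, t in enumerate(angles) if f_on_circle(t) > 0]
print(positive)  # [0]
```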
343
+ # References
344
+
345
+ [1] Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988.
346
+ [2] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in neural information processing systems, pages 337-344, 2005.
347
+ [3] Daniel M Kane, Shachar Lovett, Shay Moran, and Jiapeng Zhang. Active classification with comparison queries. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 355-366. IEEE, 2017.
348
+ [4] Amin Karbasi, Stratis Ioannidis, et al. Comparison-based learning with rank nets. arXiv preprint arXiv:1206.4674, 2012.
349
+ [5] Fabian L Wauthier, Nebojsa Jojic, and Michael I Jordan. Active spectral clustering via iterative uncertainty reduction. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1339-1347, 2012.
350
+ [6] Yichong Xu, Hongyang Zhang, Kyle Miller, Aarti Singh, and Artur Dubrawski. Noise-tolerant interactive learning using pairwise comparisons. In Advances in Neural Information Processing Systems, pages 2431-2440, 2017.
351
+ [7] Max Hopkins, Daniel Kane, and Shachar Lovett. The power of comparisons for actively learning linear classifiers. Advances in Neural Information Processing Systems, 33, 2020.
352
+ [8] Daniel Kane, Shachar Lovett, and Shay Moran. Generalized comparison trees for point-location problems. In International Colloquium on Automata, Languages and Programming, 2018.
353
+ [9] Max Hopkins, Daniel Kane, Shachar Lovett, and Gaurav Mahajan. Point location and active learning: Learning halfspaces almost optimally. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 1034-1044. IEEE, 2020.
354
+ [10] Myron L Braunstein. Depth perception through motion. Academic Press, 2014.
355
+ [11] Gareth Jones and Emma C. Teeling. The evolution of echolocation in bats. Trends in Ecology & Evolution, 21(3):149-156, 2006.
356
+ [12] Yuriy Sverchkov and Mark Craven. A review of active learning approaches to experimental design for uncovering biological networks. PLoS computational biology, 13(6):e1005466, 2017.
357
+ [13] Ran El-Yaniv and Yair Wiener. Active learning via perfect selective classification. Journal of Machine Learning Research, 13(Feb):255-279, 2012.
358
+ [14] Leslie G Valiant. A theory of the learnable. In Proceedings of the sixteenth annual ACM symposium on Theory of computing, pages 436-445. ACM, 1984.
359
+ [15] Vladimir Vapnik and Alexey Chervonenkis. Theory of pattern recognition, 1974.
360
+ [16] Ronald L Rivest and Robert H Sloan. Learning complicated concepts reliably and usefully. In AAAI, pages 635-640, 1988.
361
+ [17] Friedhelm Meyer auf der Heide. A polynomial linear search algorithm for the n-dimensional knapsack problem. In Annual ACM Symposium on Theory of Computing: Proceedings of the fifteenth annual ACM symposium on Theory of computing, volume 1983, pages 70-79, 1983.
362
+ [18] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78-89, 2009.
363
+ [19] Maria-Florina Balcan, Andrei Broder, and Tong Zhang. Margin based active learning. In International Conference on Computational Learning Theory, pages 35-50. Springer, 2007.
364
+ [20] Maria-Florina Balcan and Phil Long. Active and passive learning of linear separators under log-concave distributions. In Conference on Learning Theory, pages 288-316, 2013.
365
+ [21] Maria-Florina Balcan and Hongyang Zhang. Sample and computationally efficient learning algorithms under s-concave distributions. arXiv preprint arXiv:1703.07758, 2017.
366
+
367
+ [22] Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343-370, 1988.
368
+ [23] Stefan Meiser. Point location in arrangements of hyperplanes. Information and Computation, 106(2):286-303, 1993.
369
+ [24] Jean Cardinal, John Iacono, and Aurélien Ooms. Solving k-SUM using few linear queries. arXiv preprint arXiv:1512.06678, 2015.
370
+ [25] Esther Ezra and Micha Sharir. A nearly quadratic bound for point-location in hyperplane arrangements, in the linear decision tree model. Discrete & Computational Geometry, 61(4):735-755, 2019.
371
+ [26] Eric B Baum and Kenneth Lang. Query learning can work poorly when a human oracle is used. In International joint conference on neural networks, volume 8, page 8, 1992.
372
+ [27] Kevin G Jamieson and Robert Nowak. Active ranking using pairwise comparisons. In Advances in neural information processing systems, pages 2240-2248, 2011.
373
+ [28] Max Hopkins, Daniel Kane, Shachar Lovett, and Gaurav Mahajan. Noise-tolerant, reliable active classification with comparison queries. In Conference on Learning Theory, pages 1957-2006. PMLR, 2020.
374
+ [29] Zhenghang Cui and Issei Sato. Active classification with uncertainty comparison queries. arXiv preprint arXiv:2008.00645, 2020.
375
+ [30] Max Hopkins, Daniel Kane, Shachar Lovett, and Michal Moshkovitz. Bounded memory active learning through enriched queries. arXiv preprint arXiv:2102.05047, 2021.
376
+ [31] Hassan Ashtiani, Shrinu Kushagra, and Shai Ben-David. Clustering with same-cluster queries. arXiv preprint arXiv:1606.02404, 2016.
377
+ [32] Sharad Vikram and Sanjoy Dasgupta. Interactive bayesian hierarchical clustering. In International Conference on Machine Learning, pages 2081-2090, 2016.
378
+ [33] Vasilis Verroios, Hector Garcia-Molina, and Yannis Papakonstantinou. Waldo: An adaptive human interface for crowd entity resolution. In Proceedings of the 2017 ACM International Conference on Management of Data, pages 1133-1148, 2017.
379
+ [34] Arya Mazumdar and Barna Saha. Clustering with noisy queries. In Advances in Neural Information Processing Systems, pages 5788-5799, 2017.
380
+ [35] Nir Ailon, Anup Bhattacharya, and Ragesh Jaiswal. Approximate correlation clustering using same-cluster queries. In Latin American Symposium on Theoretical Informatics, pages 14-27. Springer, 2018.
381
+ [36] Donatella Firmani, Sainyam Galhotra, Barna Saha, and Divesh Srivastava. Robust entity resolution using a crowdoracle. IEEE Data Eng. Bull., 41(2):91-103, 2018.
382
+ [37] Sanjoy Dasgupta, Akansha Dey, Nicholas Roberts, and Sivan Sabato. Learning from discriminative feature feedback. Advances in Neural Information Processing Systems, 31:3955-3963, 2018.
383
+ [38] Barna Saha and Sanjay Subramanian. Correlation clustering with same-cluster queries bounded by optimal cost. arXiv preprint arXiv:1908.04976, 2019.
384
+ [39] Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, and Andrea Paudice. Exact recovery of mangled clusters with same-cluster queries. arXiv preprint arXiv:2006.04675, 2020.
385
+ [40] Maria Florina Balcan and Steve Hanneke. Robust interactive learning. In Conference on Learning Theory, pages 20-1, 2012.
386
+ [41] Sariel Har-Peled, Mitchell Jones, and S. Rahul. Active learning a convex body in low dimensions. In ICALP, 2020.
387
+
388
+ [42] Buyue Qian, Xiang Wang, Fei Wang, Hongfei Li, Jieping Ye, and Ian Davidson. Active learning from relative queries. In Twenty-Third International Joint Conference on Artificial Intelligence. Citeseer, 2013.
389
+ [43] Ross D King, Jem Rowland, Stephen G Oliver, Michael Young, Wayne Aubrey, Emma Byrne, Maria Liakata, Magdalena Markham, Pinar Pir, Larisa N Soldatova, et al. The automation of science. Science, 324(5923):85-89, 2009.
390
+ [44] S Rao Kosaraju, Teresa M Przytycka, and Ryan Borgstrom. On an optimal split tree problem. In Workshop on Algorithms and Data Structures, pages 157-168. Springer, 1999.
391
+ [45] Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In $COLT$ , pages 333-345. Citeseer, 2010.
392
+ [46] Hossein Esfandiari, Amin Karbasi, and Vahab Mirrokni. Adaptivity in adaptive submodularity. In Conference on Learning Theory, pages 1823-1846. PMLR, 2021.
393
+ [47] Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th international conference on Machine learning, pages 353-360, 2007.
394
+ [48] Maria-Florina Balcan, Steve Hanneke, and Jennifer Wortman Vaughan. The true sample complexity of active learning. Machine learning, 80(2):111-139, 2010.
395
+ [49] Maciej Skorski. Bernstein-type bounds for beta distribution. arXiv preprint arXiv:2101.02094, 2021.
396
+ [50] Krzysztof Turowski, Philippe Jacquet, and Wojciech Szpankowski. Asymptotics of entropy of the dirichlet-multinomial distribution. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 1517-1521, 2019.
397
+
398
+ # Checklist
399
+
400
+ 1. For all authors...
401
+
402
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
403
+ (b) Did you describe the limitations of your work? [Yes]
404
+ (c) Did you discuss any potential negative societal impacts of your work? [N/A]
405
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
406
+
407
+ 2. If you are including theoretical results...
408
+
409
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes]
410
+ (b) Did you include complete proofs of all theoretical results? [Yes]
411
+
412
+ 3. If you ran experiments...
413
+
414
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
415
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
416
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
417
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
418
+
419
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
420
+
421
+ (a) If your work uses existing assets, did you cite the creators? [N/A]
422
+ (b) Did you mention the license of the assets? [N/A]
423
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
424
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
425
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
426
+
427
+ 5. If you used crowdsourcing or conducted research with human subjects...
428
+
429
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
430
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
431
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
activelearningpolynomialthresholdfunctions/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1aa6362bd6f37966be291028a9d5131a59f9896dab1ed14f98fe8f3129acdedb
3
+ size 120787
activelearningpolynomialthresholdfunctions/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fd6452916be2f04ba199f736e7b4a586d7dd72fd0bf6684ec56667caa4b2cfd5
3
+ size 585265
activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e5d09f9d40fa7577b673997e37480788cfc9cae6d931206b989d4edf1021f68
3
+ size 83630
activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74271dfc96f42c48e7e06a5a0e2b03eafa95a46f719cafe0f54efbef76f000a3
3
+ size 112297
activelearningthroughacoveringlens/ba2c5b63-9688-4cda-8726-15c897314b0a_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8275cdf6be672d0dc1c1aa471bcc7db23c5d42217b8b15dc77f8fe5ad4bf2725
3
+ size 1794926
activelearningthroughacoveringlens/full.md ADDED
@@ -0,0 +1,424 @@
1
+ # Active Learning Through a Covering Lens
2
+
3
+ Ofer Yehuda†, Avihu Dekel†, Guy Hacohen†‡, Daphna Weinshall†
4
+ School of Computer Science & Engineering†
5
+ Edmond and Lily Safra Center for Brain Sciences‡
6
+ The Hebrew University of Jerusalem
7
+ Jerusalem 91904, Israel
8
+ {offer.yehuda,avihu.dekel,guy.hacohen,daphna}@mail.huji.ac.il
9
+
10
+ # Abstract
11
+
12
+ Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry. Until recently, deep active learning methods were ineffectual in the low-budget regime, where only a small number of examples are annotated. The situation has been alleviated by recent advances in representation and self-supervised learning, which impart the geometry of the data representation with rich information about the points. Taking advantage of this progress, we study the problem of subset selection for annotation through a "covering" lens, proposing ProbCover – a new active learning algorithm for the low budget regime, which seeks to maximize Probability Coverage. We then describe a dual way to view the proposed formulation, from which one can derive strategies suitable for the high budget regime of active learning, related to existing methods like Coreset. We conclude with extensive experiments, evaluating ProbCover in the low-budget regime. We show that our principled active learning strategy improves the state-of-the-art in the low-budget regime in several image recognition benchmarks. This method is especially beneficial in the semi-supervised setting, allowing state-of-the-art semi-supervised methods to match the performance of fully supervised methods, while using much fewer labels nonetheless. Code is available at https://github.com/avihu111/TypiClust.
13
+
14
+ # 1 Introduction
15
+
16
+ For the most part, deep learning technology critically depends on access to large amounts of annotated data. Yet annotations are costly and remain so even in our era of Big Data. Deep active learning (AL) aims to alleviate this problem by improving the utility of the annotated data. Specifically, given a fixed budget $b$ of examples that can be annotated, and some deep learner, AL algorithms aim to query those $b$ examples that will most benefit this learner.
17
+
18
+ In order to optimally choose unlabeled examples to be annotated, most deep AL strategies follow some combination of two main principles: 1) Uncertainty sampling [e.g., 26, 48, 3, 14, 15, 36, 25], in which examples that the learner is most uncertain about are picked, to maximize the added value of the new annotations. 2) Diversity Sampling [e.g., 1, 22, 16, 38, 18, 40, 37, 17, 47, 42, 46], in which examples are chosen from diverse regions of the data distribution, to represent it wholly and reduce redundancy in the annotation.
19
+
20
+ Most AL methods fail to improve over random selection when the annotation budget is very small [35, 39, 7, 32, 55, 21, 2], a phenomenon sometimes termed "cold start" [8, 49, 16, 50, 23]. When the budget contains only a few examples, these methods struggle to improve the model's performance, and often fail to even reach the accuracy of the random baseline. Recently, it was shown that uncertainty sampling is inherently unsuited for the low-budget regime, which may explain the cold start phenomenon [19]. The low-budget scenario is relevant in many applications, especially those requiring an expert tagger
21
+
22
+ ![](images/b3242aa78041cd1379ba27fe9cf4bcf5ca4ae73b74b5d63be1fccb1f0ca18bc8.jpg)
23
+
24
+ ![](images/43980701fe4f26316020f43e6ecd3fa53d1636f748ac2cce11ad60f6b9a7a61f.jpg)
25
+ ProbCover selection
26
+
27
+ ![](images/abb8c4b48a9168ace170374b07f257b08dd4e023ada37ec35dd061bf8f347b37.jpg)
28
+
29
+ ![](images/5aac90c9ead85594b7b6050110f6e4705843d2d8d6b4eca021dfb8334a03bd5c.jpg)
30
+
31
+ ![](images/f8293b76afb555633b3119f1cea07b86803c91674c1610330b6f58767ce3670e.jpg)
32
+ (a) 5 Samples
33
+
34
+ ![](images/71ea33e86a34d756b5960c8763ca0b76cd24372ceb1bc9f16e9739987cd70883.jpg)
35
+ (b) 20 Samples
36
+
37
+ ![](images/806dc16187a1429bdaad31ece86d1d30536a5c3f6bc5793c4c094c685530bc8e.jpg)
38
+ (c) 50 Samples
39
+ Figure 1: ProbCover selection (top) vs Coreset selection (bottom) of 5/20/50 samples (out of 600). Selected points are marked by $\mathbf{x}$ , which is color-coded by density (see color code bar to the right). Density is measured using Gaussian Kernel Density Estimation, and the covered area is marked in light blue. Coreset attempts to minimize ball size, constrained by complete coverage, while ProbCover attempts to maximize coverage, constrained by a fixed ball size. Note that especially in low budgets, such as 5 samples, Coreset only selects outliers of the distribution (yellow), while ProbCover selects from dense regions of the distribution (red).
40
+
41
+ whose time is expensive (e.g., a radiologist tagger for tumor detection). If we want to expand deep learning to new domains, overcoming the cold start problem is an ever-important task.
42
+
43
+ In this work, we focus on understanding the very low budget regime of AL, where the budget of $b$ examples cannot dependably represent the underlying data distribution. To meet this challenge, in Sections 2.1-2.2 we model the problem as Max Probability Cover, defined as follows: given some data distribution and a radius $\delta$, select the $b$ examples that maximize the probability of the union of balls of radius $\delta$ around each example. We further show that under a separation assumption that is realistic in semantic embedding spaces, Max Probability Cover is well suited to the nearest-neighbor classification model, in that it minimizes an upper bound on its generalization error.
44
+
45
+ In Section 2.4 we show a connection with existing deep AL methods, like Coreset [37], and explain why those methods are more suitable for the high-budget regime than the low-budget regime. This phenomenon is visualized in Fig. 1, where we see that with only a few examples to choose, Coreset – an AL strategy that employs the principle of diversity sampling – chooses distant and often abnormal points, while ProbCover chooses representative examples.
46
+
47
+ When using the empirical data distribution, we further show that Max Probability Cover reduces to Max Coverage – a classical NP-hard problem [34] (see Section 2.2). To obtain a practical AL strategy, in Section 3 we adapt a greedy algorithm that selects $b$ examples from a fixed finite pool of unlabeled examples (the training set), and guarantees a $1 - \frac{1}{e}$ approximation to the optimum. We call this new method ProbCover.
48
+
49
+ In Section 4 we empirically evaluate the performance of ProbCover on several computer vision datasets, including CIFAR-10, CIFAR-100, Tiny-ImageNet, ImageNet and its subsets. ProbCover is thus shown to significantly outperform all alternative deep AL methods in the very low-budget regime. Additionally, ProbCover improves the performance of state-of-the-art semi-supervised methods, which were thought until recently to make AL redundant [6], allowing for the learning of computer vision tasks with very few annotated examples.
50
+
51
+ Relation to prior art Recent work investigated AL methods based on an approximation to a facility location problem [45, 30, 54, 37, 43], which is a variant of the covering problem. In the minimax facility location problem [13], the entire distribution is covered with a fixed number of balls, which can vary in size, whereas in ProbCover the size of the balls is fixed, and we are allowed to cover only part of the total distribution. While this difference may seem minor, in the low-budget regime, when the budget is not large enough to represent the data, examples chosen by the facility location problem are not representative (as illustrated in Fig. 1), which leads to poor performance (as shown in Fig. 10).
52
+
53
+ # Summary of contribution
54
+
55
+ (i) Develop a theoretical framework to analyze AL strategies in embedding spaces, with "dual" low and high-budget interpretations.
56
+ (ii) Introduce ProbCover, a low-budget AL strategy motivated by our framework, which significantly outperforms other methods in the low-budget regime.
57
+ (iii) Demonstrate the outstanding competence of ProbCover in semi-supervised learning with very few labeled examples.
58
+
59
+ # 2 Theoretical Analysis
60
+
61
+ To address the challenge of active learning in low budgets, we adopt a point coverage framework. Concretely, we analyze the generalization of the Nearest Neighbor (NN) classification model, as this model depends exclusively on distances from a set of training examples and does not involve any additional inductive bias. Thus, in Section 2.1 we develop a bound on the generalization error of 1-NN models. It follows from the analysis (see discussion below) that if full coverage is required, which is only practical in the high-budget regime, the minimization of this bound translates to the minimax facility location problem, which is known to be NP-hard and which the AL Coreset algorithm by Sener and Savarese [37] is designed to approximate.
62
+
63
+ In contrast, in the low-budget regime, the aforementioned bound is best optimized by seeking a labeled set $L$ whose probability of covering the unlabeled set is maximal. In Section 2.2 we show that this problem is also NP-hard. Furthermore, when the data distribution is not known and is approximated by the empirical distribution, we show that it is equivalent to the classical Max Coverage problem. ProbCover, described in Section 3, is designed to solve this problem. In Section 2.4 we discuss a sense in which the high-budget and low-budget problems are dual.
64
+
65
+ # 2.1 Bounding the Generalization Error
66
+
67
+ We shall now derive a bound on the generalization error of the 1 Nearest Neighbor (1-NN) classifier. We start with some necessary notations and definitions. Most important is the assumption of $\delta$ -purity, which states that most of the time, points that are less than $\delta$ apart have the same label. We then prove a lemma, showing that given a labeled set $L$ and the coverage it achieves, and given the $\delta$ -purity assumption, the probability of a point being inside this cover and still being falsely labeled is small. From this, we finally derive a bound on the generalization error, which is stated in Thm. 1.
68
+
69
+ Notations Let $\mathbb{X}$ denote the input domain whose underlying probability function is denoted $P$ , and let $\mathbb{Y} = [k]$ denote the target domain. Assume that a true labeling function $f: \mathbb{X} \to \mathbb{Y}$ exists. Let $X = \{x_{i}\}_{i=1}^{m}$ denote an unlabeled set of points, and $b \leq m$ the annotation budget. Let $L \subseteq X$ denote the labeled set, where $|L| = b$ . Let $B_{\delta}(x) = \{x': \|x' - x\|_{2} \leq \delta\}$ denote a ball centered at $x$ of radius $\delta$ . Let $C \equiv C(L, \delta) = \bigcup_{x \in L} B_{\delta}(x)$ denote the region covered by $\delta$ -balls centered at the labeled examples in $L$ . We call $C(L, \delta)$ the covered region and $P(C)$ the coverage.
70
+
71
+ Definition 2.1. We say that a ball $B_{\delta}(x)$ is pure if $\forall x^{\prime}\in B_{\delta}(x):f(x^{\prime}) = f(x)$
72
+
73
+ Definition 2.2. We define the purity of $\delta$ as
74
+
75
+ $$
76
+ \pi(\delta) = P\left(\left\{x : B_{\delta}(x) \text{ is pure}\right\}\right).
77
+ $$
78
+
79
+ Notice that $\pi (\delta)$ is monotonically decreasing.
80
+
81
+ Let $\hat{f}$ denote the 1-NN classifier based on $L$ . We split the covered region $C(L, \delta)$ into two sets:
82
+
83
+ $$
84
+ C_{\text{right}} = \{x \in C : \hat{f}(x) = f(x)\}, \qquad C_{\text{wrong}} = C \setminus C_{\text{right}}.
85
+ $$
86
+
87
+ Lemma 1. $C_{wrong} \subseteq \{x : B_{\delta}(x) \text{ is not pure}\}$ .
88
+
89
+ Proof. Let $x \in C_{\text{wrong}}$ . Let $c \in L$ denote the nearest neighbor to $x$ . Then they have the same predicted label, $\hat{f}(x) = \hat{f}(c)$ , and $f(c) = \hat{f}(c)$ because $c$ is labeled. Since $x$ is wrongly labeled, $\hat{f}(x) \neq f(x)$ , which implies that
90
+
91
+ $$
92
+ f (c) = \hat {f} (c) = \hat {f} (x) \neq f (x).
93
+ $$
94
+
95
+ Finally, since $x \in C_{\text{wrong}} \subseteq C$ is in the coverage, $d(x, c) < \delta$ , which means that $c \in B_{\delta}(x)$ with a different label and so $B_{\delta}(x)$ is not pure.
96
+
97
+ # Corollary 1.
98
+
99
+ $$
100
+ P\left(C_{\text{wrong}}\right) \leq P\left(\left\{x : B_{\delta}(x) \text{ is not pure}\right\}\right) = 1 - \pi(\delta).
101
+ $$
102
+
103
+ Theorem 1. The generalization error of the 1-NN classifier $\hat{f}$ is bounded as follows
104
+
105
+ $$
106
+ \mathbb{E}\left[\mathbb{1}_{\hat{f}(x) \neq f(x)}\right] \leq \left(1 - P\left(C(L, \delta)\right)\right) + \left(1 - \pi(\delta)\right). \tag{1}
107
+ $$
108
+
109
+ Proof.
110
+
111
+ $$
112
+ \begin{array}{l} \mathbb{E}\left[\mathbb{1}_{\hat{f}(x) \neq f(x)}\right] = \mathbb{E}[\mathbb{1}_{f(x) \neq \hat{f}(x)} \mathbb{1}_{x \notin C}] + \mathbb{E}[\mathbb{1}_{f(x) \neq \hat{f}(x)} \mathbb{1}_{x \in C}] \\ \leq P(x \notin C) + \mathbb{E}[\mathbb{1}_{f(x) \neq \hat{f}(x)} \mathbb{1}_{x \in C_{\text{right}}}] + \mathbb{E}[\mathbb{1}_{f(x) \neq \hat{f}(x)} \mathbb{1}_{x \in C_{\text{wrong}}}] \\ \leq P(x \notin C) + 0 + P(x \in C_{\text{wrong}}) \\ \leq \left(1 - P(C(L, \delta))\right) + \left(1 - \pi(\delta)\right). \end{array}
113
+ $$
114
+
115
+ Note that (1) gives us a different bound for different $\delta$ values, which also depends on the labeled set $L$ . This bound introduces a trade-off: as $\delta$ increases, the coverage increases, but the purity decreases. Ideally, we should seek a pair $\{\delta, L\}$ that achieves the tightest bound.
116
+
117
+ Discussion We can interpret (1) in the context of two boundary conditions of AL: high-budget and low-budget. In the high-budget regime, achieving full coverage $P(C) = 1$ is feasible as we have many points, and the remaining challenge is to reduce $1 - \pi(\delta)$ . Accordingly, since $\pi(\delta)$ is monotonically decreasing, we seek to minimize $\delta$ subject to the constraint $P(C) = 1$ . This is similar to Coreset [37]. In the low-budget regime, full coverage entails very low purity, which (if sufficiently low) makes the bound trivially 1. Thus, instead of insisting on full coverage, we fix a $\delta$ that yields "large enough" purity $\pi(\delta) > 0$ , and then seek a labeled set $L$ that maximizes the coverage $P(C)$ . We call this problem Max Probability Cover.
118
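The trade-off behind the bound in (1) can be made concrete numerically. The following sketch is our own illustration, not from the paper: it draws a synthetic two-class Gaussian mixture as a stand-in for an embedding space, fixes a labeled set of one example per class, and estimates the coverage and purity terms empirically for several values of $\delta$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an embedding space: two well-separated classes in R^2.
X = np.concatenate([rng.normal(-2.0, 1.0, size=(500, 2)),
                    rng.normal(+2.0, 1.0, size=(500, 2))])
y = np.repeat([0, 1], 500)
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
L_idx = np.array([0, 500])                             # one labeled point per class

def coverage(delta):
    """Empirical P(C): fraction of points within delta of some labeled point."""
    return (D[L_idx] <= delta).any(axis=0).mean()

def purity(delta):
    """Empirical pi(delta): fraction of points whose delta-ball is label-pure."""
    same = y[:, None] == y[None, :]
    # (D <= delta) <= same is element-wise implication: in-ball => same label.
    return ((D <= delta) <= same).all(axis=1).mean()

for delta in (0.5, 1.0, 2.0, 4.0):
    bound = (1 - coverage(delta)) + (1 - purity(delta))
    print(f"delta={delta:.1f}  coverage={coverage(delta):.2f}  "
          f"purity={purity(delta):.2f}  bound={bound:.2f}")
```

As $\delta$ grows, coverage rises while purity falls, so the bound is minimized at an intermediate radius.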
+
119
+ # 2.2 Max Probability Cover
120
+
121
+ Definition 2.3 (Max Probability Cover). Fix $\delta > 0$, and find a subset $L \subseteq X$, $|L| = b$, that maximizes the probability of the covered region $P(C(L, \delta))$:
122
+
123
+ $$
124
+ \underset{L \subseteq X,\; |L| = b}{\operatorname{argmax}} \; P\left(\bigcup_{x \in L} B_{\delta}(x)\right) \tag{2}
125
+ $$
126
+
127
+ An optimal solution to (2) would minimize the bound in (1), when $\delta$ is fixed.
128
+
129
+ Unfortunately, when moving to practical settings there are two obstacles. The first is complexity:
130
+
131
+ Theorem 2. Max Probability Cover is NP-hard.
132
+
133
+ Proof. (Sketch, see full proof in App. B) We construct a reduction from an established NP-hard problem (Max Coverage, see Def. A.1) to Max Probability Cover. For the collection of subsets $S = \{S_1, \ldots, S_m\}$ , we consider the space $\mathbb{R}^m$ and a collection of $\delta$ -balls $\{B_{\delta}(x_i)\}_{i=1}^m$ with the exhaustive intersection property. This means that any subset of the balls has at least one point that is contained in all the balls in the subset, but not contained in any other ball (see example in Fig. 2). The existence of such a collection of balls in $\mathbb{R}^m$ , $\forall m$ , is proved in Lemma 3 (see App. B). We then assign each $S_i$ to $B_{\delta}(x_i)$ , and each element in $S_i$ is mapped to a point in the intersection of all the balls assigned to subsets that contain it. Each such point then defines a Dirac measure, the normalized sum of which determines a probability distribution on $\mathbb{R}^m$ . The selection of $\delta$ -balls that is the solution to the Max Probability Cover can be translated back to a selection of subsets, which is the solution to the original Max Coverage problem.
134
+
135
+ ![](images/829360fb139a12ad27944474986fcc2600eddb2b7a6b98526b5ecebd62600c65.jpg)
136
+ (a)
137
+
138
+ ![](images/950e7aa7792be033ca36702d6dddd38ac47ae29ee83cdd722522e207f055e99f.jpg)
139
+ (b)
140
+ Figure 2: Illustration in $\mathbb{R}^2$ of exhaustive intersection (see Def. B.2). (a) With 3 balls, every subset of balls has a point that is contained only in the specific subset. Thus this set of 3 balls has the exhaustive intersection property. (b) With 4 balls, any point in the intersection of two opposite balls is also contained in at least one other ball. Thus this set of 4 balls does not have the exhaustive intersection property. In the drawing, the region of intersection between the red and blue balls is outlined in purple, while the points within the region that are unique to this pair are marked in light purple. Note that in example (b), this set is empty.
141
+
142
+ # 2.3 Using the Empirical Distribution
143
+
144
+ When employing Max Probability Cover, the second practical problem concerns the data distribution, which is hardly ever known apriori. In fact, even when known, the subsequent probabilistic computations are often intractable and hard to approximate. Instead, we may use the empirical distribution $\tilde{P}(A) = \frac{1}{m}\sum_{i=1}^{m}\mathbb{1}_{x_i\in A}$ as an approximation, which gives us the following useful result:
145
+
146
+ Proposition. When $P$ is the empirical distribution $\tilde{P}$ , the Max Probability Cover objective function is equivalent to the Max Coverage objective, with $\{B_{\delta}(x_i) \cap X\}_{i=1}^m$ as the collection of subsets.
147
+
148
+ Proof. Given a labeled set $L = \{x_{i}\}_{i=1}^{b}$, we show that the two objectives are equal up to the constant factor $\frac{1}{|X|}$.
149
+
150
+ $$
151
+ \begin{array}{l} \tilde{P}\left(\bigcup_{i=1}^{b} B_{\delta}(x_i)\right) = \tilde{P}\left(\left\{y \in \mathbb{R}^{d} \mid \exists i \;\; \|x_i - y\| \leq \delta\right\}\right) \\ = \frac{1}{|X|}\left|\left\{x \in X \mid \exists i \;\; \|x_i - x\| \leq \delta\right\}\right| \\ = \frac{1}{|X|}\left|\bigcup_{i=1}^{b}\left(B_{\delta}(x_i) \cap X\right)\right| \end{array}
152
+ $$
153
+
154
155
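Under the empirical distribution, this equality can be checked directly in code. The sketch below is a toy illustration of ours (synthetic data, our own variable names), computing the objective once as an empirical probability and once as the Max Coverage count of the union of sets $B_{\delta}(x_i) \cap X$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # unlabeled pool in a toy embedding space
L = X[:3]                       # a labeled set of b = 3 points
delta = 0.8

# Empirical probability of the union of delta-balls around the labeled set.
in_some_ball = (np.linalg.norm(X[:, None] - L[None, :], axis=-1) <= delta).any(axis=1)
lhs = in_some_ball.mean()

# Max Coverage view: |union of (B_delta(x_i) ∩ X)| / |X|.
subsets = [set(np.flatnonzero(np.linalg.norm(X - c, axis=1) <= delta)) for c in L]
rhs = len(set.union(*subsets)) / len(X)

print(lhs, rhs)   # the two objectives coincide
```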
+
156
+ # 2.4 The "Duality" of Max Probability Cover and Coreset
157
+
158
+ The Coreset AL method by Sener and Savarese [37] minimizes the objective
159
+
160
+ $$
161
+ \delta(L) = \max_{x \in X} \min_{c \in L} d(x, c) = \min\left\{\delta \in \mathbb{R}_{+} : X \subseteq \bigcup_{c \in L} B_{\delta}(c)\right\}
162
+ $$
163
+
164
+ We can rewrite the above in the language of distributions as
165
+
166
+ $$
167
+ \delta^{\prime}(L) = \min\left\{\delta \in \mathbb{R}_{+} : P\left(\bigcup_{c \in L} B_{\delta}(c)\right) = 1\right\}
168
+ $$
169
+
170
+ If we use the empirical distribution then $\delta(L) = \delta'(L)$ . In this framework we can say that Max Probability Cover and Coreset are dual problems in the following loose sense:
171
+
172
+ 1. Max Probability Cover minimizes the generalization error bound (1) when we fix $\delta$ and seek to maximize the coverage, which is suitable for the low budget regime.
173
+ 2. Coreset minimizes the generalization error bound (1) when we fix the coverage to 1 and minimize $\delta$ , which is suitable for the high budget regime because only then can we fix the coverage to 1.
174
+
175
+ This duality is visualized in Fig. 1.
176
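The two dual objectives differ only in which quantity is held fixed, which a few lines of numpy make explicit. This is our own toy illustration on synthetic data, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))                      # unlabeled pool (toy embedding)
L = X[rng.choice(len(X), size=5, replace=False)]   # a candidate labeled set

# Distance from every pool point to its nearest labeled point.
nearest = np.linalg.norm(X[:, None] - L[None, :], axis=-1).min(axis=1)

# Coreset objective (high budget): smallest radius achieving full coverage.
coreset_radius = nearest.max()

# Max Probability Cover objective (low budget): coverage at a *fixed* radius.
delta = 0.5
prob_coverage = (nearest <= delta).mean()

print(f"smallest radius with full coverage: {coreset_radius:.2f}")
print(f"coverage at fixed delta={delta}:    {prob_coverage:.2%}")
```

Coreset minimizes `coreset_radius` over the choice of `L`; ProbCover instead maximizes `prob_coverage` with `delta` fixed.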
+
177
+ # 3 Method: ProbCover
178
+
179
+ To deliver a practical method, we first note that our approach implicitly relies on the existence of a good embedding space [4, 10, 53], where distance is correlated with semantic similarity, and where similar points are likely to bunch together in high-density regions. As is now customary [e.g., 30, 19], we use an embedding space derived by training a self-supervised task over the large unlabeled pool. In such a space, similar labels often correspond to short distances, making 1-NN classification suitable, and also providing for the existence of large enough $\delta$-balls with good purity and coverage properties.
180
+
181
+ Secondly, we note that Max Coverage is NP-hard and cannot be solved efficiently. Instead, as its objective is submodular and monotone [27], we use the greedy approximate algorithm that achieves $\left(1 - \frac{1}{e}\right)$ -approximation [27]. A better approximation is impractical, as shown in App. D.1. See App. E for additional time and space complexity analysis.
182
+
183
+ Below, we describe the greedy algorithm in Section 3.1, and the estimation of ball size $\delta$ in Section 3.2.
184
+
185
+ # 3.1 Greedy Algorithm
186
+
187
+ Algorithm 1 ProbCover
+ Input: unlabeled pool $U$, labeled pool $L$, budget $b$, ball size $\delta$
+ Output: a set of points to query
+ $X \gets$ embedding of a representation learning algorithm on $U \cup L$
+ $G = (V = X, E = \{(x, x') : x' \in B_{\delta}(x)\})$
+ for all $c \in L$ do
+     remove from $E$ the incoming edges to covered vertices, $\{(x', x) \in E : (c, x) \in E\}$
+ end for
+ Queries $\gets \emptyset$
+ for $i = 1, \ldots, b$ do
+     add the point $c \in U$ with the highest out-degree in $G$ to Queries
+     remove from $E$ the incoming edges to covered vertices, $\{(x', x) \in E : (c, x) \in E\}$
+ end for
+ return Queries
197
+
198
+ The algorithm (see Alg. 1 below for pseudo-code) goes as follows: First, construct a directed graph $G = (V, E)$ , with $V = X$ the embedding of the data space, and $(x, x') \in E \iff x' \in B_{\delta}(x) \iff d(x, x') \leq \delta$ . In $G$ , each vertex represents a specific example, and there is an edge between two vertices $(x, x')$ if $x'$ is covered by the $\delta$ -ball centered at $x$ (distances are measured in the embedding space). The algorithm then performs $b$ iterations of the following two steps:
199
+
200
+ (i) Pick the vertex $x_{max}$ with the highest out-degree for annotation;
201
+ (ii) Remove all incoming edges to $x_{max}$ and its neighbors.
202
+
203
+ As ProbCover uses a sparse representation of the adjacency graph, it is able to scale to large datasets while requiring limited space resources. The complexity analysis of the algorithm, and specifically the complexity of constructing the adjacency graph and of the sample selection, is discussed in App. E.
204
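A compact rendering of Alg. 1 can be written in a few lines of numpy. This is our own sketch, not the authors' code: it uses a dense distance matrix for brevity (the paper's implementation uses a sparse adjacency graph to scale), and all variable names are ours.

```python
import numpy as np

def probcover_select(X, labeled_idx, budget, delta):
    """Greedy Max Coverage selection in the spirit of Alg. 1 (dense sketch).

    X           : (n, d) array of embeddings
    labeled_idx : indices of already labeled points (may be empty)
    budget      : number of new queries b
    delta       : ball radius
    """
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    adj = D <= delta                      # adj[i, j]: x_j lies in the ball of x_i
    covered = np.zeros(n, dtype=bool)
    for c in labeled_idx:                 # already covered points give no credit
        covered |= adj[c]
    queries = []
    for _ in range(budget):
        # Out-degree after edge removal = number of still-uncovered points per ball.
        gain = (adj & ~covered).sum(axis=1)
        c = int(gain.argmax())
        queries.append(c)
        covered |= adj[c]                 # remove incoming edges to covered points
    return queries

# Toy usage: two tight, well-separated clusters of 50 points each.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
picks = probcover_select(X, labeled_idx=[], budget=2, delta=1.0)
print(picks)
```

After the first query covers most of one cluster, the greedy gain pushes the second query into the other, still uncovered, cluster.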
+
205
+ # 3.2 Estimating $\delta$
206
+
207
+ Our algorithm requires the specification of the hyper-parameter $\delta$, the ball radius, whose value depends on details of the embedding space (see App. C.1 for the embeddings used). In choosing $\delta$, we need to consider the trade-off between large coverage $P(C)$ and high purity $\pi(\delta)$. We resolve this trade-off with the following heuristic: we pick the largest possible $\delta$ while maintaining purity above a threshold $\alpha \in (0, 1)$. Specifically,
208
+
209
+ $$
210
+ \delta^{*} = \max\left\{\delta : \pi(\delta) \geq \alpha\right\}
211
+ $$
212
+
213
+ Importantly, $\alpha$ is more intuitive to tune, and is kept constant across different datasets (unlike $\delta$). We still need to estimate the purity $\pi(\delta)$, which depends on the labels, from unlabeled data. To this end, we estimate purity using unsupervised representation learning and clustering. First, we cluster the self-supervised features using $k$-means, with $k$ equal to the number of classes. For a given $\delta$, we compute the purity $\pi(\delta)$ using the cluster assignments as pseudo-labels for each example. Searching for the best $\delta$, we repeat the process and pick the largest $\delta$ such that at least a fraction $\alpha = 0.95$ of the balls are pure.
216
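The $\delta$-selection heuristic can be sketched end-to-end. Everything below is an illustrative stand-in of ours: a tiny Lloyd's $k$-means replaces the library clustering used in practice, the data is synthetic, and the candidate grid is arbitrary.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny Lloyd's k-means returning pseudo-labels (stand-in for a library call)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None, :], axis=-1).argmin(1)
        centers = np.stack([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def estimate_delta(X, k, alpha=0.95, candidates=np.linspace(0.05, 3.0, 60)):
    """Pick the largest delta whose pseudo-label purity stays above alpha."""
    pseudo = kmeans(X, k)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    same = pseudo[:, None] == pseudo[None, :]
    best = candidates[0]
    for delta in candidates:
        # A ball is pure if every member shares the center's pseudo-label.
        pure = ((D <= delta) <= same).all(axis=1).mean()
        if pure >= alpha:
            best = delta
    return best

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
delta_star = estimate_delta(X, k=2)
print(delta_star)
```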
+
217
+ In Fig. 3, we plot the percentage of pure balls across different datasets as a function of $\delta$ , where the dashed line represents the $\delta^{*}$ chosen by ProbCover.
218
+
219
+ ![](images/d8a87d3feb2322a5d4ea77af576c4e79b8ddcf77df8bf3c9ea5cfa88224e1feb.jpg)
220
+ (a) CIFAR-10
221
+
222
+ ![](images/c625a80cdc0e1d633ad88458bfd295aa8b56dd562b65f08b89baaf6d653c98ba.jpg)
223
+ (b) CIFAR-100
224
+ ![](images/9fc999eaec8b90290cc8808b9e04e025d436eef41454cdd42163cc13fce8fffe.jpg)
+ (c) Tiny-ImageNet
+ 
+ ![](images/0439bbe37e82b959df632054b788d9639903317d384f7a226671f5f19e026a6e.jpg)
+ (d) ImageNet
+ Figure 3: Ball purity, as a function of $\delta$, estimated from the unlabeled data (see text). The dashed line marks the largest $\delta$ for which purity remains above $\alpha = 0.95$.
231
+
232
+ # 4 Empirical Results
233
+
234
+ We report a set of empirical results, comparing ProbCover to other AL strategies in a variety of settings. We focus on the very low budget regime, with a budget size $b$ of the same order of magnitude as the number of classes. Note that since the data is picked from an unlabeled pool, chances are that the initial labeled set will not be balanced across classes, and in the early stages of training some classes will almost always be missing. ProbCover's excellent performance, as seen below, demonstrates its robustness to this hurdle.
235
+
236
+ # 4.1 Methodology
237
+
238
+ Three deep AL frameworks are evaluated:
239
+
240
+ (i) Fully supervised: train a ResNet-18 only on the annotated data, as a fully supervised task.
241
+ (ii) Semi-supervised by transfer learning: create a representation of the data by training with a self-supervised task on the unlabeled data, then construct a 1-NN classifier using the ensuing representation in a supervised manner. This framework is intended to capture the basic benefits of semi-supervised learning, regardless of the added benefits provided by modern semi-supervised learning methods and the more sophisticated derivation of pseudo-labels.
242
+ (iii) Fully semi-supervised: train a competitive semi-supervised model on both the annotated and unlabeled data. In our experiments we use FlexMatch by Zhang et al. [51].
243
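Framework (ii) amounts to a single nearest-neighbor lookup in the frozen feature space. A minimal sketch of this evaluation (the feature arrays, names, and toy data are our own placeholders, not the paper's code):

```python
import numpy as np

def nn_accuracy(feats, labels, labeled_idx, test_feats, test_labels):
    """Framework (ii): classify each test point by its nearest labeled neighbor."""
    D = np.linalg.norm(test_feats[:, None] - feats[labeled_idx][None, :], axis=-1)
    pred = labels[labeled_idx][D.argmin(axis=1)]
    return (pred == test_labels).mean()

# Toy check on two separable blobs: one label per class suffices for 1-NN.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(-3.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
labels = np.repeat([0, 1], 50)
acc = nn_accuracy(feats, labels, np.array([0, 50]), feats, labels)
print(acc)
```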
+
244
+ In frameworks (i) and (ii) we adopt the evaluation kit created by Munjal et al. [33], in which we can compare multiple deep AL strategies in a principled way. In framework (iii), we adopt the code and hyper-parameters provided by FlexMatch.
245
+
246
+ When evaluating frameworks (i) and (ii), we compare ProbCover to 10 deep AL strategies as baselines: (1) Random - query uniformly at random. (2)-(4) query the examples with the lowest score, using the following basic scores: (2) Uncertainty - max softmax output, (3) Margin - margin between the two highest softmax outputs, (4) Entropy - inverse entropy of the softmax outputs. (5) BADGE [1]. (6) DBAL [15]. (7) TypiClust [19]. (8) BALD [26]. (9) W-Dist [30], see also App. D.4. (10) Coreset [37]. We note that while most baseline methods are suitable for the high-budget regime, TypiClust and W-Dist are also suitable for the low-budget regime. Similarly to ProbCover, TypiClust requires a good embedding space to work properly. When comparing ProbCover and TypiClust, and in order to avoid possible confounds, we use the same embedding space for both methods.
247
+
248
+ These AL methods are evaluated on the following classification datasets: CIFAR-10/100 [28], Tiny-ImageNet [29], and ImageNet [12] and its subsets (following Van Gansbeke et al. [41]). For CIFAR-10/100 and Tiny-ImageNet, we use as input the embedding of SimCLR [9] across all methods. For ImageNet, we use as input the embedding of DINO [5] throughout. Results on ImageNet-50/100 are deferred to App. D.1. Details concerning specific networks and hyper-parameters can be found in App. C, and in the attached code in the supplementary material. When evaluating frameworks (i) and (ii), we perform 5 active learning rounds, querying a fixed budget of $b$ examples in each round. In framework (iii), as FlexMatch is computationally demanding, we only evaluate methods on their initial pool selection capabilities.
251
+
252
+ # 4.2 Main Results
253
+
254
+ (i) Fully supervised framework. We evaluate different AL methods based on the performance of a deep neural network trained directly on the raw queried data. In each round, we query $b$ samples where $b$ is equal to the number of classes in each dataset, and train a ResNet-18 on the accumulated queried set. We repeat this for 5 active learning rounds, and plot the mean accuracy of 5 repetitions (3 for ImageNet) in Fig. 4 (see App. D.1 for additional results).
255
+
256
+ ![](images/b9f3e5e96cf246b3ebf45d9a2172c4213c43d373f1826153dbac1f911d7db3b1.jpg)
257
+ Figure 4: Framework (i), fully supervised: The performance of ProbCover is compared with baseline AL strategies in image classification tasks in the low budget regime. Budget $b$ guarantees on average 1 sample per class, thus the initial sample may be imbalanced. The final average test accuracy in each iteration is reported, using 5 repetitions (3 for ImageNet). The shaded area reflects the standard error across repetitions.
258
+
259
+ (ii) Semi-supervised by transfer learning. In this framework, we make use of pretrained self-supervised features, and measure classification performance using the 1-NN classifier. Accordingly, each point is classified by the label of its nearest neighbor (within the selected labeled set $L$) in the self-supervised feature space. In low budgets, this framework outperforms the fully supervised framework (i), though it is not as effective as the full-blown semi-supervised learning framework (iii). This supports the generality of our findings, which are not limited to any specific semi-supervised method. Similarly to Fig. 4, in Fig. 5 we plot the mean accuracy of 5 repetitions for the different tasks.
260
+
261
+ ![](images/ed705f62c3e9087414e84f8916c3170a320497c38acff7ac0ea7b063d3f81fa3.jpg)
262
+ Figure 5: Comparative evaluation of framework (ii) - semi-supervised by transfer learning, see caption of Fig. 4.
263
+
264
+ (iii) Semi-supervised framework. We compare the performance of different AL strategies used prior to running FlexMatch, a state-of-the-art semi-supervised method. In Fig. 6 we show results with 3 repetitions of FlexMatch, using the labeled sets provided by different AL strategies and budget $b$ equal to the number of classes. We see that ProbCover outperforms random sampling and other AL baselines by a large margin. We note that in agreement with previous works [6, 19], AL strategies that are suited for high budgets do not improve the results of random sampling, while AL strategies that are suited for low budgets achieve large improvements.
265
+
266
+ ![](images/4b3e0fb29f583bf4c8aeb6b20a921dd4762afdf18d3171ee3f109a9dd21bee20.jpg)
+ (a) CIFAR-10
+ 
+ ![](images/c1694b103307cbeb10df1229342e4c784bc554744ad1f5e66d933c47ecc09bbe.jpg)
+ (b) CIFAR-100
+ 
+ ![](images/e6059a688dbdec87049459fc6ee6238303428bf66f45cd439466a568dc9234d4.jpg)
+ (c) Tiny-ImageNet
+ Figure 6: Framework (iii), semi-supervised: comparison of AL strategies in a semi-supervised task. Each bar shows the mean test accuracy over 3 repetitions of FlexMatch trained using $b$ labeled examples, where $b$ is equal to the number of classes in each task. Error bars denote the standard error.
+ 
+ # 4.3 Ablation Study
+ 
+ We report a set of ablation studies, evaluating the added value of each step of ProbCover.
279
+
280
+ Random initial selection When following the uncertainty sampling principle, as many AL methods do, a trained learner is needed. Any such method therefore requires a non-empty initial pool of labeled examples, used to train a rudimentary learner from which uncertainty selection can be bootstrapped. Of the methods evaluated here (see Section 4.1), only two, ProbCover and TypiClust, are not affected by this problem. This can be seen in Fig. 4, noting that only these two methods do better than random in the initial step. Is this the only reason they outperform other methods in low budgets?
281
+
282
+ To address this question, we repeat the experiments reported in Fig. 4a-4b, using the same random initial set of annotated examples for all methods. Results are reported in Fig. 7. Comparing Fig. 4a-4b with Fig. 7, we see that the advantage of ProbCover and TypiClust goes beyond the initial set selection, and remains in effect even when this factor is eliminated.
283
+
284
+ ![](images/ea8ac30a69c20eba30b0c65d695cb1f329f7d987301f4aa595eccfd03e8e35c7.jpg)
285
+ (a) CIFAR-10
286
+ (b) CIFAR-100
287
+ Figure 7: Random Initial pool in the supervised framework, an average of 1 sample per class.
288
+
289
+ ![](images/d56915dedec47a2b22b204660c8ab9f702e20162e42e7800f660713b2cdf34f7.jpg)
290
+ (a) CIFAR-10
291
+ (b) CIFAR-100
292
+ Figure 8: Comparison of ProbCover when applied to the raw data vs the embedding space.
293
+
294
+ RGB space distances As discussed in Section 3, our approach relies on the existence of a good embedding space, where distance is correlated with semantic similarity. We now verify this claim by repeating the basic fully-supervised experiments (Fig. 4) with one difference: ProbCover can only use the original RGB space representation to compute distances. Results are shown in Fig. 8. When comparing the original ProbCover with its variant using RGB space, a significant drop in performance is seen as expected, demonstrating the importance of the semantic embedding space.
295
+
+ The interaction between $\delta$ and budget size. To understand the interaction between the hyperparameter $\delta$ and the budget $b$, we repeat our basic experiments (Fig. 4) on CIFAR-10 with different choices of $\delta$ and $b$. For each pair $(\delta, b)$, we select an initial pool of $b$ examples using ProbCover with $\delta$-balls, and report the difference in accuracy from the selection of $b$ random points. Average results across 3 repetitions are shown in Fig. 9 as a function of $b$. We see that as the budget $b$ increases, smaller $\delta$'s are preferred.
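The greedy selection underlying ProbCover — repeatedly picking the point whose $\delta$-ball covers the most still-uncovered points — can be sketched as follows. This is a minimal NumPy illustration with toy embeddings and a toy $\delta$; it is not the paper's actual implementation.

```python
import numpy as np

def probcover_select(X, budget, delta):
    """Greedy Max Probability Cover: pick `budget` points whose
    delta-balls cover as many embedded points as possible."""
    n = X.shape[0]
    # Pairwise squared distances -> adjacency of the delta-ball covering graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    covers = d2 <= delta ** 2       # covers[i, j]: x_j lies in the ball around x_i
    covered = np.zeros(n, dtype=bool)
    selected = []
    for _ in range(budget):
        # Gain = number of still-uncovered points each candidate would cover.
        gains = (covers & ~covered).sum(axis=1)
        best = int(gains.argmax())
        selected.append(best)
        covered |= covers[best]
    return selected, covered.mean()  # indices and achieved coverage fraction

# Toy example: two well-separated clusters; one sample per cluster suffices.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
idx, coverage = probcover_select(X, budget=2, delta=1.0)
```

With a $\delta$ roughly matching the cluster scale, the two greedy picks land in different clusters and the coverage fraction reaches 1, mirroring the "1 sample per class" regime studied above; with a much smaller $\delta$, many more picks would be needed to cover the same data.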
+
+ Coreset vs. ProbCover. In Section 2.4 we argue that ProbCover is suitable for low budgets, while Coreset is suitable for high budgets. To verify this claim, we compare their performance on CIFAR-10 under the following 3 setups, using the same embedding space:
+
+ - Low budget - Select an initial pool of 100 samples using the SimCLR representation.
+
+ ![](images/da0095beafebd8fab7c4fb5cea4c5e803f9232b499b23c0603097558b3aa4944.jpg)
+ Figure 9: The accuracy difference between ProbCover with different $\delta$ values and the outcome of $b$ random samples (average over 3 repetitions).
+
+ ![](images/113b6bc259538b9f8e98f9039274470d369297a8b84785fde64f60df88968174.jpg)
+ (a) Low budget
+
+ ![](images/6295ddeb73c491986811b26baba2218bdcbeaf5c14313f2881e3d0d11fddcf35.jpg)
+ (b) Mid budget
+
+ ![](images/8c5bc5330244f24b580095f005ed9c05c1c9bc7d76f1fac3901e940db82c7252.jpg)
+ (c) High budget
+ Figure 10: Comparing the performance of ProbCover and Coreset under the supervised framework in different budget regimes. The low budget shows an initial pool selection of 100 samples. Mid/high budgets start with 1K/5K samples and query an additional 1K/5K samples (see text).
+
+ - High budget - Train a model on 5K randomly selected examples, then select an additional set of 5K examples using the learner's latent representation. This is the setup used by Sener and Savarese [37].
+ - Mid budget - Same as high budget, except that the initial pool size and the added budget are 1K.
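For reference, the selection rule of the Coreset baseline [37] is k-center greedy: repeatedly add the point farthest from the current pool. A minimal sketch (toy embeddings, not the authors' code) might look like:

```python
import numpy as np

def kcenter_greedy(X, initial_idx, budget):
    """Coreset-style selection: repeatedly add the point whose distance
    to the nearest already-selected point is largest."""
    selected = list(initial_idx)
    # min_dist[j] = distance from x_j to its nearest selected point.
    min_dist = np.min(
        np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=-1), axis=1
    )
    for _ in range(budget):
        far = int(min_dist.argmax())   # farthest point from the current pool
        selected.append(far)
        min_dist = np.minimum(min_dist, np.linalg.norm(X - X[far], axis=1))
    return selected

# Toy example: a 5x5 grid in [0,1]^2; starting from one corner,
# k-center greedy spreads new queries toward the remaining corners.
xs = np.linspace(0.0, 1.0, 5)
X = np.array([(a, b) for a in xs for b in xs])   # 25 points
pool = kcenter_greedy(X, initial_idx=[0], budget=3)
```

This spreading behavior is what makes Coreset effective when the learner's feature space is already informative (high budget), and what makes it outlier-prone in the low-budget regime, where ProbCover's density-aware coverage objective does better.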
+
+ Results are reported in Fig. 10. In the low-budget regime, ProbCover outperforms Coreset, as expected. In the mid-budget regime, where the feature space of the learner is informative, only ProbCover achieves a significant improvement over random selection. In the high-budget regime, Coreset improves over random selection, while ProbCover is least effective.
+
+ # 5 Summary and Discussion
+
+ We study the problem of AL in the low-budget regime. We model the problem as Max Probability Cover, showing that under certain assumptions on the data distribution, which are likely to hold in self-supervised embedding spaces, it optimizes an upper bound on the generalization error of a 1-NN classifier. We devise an AL strategy termed ProbCover, which approximates the optimal solution. We evaluate it empirically in supervised and semi-supervised frameworks across different datasets, showing that ProbCover significantly outperforms other methods in the low-budget regime.
+
+ In future work we intend to investigate: (i) possible avenues for improving the choice of $\delta$, by making use of already known labels or by inferring a score for $\delta$ from the topology of the resulting covering graph; (ii) extensions of the current formulation of Max Probability Cover, making $\delta$ - the radius of the balls - depend on the samples rather than being uniform; (iii) soft-coverage approaches, where coverage is not binary but some continuous measure, which may allow us to do away with $\delta$ altogether.
+
+ # Acknowledgments
+
+ This work was supported by the Israeli Ministry of Science and Technology, and by the Gatsby Charitable Foundation. We are grateful to our dedicated NeurIPS AC, who acted upon the lengthy discussion between us and the reviewers, as seen on OpenReview.
+
+ # References
+
+ [1] Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.
+ [2] Josh Attenberg and Foster Provost. Why label when you can search? Alternatives to active learning for applying human resources to build classification models under extreme class imbalance. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 423-432, 2010.
+ [3] William H Beluch, Tim Genewein, Andreas Nurnberger, and Jan M Kohler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9368-9377, 2018.
+ [4] Javad Zolfaghari Bengar, Joost van de Weijer, Bartlomiej Twardowski, and Bogdan Raducanu. Reducing label effort: Self-supervised meets active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1631-1639, 2021.
+ [5] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021.
+ [6] Yao-Chun Chan, Mingchen Li, and Samet Oymak. On the marginal benefit of active learning: Does self-supervision eat its cake? In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3455-3459. IEEE, 2021.
+ [7] Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, and Vineeth N Balasubramanian. On initial pools for deep active learning. In NeurIPS 2020 Workshop on Pre-registration in Machine Learning, pages 14-32. PMLR, 2021.
+ [8] Liangyu Chen, Yutong Bai, Siyu Huang, Yongyi Lu, Bihan Wen, Alan L Yuille, and Zongwei Zhou. Making your first choice: To address cold start problem in vision active learning. arXiv preprint arXiv:2210.02442, 2022.
+ [9] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
+ [10] Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. Big self-supervised models are strong semi-supervised learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+ [11] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703, 2020.
+ [12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
+ [13] Reza Zanjirani Farahani and Masoud Hekmatfar. Facility location: concepts, models, algorithms and case studies. Springer Science & Business Media, 2009.
+ [14] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR, 2016.
+ [15] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR, 2017.
+ [16] Mingfei Gao, Zizhao Zhang, Guo Yu, Sercan Ö Arik, Larry S Davis, and Tomas Pfister. Consistency-based semi-supervised active learning: Towards minimizing labeling cost. In European Conference on Computer Vision, pages 510-526. Springer, 2020.
+ [17] Yonatan Geifman and Ran El-Yaniv. Deep active learning over the long tail. arXiv preprint arXiv:1711.00941, 2017.
+ [18] Daniel Gissin and Shai Shalev-Shwartz. Discriminative active learning. arXiv preprint arXiv:1907.06347, 2019.
+ [19] Guy Hacohen, Avihu Dekel, and Daphna Weinshall. Active learning on a budget: Opposite strategies suit high and low budgets. arXiv preprint arXiv:2202.02794, 2022.
+ [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
+ [21] Tao He, Xiaoming Jin, Guiguang Ding, Lan Yi, and Chenggang Yan. Towards better uncertainty sampling: Active learning with multiple views for deep convolutional neural network. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pages 1360-1365. IEEE, 2019.
+ [22] SeulGi Hong, Heonjin Ha, Junmo Kim, and Min-Kook Choi. Deep active learning with augmentation-based consistency estimation. arXiv preprint arXiv:2011.02666, 2020.
+ [23] Neil Houlsby, José Miguel Hernández-Lobato, and Zoubin Ghahramani. Cold-start active learning with robust ordinal matrix factorization. In International Conference on Machine Learning, pages 766-774. PMLR, 2014.
+ [24] Harry B Hunt III, Madhav V Marathe, Venkatesh Radhakrishnan, Shankar S Ravi, Daniel J Rosenkrantz, and Richard E Stearns. NC-approximation schemes for NP- and PSPACE-hard problems for geometric graphs. Journal of Algorithms, 26(2):238-274, 1998.
+ [25] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372-2379. IEEE, 2009.
+ [26] Andreas Kirsch, Joost Van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep Bayesian active learning. Advances in Neural Information Processing Systems, 32:7026-7037, 2019.
+ [27] Andreas Krause and Daniel Golovin. Submodular function maximization. Tractability, 3:71-104, 2014.
+ [28] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Online, 2009.
+ [29] Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N, 7(7):3, 2015.
+ [30] Rafid Mahmood, Sanja Fidler, and Marc T Law. Low budget active learning via wasserstein distance: An integer programming approach. arXiv preprint arXiv:2106.02968, 2021.
+ [31] Daniel Marx. Efficient approximation schemes for geometric problems? In European Symposium on Algorithms, pages 448-459. Springer, 2005.
+ [32] Sudhanshu Mittal, Maxim Tatarchenko, Özgün Çiçek, and Thomas Brox. Parting with illusions about deep active learning. arXiv preprint arXiv:1912.05361, 2019.
+ [33] Prateek Munjal, N. Hayat, Munawar Hayat, J. Sourati, and S. Khan. Towards robust and reproducible active learning using neural networks. ArXiv, abs/2002.09564, 2020.
+ [34] George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265-294, 1978.
+ [35] Kossar Pourahmadi, Parsa Nooralinejad, and Hamed Pirsiavash. A simple baseline for low-budget active learning. arXiv preprint arXiv:2110.12033, 2021.
+ [36] Hiranmayi Ranganathan, Hemanth Venkateswara, Shayok Chakraborty, and Sethuraman Panchanathan. Deep active learning for image classification. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3934-3938. IEEE, 2017.
+ [37] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.
+ [38] Changjian Shui, Fan Zhou, Christian Gagné, and Boyu Wang. Deep active learning: Unified and principled method for query and training. In International Conference on Artificial Intelligence and Statistics, pages 1308-1318. PMLR, 2020.
+ [39] Oriane Simeoni, Mateusz Budnik, Yannis Avrithis, and Guillaume Gravier. Rethinking deep active learning: Using unlabeled data at model training. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 1220-1227. IEEE, 2021.
+ [40] Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational adversarial active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5972-5981, 2019.
+ [41] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In European Conference on Computer Vision, pages 268-285. Springer, 2020.
+ [42] Zengmao Wang, Bo Du, Lefei Zhang, and Liangpei Zhang. A batch-mode active learning framework by querying discriminative and representative samples for hyperspectral image classification. Neurocomputing, 179:88-100, 2016.
+ [43] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in data subset selection and active learning. In International Conference on Machine Learning, pages 1954-1963. PMLR, 2015.
+ [44] Daphna Weinshall, Hynek Hermansky, Alon Zweig, Jie Luo, Holly Jimison, Frank Ohl, and Misha Pavel. Beyond novelty detection: Incongruent events, when general and specific classifiers disagree. Advances in Neural Information Processing Systems, 21, 2008.
+ [45] Ziting Wen, Oscar Pizarro, and Stefan Williams. Active self-semi-supervised learning for few labeled samples fast training. arXiv preprint arXiv:2203.04560, 2022.
+ [46] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113-127, 2015.
+ [47] Changchang Yin, Buyue Qian, Shilei Cao, Xiaoyu Li, Jishang Wei, Qinghua Zheng, and Ian Davidson. Deep similarity-based batch mode active learning with exploration-exploitation. In 2017 IEEE International Conference on Data Mining (ICDM), pages 575-584. IEEE, 2017.
+ [48] Donggeun Yoo and In So Kweon. Learning loss for active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 93-102, 2019.
+ [49] Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, and Chao Zhang. Cold-start data selection for few-shot language model fine-tuning: A prompt-based uncertainty propagation approach. arXiv preprint arXiv:2209.06995, 2022.
+ [50] Michelle Yuan, Hsuan-Tien Lin, and Jordan L. Boyd-Graber. Cold-start active learning through self-supervised language modeling. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7935-7948. Association for Computational Linguistics, 2020.
+ [51] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. CoRR, abs/2110.08263, 2021.
+ [52] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021.
+ [53] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018.
+ [54] Fedor Zhdanov. Diverse mini-batch active learning. arXiv preprint arXiv:1901.05954, 2019.
+ [55] Yu Zhu, Jinghao Lin, Shibi He, Beidou Wang, Ziyu Guan, Haifeng Liu, and Deng Cai. Addressing the item cold-start problem by attribute-driven active learning. IEEE Trans. Knowl. Data Eng., 32(4):631-644, 2020. doi: 10.1109/TKDE.2019.2891530.
+
+ # Checklist
+
+ 1. For all authors...
+
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
+ (b) Did you describe the limitations of your work? [Yes] Throughout the entire paper.
+ (c) Did you discuss any potential negative societal impacts of your work? [No] Irrelevant for this work.
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
+
+ 2. If you are including theoretical results...
+
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes] Section 2.1
+ (b) Did you include complete proofs of all theoretical results? [Yes] Section 2 and App. B
+
+ 3. If you ran experiments...
+
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We included the necessary information for reproducibility. Furthermore, the code will be published upon acceptance.
+ (b) Did you specify all the training details (e.g., data splits, hyper-parameters, how they were chosen)? [Yes]
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
+
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
+
+ (a) If your work uses existing assets, did you cite the creators? [Yes] Section 4.1
+ (b) Did you mention the license of the assets? [Yes] All assets are publicly available.
+ (c) Did you include any new assets either in the supplemental material or as a URL? [No]
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] The data is publicly available.
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] Irrelevant.
+
+ 5. If you used crowdsourcing or conducted research with human subjects...
+
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
activelearningthroughacoveringlens/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8174d0d9bd50629905581c00b533b97e23b75b88e91f53c7bc1cc72cb3aa83cf
+ size 409984
activelearningthroughacoveringlens/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02a6b20bda1ed85ee2c9eba7d02b02cba380b2700420e878eaa3a59ed9efb8e9
+ size 551356
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33563bb219a5c98696ee1cc787e53e262cc5f02777b25662be6cff50f0620d67
+ size 93181
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:140cc2b39c53d4a5588998f0fca3b28ae099e1c823e50d723a8a65d943b7dd26
+ size 125778
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/e9178624-4230-484c-a4e9-8608b7b59f16_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65141c68c9339c7b2178942ea50f9e50c82ebe39b5bdbb4290fa4a79aab3d0fc
+ size 361499
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/full.md ADDED
@@ -0,0 +1,363 @@
+ # Active Learning with Neural Networks: Insights from Nonparametric Statistics
+
+ Yinglun Zhu
+ Department of Computer Sciences
+ University of Wisconsin-Madison
+ Madison, WI 53706
+ yinglun@cs.wisc.edu
+
+ Robert Nowak
+ Department of Electrical and Computer Engineering
+ University of Wisconsin-Madison
+ Madison, WI 53706
+ rdnowak@wisc.edu
+
+ # Abstract
+
+ Deep neural networks have great representation power, but typically require large numbers of training examples. This motivates deep active learning methods that can significantly reduce the amount of labeled training data. Empirical successes of deep active learning have been recently reported in the literature, however, rigorous label complexity guarantees of deep active learning have remained elusive. This constitutes a significant gap between theory and practice. This paper tackles this gap by providing the first near-optimal label complexity guarantees for deep active learning. The key insight is to study deep active learning from the nonparametric classification perspective. Under standard low noise conditions, we show that active learning with neural networks can provably achieve the minimax label complexity, up to disagreement coefficient and other logarithmic terms. When equipped with an abstention option, we further develop an efficient deep active learning algorithm that achieves polylog $\left(\frac{1}{\varepsilon}\right)$ label complexity, without any low noise assumptions. We also provide extensions of our results beyond the commonly studied Sobolev/Hölder spaces and develop label complexity guarantees for learning in Radon BV<sup>2</sup> spaces, which have recently been proposed as natural function spaces associated with neural networks.
+
+ # 1 Introduction
+
+ We study active learning with neural network hypothesis classes, sometimes known as deep active learning. An active learning agent proceeds by selecting the most informative data points to label: the goal of active learning is to achieve the same accuracy achievable by passive learning, but with far fewer label queries (Settles, 2009; Hanneke, 2014). When the hypothesis class is a set of neural networks, the learner further benefits from the representation power of deep neural networks, which has driven the successes of passive learning in the past decade (Krizhevsky et al., 2012; LeCun et al., 2015). With these added benefits, deep active learning has become a popular research area, with empirical successes observed in many recent papers (Sener and Savarese, 2018; Ash et al., 2019; Citovsky et al., 2021; Ash et al., 2021; Kothawade et al., 2021; Emam et al., 2021; Ren et al., 2021). However, due to the difficulty of analyzing a set of neural networks, rigorous label complexity guarantees for deep active learning have remained largely elusive.
+
+ To the best of our knowledge, only two papers (Karzand and Nowak, 2020; Wang et al., 2021) have attempted to theoretically quantify active learning gains with neural networks. While they provide insightful views, both works have their limitations. The guarantees in Karzand and Nowak (2020) only apply in the $1d$ case, where data points are uniformly sampled from $[0,1]$ and labeled by a well-separated piece-wise constant function in a noise-free way (i.e., without any labeling noise). Wang et al. (2021) study deep active learning by linearizing the neural network at its random initialization and then analyzing it as a linear function; moreover, as the authors agree, their error bounds and label complexity guarantees can in fact be vacuous in certain cases. Thus, it is fair to say that, up to now, researchers have not identified cases where deep active learning is provably near minimax optimal (or even has provably non-vacuous guarantees), which constitutes a significant gap between theory and practice.
+
+ In this paper, we bridge this gap by providing the first near-optimal label complexity guarantees for deep active learning. We obtain insights from the nonparametric setting where the conditional probability (of taking a positive label) is assumed to be a smooth function (Tsybakov, 2004; Audibert and Tsybakov, 2007). Previous nonparametric active learning algorithms proceed by partitioning the action space into exponentially many sub-regions (e.g., partitioning the unit cube $[0,1]^d$ into $\varepsilon^{-d}$ sub-cubes each with volume $\varepsilon^d$ ), and then conducting local mean (or some higher-order statistics) estimation within each sub-region (Castro and Nowak, 2008; Minsker, 2012; Locatelli et al., 2017, 2018; Shekhar et al., 2021; Kpotufe et al., 2021). We show that, with an appropriately chosen set of neural networks that globally approximates the smooth regression function, one can in fact recover the minimax label complexity for active learning, up to disagreement coefficient (Hanneke, 2007, 2014) and other logarithmic factors. Our results are established by (i) identifying the "right tools" to study neural networks (ranging from approximation results (Yarotsky, 2017, 2018) to complexity measures of neural networks (Bartlett et al., 2019)), and (ii) developing novel extensions of agnostic active learning algorithms (Balcan et al., 2006; Hanneke, 2007, 2014) to work with a set of neural networks.
+
+ While matching the minimax label complexity in nonparametric active learning is exciting, such minimax results scale as $\Theta(\mathrm{poly}(\frac{1}{\varepsilon}))$ (Castro and Nowak, 2008; Locatelli et al., 2017) and do not resemble what is practically observed in deep active learning: A fairly accurate neural network classifier can be obtained by training with only a few labeled data points. Inspired by recent results in parametric active learning with abstention (Puchkin and Zhivotovskiy, 2021; Zhu and Nowak, 2022), we develop an oracle-efficient algorithm showing that deep active learning provably achieves polylog $(\frac{1}{\varepsilon})$ label complexity when equipped with an abstention option (Chow, 1970). Our algorithm not only achieves an exponential saving in label complexity (without any low noise assumptions), but is also highly practical: In real-world scenarios such as medical imaging, it makes more sense for the classifier to abstain from making predictions on hard examples (e.g., those that are close to the boundary), and ask medical experts to make the judgments.
+
+ # 1.1 Problem setting
+
+ Let $\mathcal{X}$ denote the instance space and $\mathcal{Y}$ denote the label space. We focus on the binary classification problem where $\mathcal{Y} := \{+1, -1\}$ . The joint distribution over $\mathcal{X} \times \mathcal{Y}$ is denoted as $\mathcal{D}_{\mathcal{X}\mathcal{Y}}$ . We use $\mathcal{D}_{\mathcal{X}}$ to denote the marginal distribution over the instance space $\mathcal{X}$ , and use $\mathcal{D}_{\mathcal{Y}|x}$ to denote the conditional distribution of $\mathcal{Y}$ with respect to any $x \in \mathcal{X}$ . We consider the standard active learning setup where $x \sim \mathcal{D}_{\mathcal{X}}$ but its label $y \sim \mathcal{D}_{\mathcal{Y}|x}$ is only observed after issuing a label query. We define $\eta(x) := \mathbb{P}_{y \sim \mathcal{D}_{\mathcal{Y}|x}}(y = +1)$ as the conditional probability of taking a positive label. The Bayes optimal classifier $h^{\star}$ can thus be expressed as $h^{\star}(x) := \mathrm{sign}(2\eta(x) - 1)$ . For any classifier $h: \mathcal{X} \to \mathcal{Y}$ , its (standard) error is calculated as $\mathrm{err}(h) := \mathbb{P}_{(x,y) \sim \mathcal{D}_{\mathcal{X}\mathcal{Y}}} (h(x) \neq y)$ ; and its (standard) excess error is defined as $\mathrm{excess}(h) := \mathrm{err}(h) - \mathrm{err}(h^{\star})$ . Our goal is to learn an accurate classifier with a small number of label queries.
+
+ The nonparametric setting. We consider the nonparametric setting where the conditional probability $\eta$ is characterized by a smooth function. Fix any $\alpha \in \mathbb{N}_{+}$ ; the Sobolev norm of a function $f:\mathcal{X}\to \mathbb{R}$ is defined as $\| f\|_{\mathcal{W}^{\alpha ,\infty}}\coloneqq \max_{\bar{\alpha},|\bar{\alpha} |\leq \alpha}\operatorname{ess\,sup}_{x\in \mathcal{X}}|\mathsf{D}^{\bar{\alpha}}f(x)|$ , where $\bar{\alpha} = (\alpha_{1},\ldots ,\alpha_{d})$ is a multi-index with $|\bar{\alpha} | = \sum_{i = 1}^{d}\alpha_{i}$ , and $\mathsf{D}^{\bar{\alpha}}f$ denotes the standard $\bar{\alpha}$ -th weak derivative of $f$ . The unit ball in the Sobolev space is defined as $\mathcal{W}_1^{\alpha ,\infty}(\mathcal{X})\coloneqq \{f:\| f\|_{\mathcal{W}^{\alpha ,\infty}}\leq 1\}$ . Following the convention of nonparametric active learning (Castro and Nowak, 2008; Minsker, 2012; Locatelli et al., 2017, 2018; Shekhar et al., 2021; Kpotufe et al., 2021), we assume $\mathcal{X} = [0,1]^d$ and $\eta \in \mathcal{W}_1^{\alpha ,\infty}(\mathcal{X})$ (except in Section 4).
+
+ Neural Networks. We consider feedforward neural networks with the Rectified Linear Unit (ReLU) activation function, defined as $\mathsf{ReLU}(x)\coloneqq \max \{x,0\}$ . Each neural network $f_{\mathrm{dnn}}:\mathcal{X}\to \mathbb{R}$ consists of several input units (which correspond to the covariates of $x\in \mathcal{X}$ ), one output unit (which corresponds to the prediction in $\mathbb{R}$ ), and multiple hidden computational units. Each hidden computational unit takes inputs $\{\bar{x}_i\}_{i=1}^N$ (which are outputs from previous layers) and performs the computation $\mathrm{ReLU}(\sum_{i=1}^{N} w_i \bar{x}_i + b)$ with adjustable parameters $\{w_i\}_{i=1}^N$ and $b$ ; the output unit performs the same operation, but without the ReLU nonlinearity. We use $W$ to denote the total number of parameters of a neural network, and $L$ to denote the depth of the neural network.
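A minimal concrete instance of such a network, with hypothetical layer widths chosen purely for illustration, can be written in a few lines; it shows where the parameter count $W$ and depth $L$ come from:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ReLUNet:
    """Feedforward ReLU network f_dnn: R^d -> R. Hidden units compute
    ReLU(w . x + b); the output unit omits the ReLU nonlinearity."""
    def __init__(self, widths, rng):
        # widths = [input dim, hidden widths..., 1], e.g. [2, 8, 8, 1]
        self.layers = [
            (rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(widths[:-1], widths[1:])
        ]

    def __call__(self, x):
        for i, (w, b) in enumerate(self.layers):
            x = x @ w + b
            if i < len(self.layers) - 1:   # no ReLU at the output unit
                x = relu(x)
        return x

    @property
    def num_params(self):                  # W: total number of parameters
        return sum(w.size + b.size for w, b in self.layers)

net = ReLUNet([2, 8, 8, 1], np.random.default_rng(0))
W = net.num_params    # (2*8 + 8) + (8*8 + 8) + (8*1 + 1) = 105
L = len(net.layers)   # depth: 3 affine layers
```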
+
+ # 1.2 Contributions and paper organization
+
+ Neural networks are known to be universal approximators (Cybenko, 1989; Hornik, 1991). In this paper, we argue that, in both passive and active regimes, this universal approximability makes neural networks "universal classifiers" for classification problems: With an appropriately chosen set of neural networks, one can recover known minimax rates (up to disagreement coefficients in the active setting) in the rich nonparametric regimes. We provide informal statements of our main results in the sequel, with detailed statements and associated definitions/algorithms deferred to later sections.
+
+ In Section 2, we analyze the label complexity of deep active learning under the standard Tsybakov noise condition with smoothness parameter $\beta \geq 0$ (Tsybakov, 2004). Let $\mathcal{H}_{\mathrm{dnn}}$ be an appropriately chosen set of neural network classifiers and denote $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon)$ as the disagreement coefficient (Hanneke, 2007, 2014) at level $\varepsilon$ . We develop the following label complexity guarantees for deep active learning.
+
+ Theorem 1 (Informal). There exists an algorithm that returns a neural network classifier $\widehat{h} \in \mathcal{H}_{\mathrm{dnn}}$ with excess error $\widetilde{O}(\varepsilon)$ after querying $\widetilde{O}(\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon^{\frac{\beta}{1 + \beta}}) \cdot \varepsilon^{-\frac{d + 2\alpha}{\alpha + \alpha\beta}})$ labels.
+
+ The label complexity presented in Theorem 1 matches the active learning lower bound $\Omega (\varepsilon^{-\frac{d + 2\alpha}{\alpha + \alpha\beta}})$ (Locatelli et al., 2017) up to the dependence on the disagreement coefficient (and other logarithmic factors). Since $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon)\leq \varepsilon^{-1}$ by definition, the label complexity presented in Theorem 1 is never worse than the passive learning rates $\widetilde{\Theta} (\varepsilon^{-\frac{d + 2\alpha + \alpha\beta}{\alpha + \alpha\beta}})$ (Audibert and Tsybakov, 2007). We also discover conditions under which the disagreement coefficient with respect to a set of neural network classifiers can be properly bounded, i.e., $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon) = o(\varepsilon^{-1})$ (implying strict improvement over passive learning) and $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon) = o(1)$ (implying matching active learning lower bound).
+
59
+ In Section 3, we develop label complexity guarantees for deep active learning when an additional abstention option is allowed (Chow, 1970; Puchkin and Zhivotovsky, 2021; Zhu and Nowak, 2022). Suppose a cost (e.g. 0.49) that is marginally smaller than random guessing (which has expected cost 0.5) is incurred whenever the classifier abstains from making a predication, we develop the following label complexity guarantees for deep active learning.
60
+
61
Theorem 2 (Informal). There exists an efficient algorithm that constructs a neural network classifier $\widehat{h}_{\mathrm{dnn}}$ with Chow's excess error $\widetilde{O}(\varepsilon)$ after querying $\mathrm{polylog}(\frac{1}{\varepsilon})$ labels.

The above $\mathrm{polylog}\left(\frac{1}{\varepsilon}\right)$ label complexity bound is achieved without any low noise assumptions. Such exponential label savings theoretically justify the strong empirical performance of deep active learning observed in practice (e.g., in Sener and Savarese (2018)): it suffices to label a few data points to achieve a high accuracy level. Moreover, apart from an initialization step, our algorithm (Algorithm 4) developed for Theorem 2 can be efficiently implemented in $\widetilde{O}(\varepsilon^{-1})$ time, given a convex loss regression oracle over an appropriately chosen set of neural networks; in practice, the regression oracle can be approximated by running stochastic gradient descent.

Technical contributions. Besides identifying the "right tools" (ranging from approximation results (Yarotsky, 2017, 2018) to complexity analyses (Bartlett et al., 2019)) to analyze deep active learning, our theoretical guarantees are powered by novel extensions of active learning algorithms under neural network approximations. In particular, we handle approximation error in active learning under Tsybakov noise, and identify conditions that greatly relax the approximation requirement in the learning-with-abstention setup; we also analyze the disagreement coefficient, both classifier-based and value function-based, with respect to a set of neural networks. These analyses together lead to our main results for deep active learning (e.g., Theorem 1 and Theorem 2). More generally, we establish a bridge between approximation theory and active learning; we provide these general guarantees in Appendix B (under Tsybakov noise) and Appendix D (with the abstention option), which may be of independent interest. Benefiting from these generic algorithms and guarantees, in Section 4, we extend our results to learning smooth functions in the Radon BV$^2$ space (Ongie et al., 2020; Parhi and Nowak, 2021, 2022a,b; Unser, 2022), which has recently been proposed as a natural space to analyze neural networks.

# 1.3 Related work

Active learning concerns learning accurate classifiers without extensive human labeling. One of the earliest active learning methods dates back to the CAL algorithm proposed by Cohn et al. (1994), which set the cornerstone for disagreement-based active learning. Since then, a long line of work has been developed, either working directly with a set of classifiers (Balcan et al., 2006; Hanneke, 2007; Dasgupta et al., 2007; Beygelzimer et al., 2009, 2010; Huang et al., 2015; Cortes et al., 2019) or working with a set of regression functions (Krishnamurthy et al., 2017, 2019). These works mainly focus on the parametric regime (e.g., learning with a set of linear classifiers), and their label complexities rely on the boundedness of the so-called disagreement coefficient (Hanneke, 2007, 2014; Friedman, 2009). Active learning in the nonparametric regime has been analyzed in Castro and Nowak (2008); Minsker (2012); Locatelli et al. (2017, 2018); Kpotufe et al. (2021). These algorithms rely on partitioning the input space $\mathcal{X} \subseteq [0,1]^d$ into exponentially (in dimension) many small cubes, and then conducting local mean (or some higher-order statistic) estimation within each small cube.

It is well known that, in the worst case, active learning exhibits no label complexity gains over its passive counterpart (Kääriäinen, 2006). To bypass these worst-case scenarios, active learning has commonly been analyzed under the so-called Tsybakov low noise conditions (Tsybakov, 2004), under which active learning has been shown to be strictly superior to passive learning in terms of label complexity (Castro and Nowak, 2008; Locatelli et al., 2017). Besides analyzing active learning under favorable low noise assumptions, researchers have more recently considered active learning with an abstention option, analyzing its label complexity under Chow's error (Chow, 1970). In particular, Puchkin and Zhivotovskiy (2021); Zhu and Nowak (2022) develop active learning algorithms with $\mathrm{polylog}\left(\frac{1}{\varepsilon}\right)$ label complexity when analyzed under Chow's excess error. Shekhar et al. (2021) study nonparametric active learning under a different notion of Chow's excess error, and propose algorithms with $\mathrm{poly}\left(\frac{1}{\varepsilon}\right)$ label complexity; their algorithms follow procedures similar to those of the partition-based nonparametric active learning algorithms (e.g., Minsker (2012); Locatelli et al. (2017)).

Inspired by the success of deep learning in the passive regime, active learning with neural networks has been extensively explored in recent years (Sener and Savarese, 2018; Ash et al., 2019; Citovsky et al., 2021; Ash et al., 2021; Kothawade et al., 2021; Emam et al., 2021; Ren et al., 2021). Strong empirical performance is observed in these papers; however, rigorous label complexity guarantees have largely remained elusive (except in Karzand and Nowak (2020); Wang et al. (2021), with limitations discussed before). We bridge the gap between practice and theory by providing the first near-optimal label complexity guarantees for deep active learning. Our results are built upon approximation results for deep neural networks (Yarotsky, 2017, 2018; Parhi and Nowak, 2022b) and VC/pseudo dimension analyses of neural networks with given structures (Bartlett et al., 2019).

# 2 Label complexity of deep active learning

We analyze the label complexity of deep active learning in this section. We first introduce the Tsybakov noise condition in Section 2.1, and then identify the "right tools" to analyze classification problems with neural network classifiers in Section 2.2 (where we also provide passive learning guarantees). We establish our main active learning guarantees in Section 2.3.

# 2.1 Tsybakov noise condition

It is well known that active learning exhibits no label complexity gains over its passive counterpart without additional low noise assumptions (Kääriäinen, 2006). We next introduce the Tsybakov low noise condition (Tsybakov, 2004), which has been extensively analyzed in the active learning literature.

Definition 1 (Tsybakov noise). A distribution $\mathcal{D}_{\mathcal{X}\mathcal{Y}}$ satisfies the Tsybakov noise condition with parameter $\beta \geq 0$ and a universal constant $c \geq 1$ if, $\forall \tau > 0$,

$$
\mathbb{P}_{x \sim \mathcal{D}_{\mathcal{X}}}(|\eta(x) - 1/2| \leq \tau) \leq c \tau^{\beta}.
$$

The case $\beta = 0$ corresponds to the general case without any low noise condition, where no active learning algorithm can outperform its passive counterpart (Audibert and Tsybakov, 2007; Locatelli et al., 2017). We use $\mathcal{P}(\alpha, \beta)$ to denote the set of distributions satisfying: (i) the smoothness conditions introduced in Section 1.1 with parameter $\alpha > 0$; and (ii) the Tsybakov low noise condition (i.e., Definition 1) with parameter $\beta \geq 0$. We assume $\mathcal{D}_{\mathcal{X}\mathcal{Y}} \in \mathcal{P}(\alpha, \beta)$ in the rest of Section 2. As in Castro and Nowak (2008); Hanneke (2014), we assume knowledge of the noise/smoothness parameters.

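As a quick numerical illustration (ours, not from the paper), the following Python sketch checks Definition 1 on a synthetic example with $\eta(x) = x$ and $x \sim \mathrm{Unif}[0,1]$: here $\mathbb{P}(|\eta(x) - 1/2| \leq \tau) = \min(2\tau, 1)$, so the condition holds with $\beta = 1$ and $c = 2$.

```python
import random

def tsybakov_mass(eta, xs, tau):
    """Empirical estimate of P_x(|eta(x) - 1/2| <= tau)."""
    return sum(abs(eta(x) - 0.5) <= tau for x in xs) / len(xs)

random.seed(0)
xs = [random.random() for _ in range(100_000)]   # x ~ Unif[0, 1]
eta = lambda x: x                                # P(|eta - 1/2| <= tau) = 2 * tau

# Definition 1 holds with beta = 1 and c = 2 for this distribution:
masses = {tau: tsybakov_mass(eta, xs, tau) for tau in (0.05, 0.1, 0.2)}
```

Larger $\beta$ means less probability mass near the decision boundary $\eta(x) = 1/2$, which is exactly what makes aggressive querying pay off.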
# 2.2 Approximation and expressiveness of neural networks

Neural networks are known to be universal approximators (Cybenko, 1989; Hornik, 1991): for any continuous function $g: \mathcal{X} \to \mathbb{R}$ and any error tolerance $\kappa > 0$, there exists a large enough neural network $f_{\mathrm{dnn}}$ such that $\| f_{\mathrm{dnn}} - g \|_{\infty} \coloneqq \sup_{x \in \mathcal{X}} |f_{\mathrm{dnn}}(x) - g(x)| \leq \kappa$. Recently, non-asymptotic approximation rates by ReLU neural networks have been developed for smooth functions in the Sobolev space, which we restate in the following.

Theorem 3 (Yarotsky (2017)). Fix any $\kappa > 0$. For any $f^{\star} = \eta \in \mathcal{W}_1^{\alpha, \infty}([0,1]^d)$, there exists a neural network $f_{\mathrm{dnn}}$ with $W = O(\kappa^{-\frac{d}{\alpha}} \log \frac{1}{\kappa})$ total parameters arranged in $L = O(\log \frac{1}{\kappa})$ layers such that $\| f_{\mathrm{dnn}} - f^{\star}\|_{\infty} \leq \kappa$.

The architecture of the neural network $f_{\mathrm{dnn}}$ appearing in the above theorem depends only on the smooth function space $\mathcal{W}_1^{\alpha, \infty}([0, 1]^d)$, but is otherwise independent of the true regression function $f^{\star}$; see Yarotsky (2017) for details. Let $\mathcal{F}_{\mathrm{dnn}}$ denote the set of neural network regression functions with this architecture. We construct a set of neural network classifiers by thresholding the regression functions at $\frac{1}{2}$, i.e., $\mathcal{H}_{\mathrm{dnn}} := \{h_f := \operatorname{sign}(2f(x) - 1) : f \in \mathcal{F}_{\mathrm{dnn}}\}$. The next result concerns the expressiveness of the neural network classifiers, in terms of a well-known complexity measure: the VC dimension (Vapnik and Chervonenkis, 1971).

Theorem 4 (Bartlett et al. (2019)). Let $\mathcal{H}_{\mathrm{dnn}}$ be a set of neural network classifiers of the same architecture, with $W$ parameters arranged in $L$ layers. We then have

$$
\Omega(WL \log(W/L)) \leq \operatorname{VCdim}(\mathcal{H}_{\mathrm{dnn}}) \leq O(WL \log(W)).
$$

With these tools, we can construct a set of neural network classifiers $\mathcal{H}_{\mathrm{dnn}}$ such that (i) the best in-class classifier $\bar{h} \in \mathcal{H}_{\mathrm{dnn}}$ has small excess error, and (ii) $\mathcal{H}_{\mathrm{dnn}}$ has a well-controlled VC dimension governed by the smoothness/noise parameters. More specifically, we have the following proposition.

Proposition 1. Suppose $\mathcal{D}_{\mathcal{X}\mathcal{Y}} \in \mathcal{P}(\alpha, \beta)$. One can construct a set of neural network classifiers $\mathcal{H}_{\mathrm{dnn}}$ such that the following two properties hold simultaneously:

$$
\inf_{h \in \mathcal{H}_{\mathrm{dnn}}} \operatorname{err}(h) - \operatorname{err}(h^{\star}) = O(\varepsilon) \quad \text{and} \quad \operatorname{VCdim}(\mathcal{H}_{\mathrm{dnn}}) = \widetilde{O}(\varepsilon^{-\frac{d}{\alpha(1 + \beta)}}).
$$

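To see how the pieces fit together, here is a back-of-envelope Python sketch (ours; constants and some log factors dropped, so purely illustrative) of the sizing behind Proposition 1: choose the approximation level $\kappa = \varepsilon^{1/(1+\beta)}$, plug it into Theorem 3 for the parameter count, and into Theorem 4 for the VC dimension bound.

```python
import math

def dnn_size_for_accuracy(eps, d, alpha, beta):
    """Illustrative sizing from Theorems 3-4 with constants/log factors dropped.

    Under Tsybakov noise with parameter beta, an L_inf approximation level
    kappa = eps**(1/(1+beta)) induces excess error O(kappa**(1+beta)) = O(eps).
    """
    kappa = eps ** (1.0 / (1.0 + beta))
    W = kappa ** (-d / alpha)            # parameter count, Theorem 3
    L = math.log(1.0 / kappa)            # depth, Theorem 3
    vc = W * L * math.log(max(W, 2.0))   # VCdim upper bound, Theorem 4
    return W, L, vc

# Example: d = 2, alpha = 1, beta = 1, eps = 1e-3.
W, L, vc = dnn_size_for_accuracy(1e-3, d=2, alpha=1.0, beta=1.0)
# Up to log factors, W tracks eps**(-d / (alpha * (1 + beta))) as in Proposition 1.
```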
With the approximation results obtained above, to learn a classifier with $O(\varepsilon)$ excess error, one only needs to focus on a set of neural networks $\mathcal{H}_{\mathrm{dnn}}$ with a well-controlled VC dimension. As a warm-up, we first analyze the label complexity of such a procedure in the passive regime (with fast rates).

Theorem 5. Suppose $\mathcal{D}_{\mathcal{X}\mathcal{Y}} \in \mathcal{P}(\alpha, \beta)$. Fix any $\varepsilon, \delta > 0$. Let $\mathcal{H}_{\mathrm{dnn}}$ be the set of neural network classifiers constructed in Proposition 1. With $n = \widetilde{O}(\varepsilon^{-\frac{d + 2\alpha + \alpha\beta}{\alpha(1 + \beta)}})$ i.i.d. sampled points, with probability at least $1 - \delta$, the empirical risk minimizer $\widehat{h} \in \mathcal{H}_{\mathrm{dnn}}$ achieves excess error $O(\varepsilon)$.

The label complexity obtained in Theorem 5 matches, up to logarithmic factors, the passive learning lower bound $\Omega\left(\varepsilon^{-\frac{d + 2\alpha + \alpha\beta}{\alpha(1 + \beta)}}\right)$ established in Audibert and Tsybakov (2007), indicating that our proposed learning procedure with a set of neural networks is near minimax optimal.

# 2.3 Deep active learning and guarantees

The passive learning procedure presented in the previous section treats every data point equally, i.e., it requests the label of every data point. Active learning reduces the label complexity by only querying labels of data points that are "more important". We present our deep active learning results in this section. Our algorithm (Algorithm 1) is inspired by RobustCAL (Balcan et al., 2006; Hanneke, 2007, 2014) and the seminal CAL algorithm (Cohn et al., 1994); we call our algorithm NeuralCAL to emphasize that it works with a set of neural networks.

For any accuracy level $\varepsilon > 0$, NeuralCAL first initializes a set of neural network classifiers $\mathcal{H}_0 := \mathcal{H}_{\mathrm{dnn}}$ such that (i) the best in-class classifier $\check{h} := \arg\min_{h \in \mathcal{H}_{\mathrm{dnn}}} \operatorname{err}(h)$ has excess error at most $O(\varepsilon)$, and (ii) the VC dimension of $\mathcal{H}_{\mathrm{dnn}}$ is upper bounded by $\widetilde{O}\left(\varepsilon^{-\frac{d}{\alpha(1 + \beta)}}\right)$ (see Section 2.2 for more details). NeuralCAL then runs in epochs of geometrically increasing lengths. At the beginning of epoch $m$, based on previously labeled data points, NeuralCAL updates the set of active classifiers $\mathcal{H}_m$ such that, with high probability, the best classifier $\check{h}$ remains uneliminated. Within each epoch $m$, NeuralCAL only queries the label $y$ of a data point $x$ if it lies in the region of disagreement with respect to the current active set of classifiers $\mathcal{H}_m$, i.e., $\mathrm{DIS}(\mathcal{H}_m) := \{x \in \mathcal{X} : \exists h_1, h_2 \in \mathcal{H}_m \text{ s.t. } h_1(x) \neq h_2(x)\}$. NeuralCAL returns any classifier $\widehat{h} \in \mathcal{H}_M$ that remains uneliminated at the final epoch $M$.

# Algorithm 1 NeuralCAL

Input: Accuracy level $\varepsilon \in (0,1)$, confidence level $\delta \in (0,1)$.
1: Let $\mathcal{H}_{\mathrm{dnn}}$ be a set of neural network classifiers constructed as in Proposition 1.
2: Define $T := \varepsilon^{-\frac{2 + \beta}{1 + \beta}} \cdot \mathrm{VCdim}(\mathcal{H}_{\mathrm{dnn}})$, $M := \lceil \log_2 T \rceil$, $\tau_m := 2^m$ for $m \geq 1$, and $\tau_0 := 0$.
3: Define $\rho_{m} := O\left(\left(\frac{\mathrm{VCdim}(\mathcal{H}_{\mathrm{dnn}}) \cdot \log(\tau_{m-1}) \cdot \log(M / \delta)}{\tau_{m-1}}\right)^{\frac{1 + \beta}{2 + \beta}}\right)$ for $m \geq 2$, and $\rho_{1} := 1$.
4: Define $\widehat{R}_m(h) := \sum_{t=1}^{\tau_{m-1}} Q_t \mathbb{1}(h(x_t) \neq y_t)$ with the convention that $\sum_{t=1}^{0} \ldots = 0$.
5: Initialize $\mathcal{H}_0 := \mathcal{H}_{\mathrm{dnn}}$.
6: for epoch $m = 1, 2, \ldots, M$ do
7: Update active set $\mathcal{H}_m := \left\{h \in \mathcal{H}_{m-1} : \widehat{R}_m(h) \leq \inf_{h' \in \mathcal{H}_{m-1}} \widehat{R}_m(h') + \tau_{m-1} \cdot \rho_m\right\}$.
8: if epoch $m = M$ then
9: Return any classifier $\widehat{h} \in \mathcal{H}_M$.
10: for time $t = \tau_{m-1} + 1, \dots, \tau_m$ do
11: Observe $x_{t} \sim \mathcal{D}_{\mathcal{X}}$. Set $Q_{t} := \mathbb{1}(x_{t} \in \mathrm{DIS}(\mathcal{H}_{m}))$.
12: if $Q_{t} = 1$ then
13: Query the label $y_{t}$ of $x_{t}$.

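To make the control flow concrete, here is a self-contained toy sketch (ours, not the paper's implementation): a finite class of one-dimensional threshold classifiers stands in for $\mathcal{H}_{\mathrm{dnn}}$, the elimination threshold is a crude deviation bound rather than the $\tau_{m-1} \cdot \rho_m$ of Algorithm 1, and labels are queried only inside $\mathrm{DIS}(\mathcal{H}_m)$.

```python
import math
import random

random.seed(1)

# Toy stand-in for H_dnn: thresholds h_t(x) = +1 iff x >= t.
thresholds = [i / 200 for i in range(201)]
h = lambda t, x: 1 if x >= t else -1

eta = lambda x: 0.1 if x < 0.4 else 0.9     # Bayes classifier thresholds at 0.4

def sample():
    x = random.random()                     # x ~ Unif[0, 1]
    return x, (1 if random.random() < eta(x) else -1)

active = set(thresholds)                    # H_m
errors = {t: 0 for t in thresholds}         # queried-error counts, like R_hat
queries = total = 0

for m in range(1, 11):                      # epochs of doubling length
    for _ in range(2 ** m):
        total += 1
        x, y = sample()
        lo, hi = min(active), max(active)
        if lo <= x < hi:                    # x in DIS(H_m) for threshold classifiers
            queries += 1
            for t in active:
                errors[t] += (h(t, x) != y)
    # Eliminate classifiers whose queried error is far above the best one.
    best = min(errors[t] for t in active)
    slack = math.sqrt(total * math.log(total + 2))
    active = {t for t in active if errors[t] <= best + slack}
```

Because every surviving classifier agrees outside $\mathrm{DIS}(\mathcal{H}_m)$, skipping those labels does not bias the comparison between classifiers; this invariant is what CAL-style methods exploit.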
Since NeuralCAL only queries labels of data points lying in the region of disagreement, its label complexity should intuitively be related to how fast the region of disagreement shrinks. More formally, the rate of collapse of (the probability measure of) the region of disagreement is captured by the (classifier-based) disagreement coefficient (Hanneke, 2007, 2014), which we introduce next.

Definition 2 (Classifier-based disagreement coefficient). For any $\varepsilon_0 > 0$ and classifier $h \in \mathcal{H}$, the classifier-based disagreement coefficient of $h$ is defined as

$$
\theta_{\mathcal{H}, h}\left(\varepsilon_{0}\right) := \sup_{\varepsilon > \varepsilon_{0}} \frac{\mathbb{P}_{x \sim \mathcal{D}_{\mathcal{X}}}\left(x \in \mathrm{DIS}\left(\mathcal{B}_{\mathcal{H}}(h, \varepsilon)\right)\right)}{\varepsilon} \vee 1,
$$

where $\mathcal{B}_{\mathcal{H}}(h, \varepsilon) := \{g \in \mathcal{H} : \mathbb{P}_{x \sim \mathcal{D}_{\mathcal{X}}}(g(x) \neq h(x)) \leq \varepsilon\}$. We also define $\theta_{\mathcal{H}}(\varepsilon_0) := \sup_{h \in \mathcal{H}} \theta_{\mathcal{H}, h}(\varepsilon_0)$.

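For intuition (an illustrative example of ours, not from the paper), the disagreement coefficient has a closed form for one-dimensional thresholds under $\mathrm{Unif}[0,1]$: the ball $\mathcal{B}(h_t, \varepsilon)$ contains exactly the thresholds within $\varepsilon$ of $t$, its disagreement region is $(t - \varepsilon, t + \varepsilon) \cap [0,1]$, and the supremum evaluates to $2$. The sketch below checks this on a grid.

```python
def dis_mass(t, eps):
    """P_x(DIS(B(h_t, eps))) for thresholds h_s(x) = sign(x - s), x ~ Unif[0, 1].

    B(h_t, eps) = {s : |s - t| <= eps}; its disagreement region is the
    interval (t - eps, t + eps) clipped to [0, 1].
    """
    return min(1.0, t + eps) - max(0.0, t - eps)

def disagreement_coefficient(t, eps0, grid=1000):
    sup = 1.0
    for k in range(1, grid + 1):
        eps = eps0 + k * (1.0 - eps0) / grid
        sup = max(sup, dis_mass(t, eps) / eps)
    return sup

theta = disagreement_coefficient(t=0.5, eps0=0.01)   # evaluates to 2.0
```

A constant coefficient like this is the favorable case: the disagreement region shrinks at the same rate as the radius of the ball around $h$.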
The guarantees of NeuralCAL follow from a more general analysis of RobustCAL under approximation. In particular, to achieve fast rates (under Tsybakov noise), previous analyses of RobustCAL require that the Bayes classifier lie in the class (or that a Bernstein condition hold for every $h \in \mathcal{H}$) (Hanneke, 2014). These requirements are stronger than what we have in the case of neural network approximations. Our analysis extends the understanding of RobustCAL under approximation. We defer this general analysis to Appendix B, and present the following guarantees.

Theorem 6. Suppose $\mathcal{D}_{\mathcal{X}\mathcal{Y}} \in \mathcal{P}(\alpha, \beta)$. Fix any $\varepsilon, \delta > 0$. With probability at least $1 - \delta$, Algorithm 1 returns a classifier $\widehat{h} \in \mathcal{H}_{\mathrm{dnn}}$ with excess error $\widetilde{O}(\varepsilon)$ after querying $\widetilde{O}(\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon^{\frac{\beta}{1 + \beta}}) \cdot \varepsilon^{-\frac{d + 2\alpha}{\alpha + \alpha\beta}})$ labels.

We next discuss in detail the label complexity of deep active learning proved in Theorem 6.

- Ignoring the dependence on the disagreement coefficient, the label complexity appearing in Theorem 6 matches, up to logarithmic factors, the lower bound $\Omega\left(\varepsilon^{-\frac{d + 2\alpha}{\alpha + \alpha\beta}}\right)$ for active learning (Locatelli et al., 2017). At the same time, the label complexity appearing in Theorem 6 is never worse than the passive counterpart (i.e., $\widetilde{\Theta}\left(\varepsilon^{-\frac{d + 2\alpha + \alpha\beta}{\alpha(1 + \beta)}}\right)$), since $\theta_{\mathcal{H}_{\mathrm{dnn}}}\left(\varepsilon^{\frac{\beta}{1 + \beta}}\right) \leq \varepsilon^{-\frac{\beta}{1 + \beta}}$.
- We also identify cases where $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon^{\frac{\beta}{1 + \beta}}) = o(\varepsilon^{-\frac{\beta}{1 + \beta}})$, indicating a strict improvement over passive learning (e.g., when $\mathcal{D}_{\mathcal{X}}$ is supported on countably many data points), and where $\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon^{\frac{\beta}{1 + \beta}}) = O(1)$, matching the minimax active learning lower bound (e.g., when $\mathcal{D}_{\mathcal{X}\mathcal{Y}}$ satisfies conditions such as decomposability, defined in Definition 4; see Appendix C.2 for a detailed discussion).

Our algorithm and theorems lead to the following results, which could benefit both the deep active learning and nonparametric learning communities.

- Near minimax optimal label complexity for deep active learning. While empirical successes of deep active learning have been observed, rigorous label complexity analysis remains elusive except for two attempts made in Karzand and Nowak (2020); Wang et al. (2021). The guarantees provided in Karzand and Nowak (2020) only work in very special cases (i.e., data uniformly sampled from [0, 1] and labeled by well-separated piecewise-constant functions in a noise-free way). Wang et al. (2021) study deep active learning in the NTK regime by linearizing the neural network at its random initialization and analyzing it as a linear function; moreover, as the authors acknowledge, their error bounds and label complexity guarantees are vacuous in certain cases. In contrast, our guarantees are minimax optimal, up to the disagreement coefficient and other logarithmic factors, which bridges the gap between theory and practice in deep active learning.
- New perspective on nonparametric learning. Nonparametric learning of smooth functions has mainly been approached by partitioning-based methods (Tsybakov, 2004; Audibert and Tsybakov, 2007; Castro and Nowak, 2008; Minsker, 2012; Locatelli et al., 2017, 2018; Kpotufe et al., 2021): partition the unit cube $[0,1]^d$ into exponentially (in dimension) many sub-cubes and conduct local mean estimation within each sub-cube (which additionally requires a strictly stronger membership querying oracle). Our results show that, in both passive and active settings, one can learn globally with a set of neural networks and achieve near minimax optimal label complexities.

# 3 Deep active learning with abstention: Exponential speedups

While the theoretical guarantees provided in Section 2 are near minimax optimal, the label complexity scales as $\mathrm{poly}\left(\frac{1}{\varepsilon}\right)$, which does not match the strong empirical performance observed in deep active learning. In this section, we fill in this gap by leveraging the idea of abstention, providing a deep active learning algorithm that achieves exponential label savings. We introduce the concepts of abstention and Chow's excess error in Section 3.1, and provide our label complexity guarantees in Section 3.2.

# 3.1 Active learning without low noise conditions

The previous section analyzes active learning under Tsybakov noise, which has been extensively studied in the literature since Castro and Nowak (2008). More recently, promising results have been obtained for active learning under Chow's excess error, but otherwise without any low noise assumption (Puchkin and Zhivotovskiy, 2021; Zhu and Nowak, 2022). We introduce this setting in the following.

Abstention and Chow's error (Chow, 1970). We consider classifiers of the form $\widehat{h}: \mathcal{X} \to \mathcal{Y} \cup \{\bot\}$, where $\bot$ denotes the action of abstention. For any fixed $0 < \gamma < \frac{1}{2}$, Chow's error is defined as

$$
\mathrm{err}_{\gamma}(\widehat{h}) := \mathbb{P}_{(x, y) \sim \mathcal{D}_{\mathcal{X}\mathcal{Y}}}(\widehat{h}(x) \neq y, \widehat{h}(x) \neq \bot) + (1/2 - \gamma) \cdot \mathbb{P}_{(x, y) \sim \mathcal{D}_{\mathcal{X}\mathcal{Y}}}(\widehat{h}(x) = \bot).
$$

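Concretely, the empirical version of this quantity is straightforward to compute; the following sketch (ours, with `None` standing in for $\bot$) evaluates it on a tiny hand-made example.

```python
ABSTAIN = None  # stands in for the abstention action ⊥

def chow_error(preds, ys, gamma):
    """Empirical Chow's error: cost 1 per mistake, cost 1/2 - gamma per abstention."""
    n = len(ys)
    mistakes = sum(p is not ABSTAIN and p != y for p, y in zip(preds, ys))
    abstentions = sum(p is ABSTAIN for p in preds)
    return mistakes / n + (0.5 - gamma) * abstentions / n

ys    = [+1, -1, +1, -1]
preds = [+1, -1, ABSTAIN, +1]
err = chow_error(preds, ys, gamma=0.01)   # 1/4 mistake + 0.49 * 1/4 abstention = 0.3725
```

Note how always abstaining costs $1/2 - \gamma$ per point, only marginally better than random guessing, so a low Chow's error forces the classifier to actually commit on easy points.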
The parameter $\gamma$ can be chosen as a small constant, e.g., $\gamma = 0.01$, to avoid excessive abstention: the price of abstention is only marginally smaller than that of random guessing (which incurs cost 0.5). Chow's excess error is then defined as $\text{excess}_{\gamma}(\widehat{h}) := \text{err}_{\gamma}(\widehat{h}) - \text{err}(h^{\star})$ (Puchkin and Zhivotovskiy, 2021).

At a high level, analyzing with Chow's excess error allows slack in predicting hard examples (e.g., data points whose $\eta(x)$ is close to $\frac{1}{2}$) by leveraging the power of abstention. Puchkin and Zhivotovskiy (2021); Zhu and Nowak (2022) show that $\mathrm{polylog}(\frac{1}{\varepsilon})$ label complexity is always achievable in parametric settings. We generalize their results to the nonparametric setting and analyze active learning with a set of neural networks.

# 3.2 Exponential speedups with abstention

In this section, we work with a set of neural network regression functions $\mathcal{F}_{\mathrm{dnn}}: \mathcal{X} \to [0,1]$ (that approximates $\eta$) and then construct classifiers $h: \mathcal{X} \to \mathcal{Y} \cup \{\bot\}$ with an additional abstention action. To work with a set of regression functions $\mathcal{F}_{\mathrm{dnn}}$, we analyze its "complexity" through the lenses of the pseudo dimension $\mathrm{Pdim}(\mathcal{F}_{\mathrm{dnn}})$ (Pollard, 1984; Haussler, 1989, 1995) and the value function disagreement coefficient $\theta_{\mathcal{F}_{\mathrm{dnn}}}^{\mathrm{val}}(\iota)$ (for some $\iota > 0$) (Foster et al., 2020). We defer detailed definitions of these complexity measures to Appendix D.1.

# Algorithm 2 NeuralCAL++

Input: Accuracy level $\varepsilon \in (0,1)$, confidence level $\delta \in (0,1)$, abstention parameter $\gamma \in (0,1/2)$.

1: Let $\mathcal{F}_{\mathrm{dnn}}$ be a set of neural network regression functions obtained by (i) applying Theorem 3 with an appropriate approximation level $\kappa$ (which satisfies $\frac{1}{\kappa} = \mathrm{poly}\left(\frac{1}{\gamma}\right) \cdot \mathrm{polylog}\left(\frac{1}{\varepsilon\gamma}\right)$), and (ii) applying a preprocessing step to the set of neural networks obtained from step (i). See Appendix E for details.
2: Define $T := \frac{\theta_{\mathcal{F}_{\mathrm{dnn}}}^{\mathrm{val}}(\gamma / 4) \cdot \mathrm{Pdim}(\mathcal{F}_{\mathrm{dnn}})}{\varepsilon \gamma}$, $M := \lceil \log_2 T \rceil$, and $C_\delta := O(\mathrm{Pdim}(\mathcal{F}_{\mathrm{dnn}}) \cdot \log(T / \delta))$.
3: Define $\tau_{m} := 2^{m}$ for $m \geq 1$, $\tau_{0} := 0$, and $\beta_{m} := 3(M - m + 1)C_{\delta}$.
4: Define $\widehat{R}_m(f) := \sum_{t=1}^{\tau_{m-1}} Q_t (f(x_t) - y_t)^2$ with the convention that $\sum_{t=1}^{0} \ldots = 0$.
5: for epoch $m = 1, 2, \ldots, M$ do
6: Get $\widehat{f}_m := \arg\min_{f \in \mathcal{F}_{\mathrm{dnn}}} \sum_{t=1}^{\tau_{m-1}} Q_t (f(x_t) - y_t)^2$.
7: (Implicitly) Construct active set $\mathcal{F}_m := \left\{f \in \mathcal{F}_{\mathrm{dnn}} : \widehat{R}_m(f) \leq \widehat{R}_m(\widehat{f}_m) + \beta_m\right\}$.
8: Construct classifier $\widehat{h}_m: \mathcal{X} \to \{+1, -1, \bot\}$ as

$$
\widehat{h}_{m}(x) := \begin{cases} \bot, & \text{if } \left[\mathrm{lcb}(x; \mathcal{F}_{m}) - \frac{\gamma}{4}, \mathrm{ucb}(x; \mathcal{F}_{m}) + \frac{\gamma}{4}\right] \subseteq \left[\frac{1}{2} - \gamma, \frac{1}{2} + \gamma\right]; \\ \operatorname{sign}(2\widehat{f}_{m}(x) - 1), & \text{otherwise,} \end{cases}
$$

and query function $g_{m}(x) := \mathbb{1}\left(\frac{1}{2} \in \bigl(\mathrm{lcb}(x; \mathcal{F}_{m}) - \frac{\gamma}{4}, \mathrm{ucb}(x; \mathcal{F}_{m}) + \frac{\gamma}{4}\bigr)\right) \cdot \mathbb{1}(\widehat{h}_{m}(x) \neq \bot)$.
9: if epoch $m = M$ then
10: Return classifier $\widehat{h}_M$.
11: for time $t = \tau_{m-1} + 1, \dots, \tau_m$ do
12: Observe $x_{t} \sim \mathcal{D}_{\mathcal{X}}$. Set $Q_{t} := g_{m}(x_{t})$.
13: if $Q_{t} = 1$ then
14: Query the label $y_{t}$ of $x_{t}$.

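The per-point decision rule in line 8 is simple to state in code. Below is a minimal sketch (ours, with `None` for $\bot$; the function and argument names are not the paper's) of the abstain/predict/query logic, taking the already-computed confidence bounds as inputs.

```python
def act(lcb, ucb, fhat, gamma):
    """Return (prediction, query) following Algorithm 2's line 8 rule.

    prediction in {+1, -1, None} (None = abstain); query in {0, 1}.
    """
    lo, hi = lcb - gamma / 4, ucb + gamma / 4
    if 0.5 - gamma <= lo and hi <= 0.5 + gamma:
        return None, 0                     # confidently near 1/2: abstain, never query
    pred = 1 if 2 * fhat - 1 >= 0 else -1
    query = int(lo < 0.5 < hi)             # sign of eta - 1/2 still uncertain: query
    return pred, query

# eta(x) confidently above 1/2: predict +1, no label needed.
# e.g. act(0.7, 0.9, 0.8, gamma=0.1) -> (1, 0)
```

The key point is the third case: labels are requested only when the confidence range straddles $\frac{1}{2}$ but is not yet narrow enough to justify abstention.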
We now present NeuralCAL++ (Algorithm 2), a deep active learning algorithm that leverages the power of abstention. NeuralCAL++ first initializes a set of neural network regression functions $\mathcal{F}_{\mathrm{dnn}}$ by applying a preprocessing step on top of the set of regression functions obtained from Theorem 3 with a carefully chosen approximation level $\kappa$. The preprocessing step mainly consists of two actions: (1) clipping $f_{\mathrm{dnn}}: \mathcal{X} \to \mathbb{R}$ into $\check{f}_{\mathrm{dnn}}: \mathcal{X} \to [0,1]$ (since we obviously have $\eta(x) \in [0,1]$); and (2) filtering out $f_{\mathrm{dnn}} \in \mathcal{F}_{\mathrm{dnn}}$ that are clearly not good approximations of $\eta$. After initialization, NeuralCAL++ runs in epochs of geometrically increasing lengths. At the beginning of epoch $m \in [M]$, NeuralCAL++ (implicitly) constructs an active set of regression functions $\mathcal{F}_m$ that are "close" to the true conditional probability $\eta$. For any $x \sim \mathcal{D}_{\mathcal{X}}$, NeuralCAL++ constructs a lower bound $\operatorname{lcb}(x; \mathcal{F}_m) := \inf_{f \in \mathcal{F}_m} f(x)$ and an upper bound $\operatorname{ucb}(x; \mathcal{F}_m) := \sup_{f \in \mathcal{F}_m} f(x)$ as a confidence range for $\eta(x)$ (based on $\mathcal{F}_m$). An empirical classifier with an abstention option $\widehat{h}_m: \mathcal{X} \to \{+1, -1, \bot\}$ and a query function $g_m: \mathcal{X} \to \{0, 1\}$ are then constructed based on the confidence range (and the abstention parameter $\gamma$). For any time step $t$ within epoch $m$, NeuralCAL++ queries the label of the observed data point $x_t$ if and only if $Q_t := g_m(x_t) = 1$. NeuralCAL++ returns $\widehat{h}_M$ as the learned classifier.

NeuralCAL++ is adapted from the algorithm developed in Zhu and Nowak (2022), but with novel extensions. In particular, the algorithm presented in Zhu and Nowak (2022) requires the existence of an $\bar{f} \in \mathcal{F}$ such that $\| \bar{f} - \eta \|_{\infty} \leq \varepsilon$ (to achieve $\varepsilon$ Chow's excess error). Such an approximation requirement directly leads to $\mathrm{poly}\left(\frac{1}{\varepsilon}\right)$ label complexity in the nonparametric setting, which is unacceptable. The initialization step of NeuralCAL++ (line 1) is carefully chosen to ensure that $\mathrm{Pdim}(\mathcal{F}_{\mathrm{dnn}}), \theta_{\mathcal{F}_{\mathrm{dnn}}}^{\mathrm{val}}(\frac{\gamma}{4}) = \mathrm{poly}\left(\frac{1}{\gamma}\right) \cdot \mathrm{polylog}\left(\frac{1}{\varepsilon}\right)$; together with a sharper analysis of concentration results, these conditions help us derive the following deep active learning guarantees (also see Appendix D for a more general guarantee).

Theorem 7. Fix any $\varepsilon, \delta, \gamma > 0$. With probability at least $1 - \delta$, Algorithm 2 (with an appropriate initialization at line 1) returns a classifier $\widehat{h}$ with Chow's excess error $\widetilde{O}(\varepsilon)$ after querying $\mathrm{poly}\left(\frac{1}{\gamma}\right) \cdot \mathrm{polylog}\left(\frac{1}{\varepsilon\delta}\right)$ labels.

We discuss two important aspects of Algorithm 2/Theorem 7 in the following: exponential savings and computational efficiency. We defer more detailed discussions to Appendix F.1.

- Exponential speedups. Theorem 7 shows that, equipped with an abstention option, deep active learning enjoys $\mathrm{polylog}\left(\frac{1}{\varepsilon}\right)$ label complexity. This provides theoretical justification for the strong empirical results of deep active learning observed in practice. Moreover, Algorithm 2 outputs a classifier that abstains properly, i.e., it abstains only if abstention is the optimal choice; this property further implies $\mathrm{polylog}\left(\frac{1}{\varepsilon}\right)$ label complexity under the standard excess error and Massart noise (Massart and Nédélec, 2006).
- Computational efficiency. Suppose one can efficiently implement a (weighted) square loss regression oracle over the initialized set of neural networks $\mathcal{F}_{\mathrm{dnn}}$: given any set $S$ of weighted examples $(w, x, y) \in \mathbb{R}_+ \times \mathcal{X} \times \mathcal{Y}$ as input, the regression oracle outputs $\widehat{f}_{\mathrm{dnn}} := \arg\min_{f \in \mathcal{F}_{\mathrm{dnn}}} \sum_{(w, x, y) \in S} w(f(x) - y)^2$. Algorithm 2 can then be efficiently implemented with $\mathrm{poly}\left(\frac{1}{\gamma}\right) \cdot \frac{1}{\varepsilon}$ oracle calls.

While the label complexity obtained in Theorem 7 has the desired $\mathrm{polylog}(\frac{1}{\varepsilon})$ dependence, its dependence on $\gamma$ can be of order $\gamma^{-\mathrm{poly}(d)}$. Our next result shows, however, that such dependence is unavoidable even in the case of learning a single ReLU function.

Theorem 8. Fix any $\gamma \in (0,1/8)$. For any sufficiently small accuracy level $\varepsilon$, there exists a problem instance such that (1) $\eta \in \mathcal{W}_1^{1,\infty}(\mathcal{X})$ and is of the form $\eta(x) := \mathrm{ReLU}(\langle w, x \rangle + a) + b$; and (2) any active learning algorithm takes at least $\gamma^{-\Omega(d)}$ labels to identify an $\varepsilon$-optimal classifier, for either the standard excess error or Chow's excess error (with parameter $\gamma$).

+
232
+ # 4 Extensions
233
+
234
+ Previous results are developed in the commonly studied Sobolev/Hölder spaces. Our techniques, however, are generic and can be adapted to other function spaces, given neural network approximation results. In this section, we provide extensions of our results to the Radon BV $^2$ space, which was recently proposed as the natural function space associated with ReLU neural networks (Ongie et al., 2020; Parhi and Nowak, 2021, 2022a,b; Unser, 2022).<sup>6</sup>
235
+
236
+ The Radon $\mathsf{BV}^2$ space. The Radon $\mathsf{BV}^2$ unit ball over domain $\mathcal{X}$ is defined as $\mathcal{R}\mathsf{BV}_1^2 (\mathcal{X})\coloneqq \{f:\| f\|_{\mathcal{R}\mathsf{BV}^2 (\mathcal{X})}\leq 1\}$ , where $\| f\|_{\mathcal{R}\mathsf{BV}^2 (\mathcal{X})}$ denotes the Radon $\mathsf{BV}^2$ norm of $f$ over domain $\mathcal{X}$ . Following Parhi and Nowak (2022b), we assume $\mathcal{X} = \{x\in \mathbb{R}^d:\| x\| _2\leq 1\}$ and $\eta \in \mathcal{R}\mathsf{BV}_1^2 (\mathcal{X})$ .
237
+
238
+ The Radon BV $^2$ space naturally contains neural networks of the form $f_{\mathrm{dnn}}(x) = \sum_{i=1}^{K} v_i \cdot \mathrm{ReLU}(w_i^\top x + b_i)$ . In contrast, such an $f_{\mathrm{dnn}}$ doesn't lie in any Sobolev space of order $\alpha \geq 2$ (since $f_{\mathrm{dnn}}$ doesn't have a second-order weak derivative). Thus, if $\eta$ takes the form of the aforementioned neural network (e.g., $\eta = f_{\mathrm{dnn}}$ ), approximating $\eta$ up to accuracy $\kappa$ from a Sobolev perspective requires $\widetilde{O}(\kappa^{-d})$ total parameters, which suffers from the curse of dimensionality. This bad dependence on dimensionality disappears, however, when approximating from a Radon BV $^2$ perspective, as shown in the following theorem.
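For concreteness, a one-hidden-layer ReLU network of this form takes only a few lines to evaluate (a minimal sketch; the function and argument names are ours):

```python
import numpy as np

def shallow_relu_net(x, V, W, b):
    """f(x) = sum_i v_i * ReLU(w_i^T x + b_i).

    x: input of shape (d,); W: hidden weights (K, d); b: biases (K,);
    V: outer weights (K,). The output is piecewise linear in x, which is
    why f has no second-order weak derivative.
    """
    return float(V @ np.maximum(W @ x + b, 0.0))
```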
239
+
240
+ Theorem 9 (Parhi and Nowak (2022b)). Fix any $\kappa > 0$ . For any $f^{\star} \in \mathcal{R}\mathsf{BV}_1^2(\mathcal{X})$ , there exists a one-hidden layer neural network $f_{\mathrm{dnn}}$ of width $K = O(\kappa^{-\frac{2d}{d+3}})$ such that $\| f^{\star} - f_{\mathrm{dnn}} \|_{\infty} \leq \kappa$ .
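The exponent $\frac{2d}{d+3}$ in Theorem 9 stays below $2$ for every dimension, so the required width never exceeds $O(\kappa^{-2})$. The tiny helper below makes this concrete; the leading constant is unspecified by the theorem and is set to 1 here purely for illustration.

```python
def shallow_width_bound(kappa, d, C=1.0):
    """Width K = C * kappa^(-2d/(d+3)) sufficient for kappa-uniform
    approximation of a Radon BV^2 unit-ball function (Theorem 9).
    The constant C is not specified by the theorem; C=1 by default."""
    return C * kappa ** (-2.0 * d / (d + 3.0))
```

By contrast, the Sobolev-based count $\widetilde{O}(\kappa^{-d})$ grows exponentially with $d$.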
241
+
242
+ Equipped with this approximation result, we provide the active learning guarantees for learning a smooth function within the Radon BV $^2$ unit ball as follows.
243
+
244
+ Theorem 10. Suppose $\eta \in \mathcal{R}\mathsf{BV}_1^2 (\mathcal{X})$ and the Tsybakov noise condition is satisfied with parameter $\beta \geq 0$ . Fix any $\varepsilon, \delta > 0$ . There exists an algorithm such that, with probability at least $1 - \delta$ , it learns a classifier $\widehat{h} \in \mathcal{H}_{\mathrm{dnn}}$ with excess error $\widetilde{O} (\varepsilon)$ after querying $\widetilde{O} (\theta_{\mathcal{H}_{\mathrm{dnn}}}(\varepsilon^{\frac{\beta}{1 + \beta}}) \cdot \varepsilon^{-\frac{4d + 6}{(1 + \beta)(d + 3)}})$ labels.
245
+
246
+ Compared to the label complexity obtained in Theorem 6, the label complexity obtained in the above theorem doesn't suffer from the curse of dimensionality: for $d$ large enough, the above label complexity scales as $\varepsilon^{-O(1)}$ , whereas the label complexity in Theorem 6 scales as $\varepsilon^{-O(d)}$ . Active learning guarantees under Chow's excess error in the Radon BV $^2$ space are similar to the results presented in Theorem 7, and are thus deferred to Appendix G.
247
+
248
+ # 5 Discussion
249
+
250
+ We provide the first near-optimal deep active learning guarantees, under both standard excess error and Chow's excess error. Our results are powered by generic algorithms and analyses developed for active learning that bridge approximation guarantees into label complexity guarantees. We outline some natural directions for future research below.
251
+
252
+ - Disagreement coefficients for neural networks. While we have provided some results regarding the disagreement coefficients for neural networks, we believe a comprehensive investigation on this topic is needed. For instance, can we discover more general settings where the classifier-based disagreement coefficient can be upper bounded by $O(1)$ ? It is also interesting to explore sharper analyses on the value function disagreement coefficient.
253
+ - Adaptivity in deep active learning. Our current results are established with the knowledge of some problem-dependent parameters, e.g., the smoothness parameters regarding the function spaces and the noise levels. It will be interesting to see if one can develop algorithms that can automatically adapt to unknown parameters, e.g., by leveraging techniques developed in Locatelli et al. (2017, 2018).
254
+
255
+ # Acknowledgments and Disclosure of Funding
256
+
257
+ The authors would like to thank Rahul Parhi for many helpful discussions regarding his papers. We also would like to thank anonymous reviewers for their constructive comments. This work is partially supported by NSF grant 1934612 and AFOSR grant FA9550-18-1-0166.
258
+
259
+ # References
260
+
261
+ Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In International Conference on Machine Learning, pages 1638-1646. PMLR, 2014.
262
+ Martin Anthony. Uniform Glivenko-Cantelli theorems and concentration of measure in the mathematical modelling of learning. Research Report LSE-CDAM-2002-07, 2002.
263
+ Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, and Sham Kakade. Gone fishing: Neural active learning with fisher embeddings. Advances in Neural Information Processing Systems, 34, 2021.
264
+
265
+ Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv preprint arXiv:1906.03671, 2019.
266
+ Jean-Yves Audibert and Alexandre B Tsybakov. Fast learning rates for plug-in classifiers. The Annals of statistics, 35(2):608-633, 2007.
267
+ Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd international conference on Machine learning, pages 65-72, 2006.
268
+ Peter L Bartlett, Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. The Journal of Machine Learning Research, 20(1):2285-2301, 2019.
269
+ Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance weighted active learning. In Proceedings of the 26th annual international conference on machine learning, pages 49-56, 2009.
270
+ Alina Beygelzimer, Daniel J Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. Advances in neural information processing systems, 23, 2010.
271
+ Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
272
+ Rui M Castro and Robert D Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339-2353, 2008.
273
+ CK Chow. On optimum recognition error and reject tradeoff. IEEE Transactions on information theory, 16(1):41-46, 1970.
274
+ Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. Batch active learning at scale. Advances in Neural Information Processing Systems, 34, 2021.
275
+ David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine learning, 15(2):201-221, 1994.
276
+ Corinna Cortes, Giulia DeSalvo, Mehryar Mohri, Ningshan Zhang, and Claudio Gentile. Active learning with disagreement graphs. In International Conference on Machine Learning, pages 1379-1387. PMLR, 2019.
277
+ George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303-314, 1989.
278
+ Sanjoy Dasgupta, Daniel J Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. Advances in neural information processing systems, 20, 2007.
279
+ Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, and Tom Goldstein. Active learning at the ImageNet scale. arXiv preprint arXiv:2111.12880, 2021.
280
+ Dylan Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, and Robert Schapire. Practical contextual bandits with regression oracles. In International Conference on Machine Learning, pages 1539-1548. PMLR, 2018.
281
+ Dylan J Foster, Alexander Rakhlin, David Simchi-Levi, and Yunzong Xu. Instance-dependent complexity of contextual bandits and reinforcement learning: A disagreement-based perspective. arXiv preprint arXiv:2010.03104, 2020.
282
+ David A Freedman. On tail probabilities for martingales. the Annals of Probability, pages 100-118, 1975.
283
+ Eric Friedman. Active learning for smooth problems. In COLT, 2009.
284
+ Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th international conference on Machine learning, pages 353-360, 2007.
285
+ Steve Hanneke. Theory of active learning. Foundations and Trends in Machine Learning, 7(2-3), 2014.
286
+ David Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. 1989.
287
+ David Haussler. Sphere packing numbers for subsets of the boolean n-cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory, Series A, 69(2):217-232, 1995.
288
+
289
+ Juha Heinonen. Lectures on Lipschitz analysis. Number 100. University of Jyväskylä, 2005.
290
+ Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2): 251-257, 1991.
291
+ Tzu-Kuo Huang, Alekh Agarwal, Daniel J Hsu, John Langford, and Robert E Schapire. Efficient and parsimonious agnostic active learning. Advances in Neural Information Processing Systems, 28, 2015.
292
+ Matti Kääriäinen. Active learning in the non-realizable case. In International Conference on Algorithmic Learning Theory, pages 63-77. Springer, 2006.
293
+ Mina Karzand and Robert D Nowak. Maximin active learning in overparameterized model classes. IEEE Journal on Selected Areas in Information Theory, 1(1):167-177, 2020.
294
+ Yongdai Kim, Ilsang Ohn, and Dongha Kim. Fast convergence rates of deep neural networks for classification. Neural Networks, 138:179-197, 2021.
295
+ Suraj Kothawade, Nathan Beck, Krishnateja Killamsetty, and Rishabh Iyer. Similar: Submodular information measures based active learning in realistic scenarios. Advances in Neural Information Processing Systems, 34, 2021.
296
+ Samory Kpotufe, Gan Yuan, and Yunfan Zhao. Nuances in margin conditions determine gains in active learning. arXiv preprint arXiv:2110.08418, 2021.
297
+ Akshay Krishnamurthy, Alekh Agarwal, Tzu-Kuo Huang, Hal Daumé III, and John Langford. Active learning for cost-sensitive classification. In International Conference on Machine Learning, pages 1915-1924. PMLR, 2017.
298
+ Akshay Krishnamurthy, Alekh Agarwal, Tzu-Kuo Huang, Hal Daumé III, and John Langford. Active learning for cost-sensitive classification. Journal of Machine Learning Research, 20:1-50, 2019.
299
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
300
+ Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
301
+ Gene Li, Pritish Kamath, Dylan J Foster, and Nathan Srebro. Eluder dimension and generalized rank. arXiv preprint arXiv:2104.06970, 2021.
302
+ Andrea Locatelli, Alexandra Carpentier, and Samory Kpotufe. Adaptivity to noise parameters in nonparametric active learning. In Proceedings of the 2017 Conference on Learning Theory, PMLR, 2017.
303
+ Andrea Locatelli, Alexandra Carpentier, and Samory Kpotufe. An adaptive strategy for active learning with smooth decision boundary. In Algorithmic Learning Theory, pages 547-571. PMLR, 2018.
304
+ Jianfeng Lu, Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis, 53(5):5465-5506, 2021.
305
+ Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. The Annals of Statistics, 34 (5):2326-2366, 2006.
306
+ Stanislav Minsker. Plug-in approach to active learning. Journal of Machine Learning Research, 13 (1), 2012.
307
+ Greg Ongie, Rebecca Willett, Daniel Soudry, and Nathan Srebro. A function space view of bounded norm infinite width ReLU nets: The multivariate case. In International Conference on Learning Representations, 2020.
308
+ Rahul Parhi and Robert D Nowak. Banach space representer theorems for neural networks and ridge splines. Journal of Machine Learning Research, 22(43):1-40, 2021.
309
+ Rahul Parhi and Robert D Nowak. What kinds of functions do deep neural networks learn? insights from variational spline theory. SIAM Journal on Mathematics of Data Science, 4(2):464-489, 2022a.
310
+ Rahul Parhi and Robert D Nowak. Near-minimax optimal estimation with shallow ReLU neural networks. IEEE Transactions on Information Theory, 2022b.
311
+ D Pollard. Convergence of Stochastic Processes. David Pollard, 1984.
312
+
313
+ Nikita Puchkin and Nikita Zhivotovskiy. Exponential savings in agnostic active learning through abstention. arXiv preprint arXiv:2102.00451, 2021.
314
+ Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B Gupta, Xiaojiang Chen, and Xin Wang. A survey of deep active learning. ACM Computing Surveys (CSUR), 54(9):1-40, 2021.
315
+ Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. In NIPS, pages 2256-2264, 2013.
316
+ Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.
317
+ Burr Settles. Active learning literature survey. 2009.
318
+ Shubhanshu Shekhar, Mohammad Ghavamzadeh, and Tara Javidi. Active learning for classification with abstention. IEEE Journal on Selected Areas in Information Theory, 2(2):705-719, 2021.
319
+ Alexander B Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166, 2004.
320
+ Michael Unser. Ridges, neural networks, and the radon transform. arXiv preprint arXiv:2203.02543, 2022.
321
+ VN Vapnik and A Ya Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264, 1971.
322
+ Martin J Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019.
323
+ Liwei Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12(7), 2011.
324
+ Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, and Claudio Gentile. Neural active learning with performance guarantees. Advances in Neural Information Processing Systems, 34, 2021.
325
+ Andrew Chi-Chin Yao. Probabilistic computations: Toward a unified measure of complexity. In 18th Annual Symposium on Foundations of Computer Science (sfcs 1977), pages 222-227. IEEE Computer Society, 1977.
326
+ Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103-114, 2017.
327
+ Dmitry Yarotsky. Optimal approximation of continuous functions by very deep ReLU networks. In Conference on Learning Theory, pages 639-649. PMLR, 2018.
328
+ Yinglun Zhu and Robert Nowak. Efficient active learning with abstention. arXiv preprint arXiv:2204.00043, 2022.
329
+
330
+ # Checklist
331
+
332
+ 1. For all authors...
333
+
334
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
335
+ (b) Did you describe the limitations of your work? [Yes] See Section 5 for discussions on limitations and directions for future work.
336
+ (c) Did you discuss any potential negative societal impacts of your work? [N/A] Our paper is theoretical in nature, and there is no negative societal impact of our work in the foreseeable future.
337
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
338
+
339
+ 2. If you are including theoretical results...
340
+
341
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes] Assumptions are clearly stated in the statement of each theorem.
342
+ (b) Did you include complete proofs of all theoretical results? [Yes] Complete proofs are provided in the Appendix.
343
+
344
+ 3. If you ran experiments...
345
+
346
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
347
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
348
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
349
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
350
+
351
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
352
+
353
+ (a) If your work uses existing assets, did you cite the creators? [N/A]
354
+ (b) Did you mention the license of the assets? [N/A]
355
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
356
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
357
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
358
+
359
+ 5. If you used crowdsourcing or conducted research with human subjects...
360
+
361
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
362
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
363
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2426a02d5eb85a197016387983fd3f1d163e9cf24d0751fad3e7c618f33bf86f
3
+ size 47414
activelearningwithneuralnetworksinsightsfromnonparametricstatistics/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b78adb405d89f1e55470f6595ddb40bdb2a25e2da481c655521bc4da5456b93f
3
+ size 666705
activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d280bd8cffb914f35a0421865dc881504bbf1737a1243842b8fbb52a7a37e13
3
+ size 91253
activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea91d1a99517bf1dea569c4b29925bb7b612a8230aa5496e7b9aaff483720d87
3
+ size 115099
activelearningwithsafetyconstraints/7bfa4ba0-1aef-4f31-9e0a-e6c240480ef1_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28d5d61bddaaaf241c67e78be90345052d0e349f27d752fdb89359f3954684b1
3
+ size 1098372
activelearningwithsafetyconstraints/full.md ADDED
@@ -0,0 +1,380 @@
 
 
 
 
 
 
 
 
 
1
+ # Active Learning with Safety Constraints
2
+
3
+ Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, Kevin Jamieson
4
+
5
+ University of Washington, Seattle, WA
6
+
7
+ {camilr,ajwagen,jamiemmt,jamieson}@cs.washington.edu,lalitj@uw.edu
8
+
9
+ # Abstract
10
+
11
+ Active learning methods have shown great promise in reducing the number of samples necessary for learning. As automated learning systems are adopted into real-time, real-world decision-making pipelines, it is increasingly important that such algorithms are designed with safety in mind. In this work we investigate the complexity of learning the best safe decision in interactive environments. We reduce this problem to a constrained linear bandits problem, where our goal is to find the best arm satisfying certain (unknown) safety constraints. We propose an adaptive experimental design-based algorithm, which we show efficiently trades off between the difficulty of showing an arm is unsafe vs suboptimal. To our knowledge, our results are the first on best-arm identification in linear bandits with safety constraints. In practice, we demonstrate that this approach performs well on synthetic and real world datasets.
12
+
13
+ # 1 Introduction
14
+
15
+ In many problems in online decision-making, the goal of the learner is to take measurements in such a way as to learn a near-optimal policy. Oftentimes, though the space of policies may be large, the set of feasible, or safe policies could be much smaller, effectively constraining the search space of the learner. Furthermore, these constraints may themselves depend on unknown problem parameters.
16
+
17
+ For example, consider the problem of bidding sequentially in a series of auctions where the bidder bids a price $w_{t}$ , the value of winning an item $t$ is denoted $v_{t}$ , and the utility of winning that item and paying price $p_{t}$ is $v_{t} - p_{t}$ . The goal of the bidder is to choose an optimal strategy amongst bidding strategies $s \in S, s : \mathbb{R} \to \mathbb{R}$ . When a bidder is deciding how to choose these strategies, they often face constraints: they may have a budget $B$ they must abide to; they may wish to have those auctions they win be well-distributed across time (e.g. in the case of advertising campaigns); they may want to ensure the set of items they win satisfy some other property (e.g. for advertisements, they might want to ensure they are not over-targeting any demographic group).
18
+
19
+ As another example, inventory management systems may face similar issues of deciding amongst strategies, where there is some objective function (such as revenue) and a variety of constraints at play in this choice (e.g. capacity of a set of warehouses, employee scheduling constraints, or limits on the duration of delivery lag). They also operate in markets with changing demand and other uncertainties, leading to uncertainty about which strategies are feasible or safe (satisfy constraints) and uncertainty about the revenue they generate.
20
+
21
+ Both of these scenarios motivate understanding the sample complexity of selecting an action or strategy which approximately maximizes an objective while also satisfying some constraints, where samples are needed to both learn the objective value of actions and whether or not they satisfy said constraints. In this work, we study the active sample complexity of this task—if the learner can choose which examples to observe and have labeled, how many fewer samples might they need compared to the number needed in a passive setting? We pose this as a best-arm identification problem in the setting of linear bandits with safety constraints, where the goal is to estimate the best arm, subject to it meeting certain (initially unknown) safety constraints. We propose an experiment design-based algorithm which efficiently learns the best safe decision, and show the efficacy of this
22
+
23
+ approach in practice through several experimental examples. To the best of our knowledge, ours is the first approach to handle best-arm identification in linear bandits with safety constraints.
24
+
25
+ # 1.1 Linear Bandits with Safety Constraints
26
+
27
+ Let $\delta \in (0,1)$ be a confidence parameter, $\mathcal{X},\mathcal{Z}\subseteq \mathbb{R}^d$ be finite known sets of vectors, and assume there exists $\theta_{*}\in \mathbb{R}^{d}$ , $\mu_{*}\in \mathbb{R}^{m\times d}$ unknown to the learner. For simplicity, we assume that $\| \theta_{*}\|_{2}\leq 1$ , and $\| \mu_{*,i}\| _2\leq 1,i\in [m]$ and $\| x\| _2\leq 1$ , $\| z\| _2\leq 1$ , $\forall x\in \mathcal{X},z\in \mathcal{Z}$ . The learner plays according to the following protocol: at each time step $t$ the learner chooses some action $x_{t}\in \mathcal{X}$ , observes $(r_t,\{s_{t,i}\}_{i = 1}^m)$ where $r_t = \theta_*^\top x_t + w_t^\theta$ and $s_{t,i} = \mu_{*,i}^\top x_t + w_{t,i}^\mu$ for all $i\in [m]$ , where $w_{t}^{\theta},w_{t,i}^{\mu}$ are i.i.d. mean zero 1-subGaussian noise. The choice of action $x_{t}$ is measurable with respect to the history $\mathcal{F}_t = \{(x_j,r_j,\{s_{j,i}\}_{i = 1}^m)\}_{j = 1}^{t - 1}$ . The learner stops at a stopping time $\tau_{\delta}$ which is measurable with respect to the filtration generated by $\mathcal{F}_{t\leq \tau}$ , and returns $\widehat{z}_{\tau}\in \mathcal{Z}$ . In general, when referring to any expectation $\mathbb{E}$ or probability $\mathbb{P}$ , the underlying measure will be with respect to the actions, observed rewards, and internal randomness of the algorithm.
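To make the interaction protocol concrete, one step can be simulated as follows (a sketch with our own naming; Gaussian noise is used as one instance of mean-zero 1-subGaussian noise):

```python
import numpy as np

def observe(x, theta_star, mu_star, rng, noise=1.0):
    """Pull arm x and return the pair (r_t, {s_t,i}) of the protocol:
    r_t   = <theta_*, x>  + w_t^theta   (reward signal),
    s_t,i = <mu_*,i , x>  + w_t,i^mu    (one safety signal per constraint),
    with independent mean-zero Gaussian noise of scale `noise`."""
    r = theta_star @ x + noise * rng.standard_normal()
    s = mu_star @ x + noise * rng.standard_normal(mu_star.shape[0])
    return r, s
```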
28
+
29
+ We are interested in the safe transductive best-arm identification problem (STBAI), where the goal of the learner is to identify
30
+
31
+ $$
32
+ z_{*} := \arg\max_{z \in \mathcal{Z}} z^{\top} \theta_{*} \quad \mathrm{s.t.} \quad z^{\top} \mu_{*,i} \leq \gamma, \ \forall i \in [m]
33
+ $$
34
+
35
+ for some (known) threshold $\gamma$ . In words, our goal is to identify the best safe arm in $\mathcal{Z}$ , $z_{*}$ , where we say an arm $z$ is safe if it satisfies every linear constraint: $z^{\top} \mu_{*,i} \leq \gamma, \forall i \in [m]$ . We are interested in obtaining learners that take the fewest number of samples possible to accomplish this. In practice, we will consider a slightly easier objective. Fix some tolerance $\epsilon > 0$ and let
36
+
37
+ $$
38
+ \mathcal {Z} _ {\epsilon} := \left\{z \in \mathcal {Z}: z ^ {\top} \theta_ {*} \geq z _ {*} ^ {\top} \theta_ {*} - \epsilon , z ^ {\top} \mu_ {*}, i \leq \gamma + \epsilon , \forall i \in [ m ] \right\}.
39
+ $$
40
+
41
+ Then our goal is to obtain an $(\epsilon, \delta)$ -PAC learner defined as follows:
42
+
43
+ Definition 1 $((\epsilon, \delta)$ -PAC Learner). A learner is $(\epsilon, \delta)$ -PAC if for any instance it returns $\widehat{z}_{\tau}$ such that $\mathbb{P}[\widehat{z}_{\tau} \in \mathcal{Z}_{\epsilon}] \geq 1 - \delta$ .
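Over a finite arm set with $\theta_*$ and $\mu_*$ known, the target $z_*$ can be computed by brute force, which is useful for checking a learner's output in simulation. This helper is our own illustration, not part of the paper's algorithm.

```python
import numpy as np

def best_safe_arm(Z, theta, mu, gamma):
    """Index of z_* = argmax_{z in Z} <z, theta>  s.t.  <mu_i, z> <= gamma
    for all i.  Z: (n, d) array of arms; mu: (m, d) constraint matrix.
    Assumes at least one arm is safe."""
    safe = np.all(Z @ mu.T <= gamma, axis=1)   # (n,) feasibility mask
    vals = np.where(safe, Z @ theta, -np.inf)  # unsafe arms excluded
    return int(np.argmax(vals))
```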
44
+
45
+ We define the optimality gap for any $z \in \mathcal{Z}$ as $\Delta(z) \coloneqq \theta_*^\top (z_* - z)$ , and the safety gap for constraint $i$ as $\Delta_{\mathrm{safe}}^i(z) \coloneqq \gamma - \mu_{*,i}^\top z$ . Note that either $\Delta(z)$ or $\Delta_{\mathrm{safe}}^i(z)$ can be negative. If $\Delta(z) < 0$ , it follows that $z$ has larger value $z^\top \theta_*$ than the best safe arm $z_*$ , which implies it must be unsafe. If $\Delta_{\mathrm{safe}}^i(z) < 0$ for some $i$ , then arm $z$ is unsafe. We also define the $\epsilon$ -safe optimality gap as:
46
+
47
+ $$
48
+ \Delta^{\epsilon}(z) = \max_{z' \in \mathcal{Z}} \left(z' - z\right)^{\top} \theta_{*} \quad \text{s.t.} \quad \min_{i \in [m]} \Delta_{\mathrm{safe}}^{i}(z') \geq \epsilon. \tag{1}
49
+ $$
50
+
51
+ $\Delta^{\epsilon}(z)$ is then the gap in value between arm $z$ and the best arm with minimum safety gap at least $\epsilon$ .
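Over a finite arm set, the gap quantities above reduce to simple vectorized expressions (a sketch with our own naming):

```python
import numpy as np

def gaps(Z, theta, mu, gamma, z_star):
    """Optimality gaps Delta(z) = <theta, z_* - z> and safety gaps
    Delta_safe^i(z) = gamma - <mu_i, z>, for every row z of Z.
    Returns arrays of shapes (n,) and (n, m)."""
    delta = (z_star - Z) @ theta      # optimality gap per arm
    delta_safe = gamma - Z @ mu.T     # safety gap per arm and constraint
    return delta, delta_safe
```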
52
+
53
+ Mathematical Notation. Let $\| x\| _A^2 = x^\top Ax$ and $\mathfrak{p}(x)\coloneqq \max \{x,0\}$ . $\widetilde{\mathcal{O}} (\cdot)$ hides factors that are logarithmic in the arguments. $\lesssim$ denotes inequality up to constants. We denote the simplex as $\triangle_{\mathcal{X}}\coloneqq \{\lambda \in \mathbb{R}_{\geq 0}^{|\mathcal{X}|}:\sum_{x\in \mathcal{X}}\lambda_x = 1\}$ .
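The notation $\|x\|_A$ and the positive-part map $\mathfrak{p}$, together with the design matrix $A(\lambda) = \sum_{x} \lambda_x x x^\top$ that Algorithm 1 evaluates these norms against, translate directly to code (a sketch; function names are ours):

```python
import numpy as np

def design_matrix(lam, X):
    """A(lambda) = sum_x lambda_x * x x^T for a design lam over arms X."""
    return sum(l * np.outer(x, x) for l, x in zip(lam, X))

def norm_A(x, A):
    """||x||_A = sqrt(x^T A x) for positive semidefinite A."""
    return float(np.sqrt(x @ A @ x))

def pos(v):
    """p(v) := max{v, 0}, the positive-part map."""
    return max(v, 0.0)
```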
54
+
55
+ # 2 Safe Best-Arm Identification in Linear Bandits
56
+
57
+ # 2.1 Algorithm Definition
58
+
59
+ The main challenge in algorithm design for the safe best-arm identification problem is ensuring that we efficiently balance exploration between refining our estimates of the safety gaps and of the optimality gaps. Our approach, BESIDE, is given in Algorithm 1.
60
+
61
+ BESIDE relies on a round-based adaptive experimental design approach; each round consists of three phases. In the first phase, it solves an experimental design over $\lambda_{\ell} \in \triangle_{\mathcal{X}}$ with the goal of refining our estimates of the safety gaps, and then takes $\tau_{\ell}$ samples from $\lambda_{\ell}$ . In the second phase, these samples are used to estimate the safety constraints, $\widehat{\mu}^{i,\ell}$ , and the safety gaps of each arm, $\widehat{\Delta}_{\mathrm{safe}}^{i,\ell}(z)$ . Finally, in Phase 3, an additional experimental design is solved which now aims to refine our estimates of the optimality gaps, and the estimates $\widehat{\Delta}^{\ell}(z)$ are computed for each $z \in \mathcal{Z}$ . We encapsulate Phase 3 in a subroutine, RAGE, which we outline in the following. We now carefully describe each phase, beginning with Phase 2 to explain how our estimator works.
62
+
63
+ # Algorithm 1 Best Safe Arm Identification (BESIDE)
64
+
65
+ 1: input: tolerance $\epsilon$ , confidence $\delta$
66
+ 2: $\iota_{\epsilon} \gets \lceil \log \left(\frac{20}{\epsilon}\right) \rceil, \widehat{\Delta}_{\text{safe}}^{i,0}(z) \gets 0, \widehat{\Delta}^{0}(z) \gets 0$ for all $z \in \mathcal{Z}$
67
+ 3: for $\ell = 1,2,\dots ,\iota_{\epsilon}$ do
68
+ 4: $\epsilon_{\ell} \gets 20 \cdot 2^{-\ell}$
69
+
70
+ // Phase 1: Solve design to reduce uncertainty in safety constraints
71
+
72
+ 5: Define
73
+
74
+ $$
75
+ c_{\ell}(z) = \min_{j} |\widehat{\Delta}_{\mathrm{safe}}^{j,\ell-1}(z)| + \max_{j} \mathfrak{p}(-\widehat{\Delta}_{\mathrm{safe}}^{j,\ell-1}(z)) + \mathfrak{p}(\widehat{\Delta}^{\ell-1}(z))
76
+ $$
77
+
78
+ 6: Let $\tau_{\ell}$ be the minimal value of $\tau \in \mathbb{R}_{+}$ greater than $4\log \frac{4m|\mathcal{Z}|\ell^2}{\delta}$ such that the objective of the following is no greater than $\epsilon_{\ell} / 100$ , and let $\lambda_{\ell}$ be the corresponding optimal distribution
79
+
80
+ $$
81
+ \inf _ {\lambda \in \triangle_ {\mathcal {X}}} \max _ {z \in \mathcal {Z}} - \frac {1}{1 0 0} \left(c _ {\ell} (z) + \epsilon_ {\ell}\right) + \sqrt {\tau^ {- 1} \cdot \| z \| _ {A (\lambda) ^ {- 1}} ^ {2} \cdot \log \left(\frac {4 m | \mathcal {Z} | \ell^ {2}}{\delta}\right)}
82
+ $$
83
+
84
+ 7: Sample $x_{t}\sim \lambda_{\ell}$ , collect $\tau_{\ell}$ observations $\{(x_t,r_t,s_{t,1},\dots ,s_{t,m})\}_{t = 1}^{\tau_{\ell}}$
85
+ // Phase 2: Estimate safety constraints
86
+ 8: $\{\widehat{\mu}^{i,\ell}\}_{i = 1}^{m}\gets \mathsf{RIPS}(\{(x_{t},s_{t,i})\}_{t = 1}^{\tau_{\ell}},\mathcal{Z},\frac{\delta}{2m\ell^{2}})$
87
+ 9: $\widehat{\Delta}_{\mathrm{safe}}^{i,\ell}(z) \gets \gamma - z^{\top} \widehat{\mu}^{i,\ell} + \|z\|_{A(\lambda_{\ell})^{-1}} \sqrt{\tau_{\ell}^{-1} \log\left(\frac{4m|\mathcal{Z}| \ell^{2}}{\delta}\right)}$
88
+
89
+ // Phase 3: Refine estimates of optimality gaps
90
+
91
+ 10: $\{\widehat{\Delta}^{\ell}(z)\}_{z\in \mathcal{Z}}\gets \mathrm{RAGE}^{\epsilon}\Big(\mathcal{Z},\mathcal{Y}_{\ell},\epsilon_{\ell},\frac{\delta}{4\ell^{2}},\{\widehat{\Delta}_{\mathrm{safe}}(z)\leftarrow \max_{j}\mathfrak{p}(-\widehat{\Delta}_{\mathrm{safe}}^{j,\ell}(z))\}_{z\in \mathcal{Z}}\Big)$
92
+
93
+ // Perform final round of exploration to ensure we find $\epsilon$ -good arm
94
+
95
+ 11: $\mathcal{V}_{\mathrm{end}}\gets \{z\in \mathcal{Z}:c_{\ell}(z)\lesssim \widehat{\Delta}_{\mathrm{safe}}^{i,\ell}(z) + \epsilon \}$
96
+ 12: $\{\widehat{\Delta}^{\mathrm{end}}(z)\}_{z\in \mathcal{V}_{\mathrm{end}}} \gets \mathrm{RAGE}^{\epsilon}\Big(\mathcal{V}_{\mathrm{end}},\mathcal{V}_{\mathrm{end}},\epsilon ,\delta ,\{\widehat{\Delta}_{\mathrm{safe}}(z) \leftarrow \max_{j}\mathfrak{p}(-\widehat{\Delta}_{\mathrm{safe}}^{j,\ell}(z))\}_{z\in \mathcal{Z}}\Big)$
97
+ 13: return $\widehat{z} = \arg \min_{z\in \mathcal{V}_{\mathrm{end}}}\widehat{\Delta}^{\mathrm{end}}(z)$
98
+
99
+ Phase 2: In Phase 2 the algorithm would like to use the $\tau_{\ell}$ samples drawn from the design $\lambda_{\ell}$ to estimate the constraints for each $z \in \mathcal{Z}$ : $z^{\top} \mu_{*,i}$ for each $i \in [m]$ . Past works using adaptive experimental design in the linear bandits literature have utilized the least-squares estimator along with complicated rounding schemes [13] which may require an additional poly(d) samples each round (this poly(d) factor could be prohibitively large—for example, in active classification problems, $d$ is the total number of data points). We instead utilize the RIPS estimator of [6] which gives us a guarantee of the form: with probability greater than $1 - \delta$ , for all $z \in \mathcal{Z}$ ,
100
+
101
+ $$
102
+ \left| z ^ {\top} \left(\widehat {\mu} ^ {i, \ell} - \mu_ {*, i}\right) \right| \lesssim \| z \| _ {A \left(\lambda_ {\ell}\right) ^ {- 1}} \cdot \sqrt {\tau_ {\ell} ^ {- 1} \log \left(\frac {4 m | \mathcal {Z} | \ell^ {2}}{\delta}\right)}. \tag {2}
103
+ $$
104
+
105
+ We describe the RIPS estimator in more detail in Appendix B.
106
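As a sanity check on the form of the guarantee in (2), the following sketch uses a plain least-squares estimator in place of RIPS (a rough stand-in with a weaker guarantee, not the estimator of [6]), draws $\tau$ samples from a fixed design $\lambda$ on a small multi-armed instance, and compares the error of $z^{\top}\widehat{\mu}$ against the $\|z\|_{A(\lambda)^{-1}} \sqrt{\tau^{-1}\log(1/\delta)}$ width:

```python
import numpy as np

rng = np.random.default_rng(0)
d, tau, delta = 3, 2000, 0.05
X = np.eye(d)                        # arm set (multi-armed bandit instance)
lam = np.full(d, 1.0 / d)            # design lambda over X
A = (X.T * lam) @ X                  # A(lambda) = sum_x lambda_x x x^T
mu_star = np.array([0.3, -0.7, 1.1])

# draw tau arms from lambda and observe noisy constraint values
idx = rng.choice(d, size=tau, p=lam)
xs = X[idx]
ys = xs @ mu_star + rng.normal(size=tau)

# least-squares estimate (stand-in for RIPS)
mu_hat = np.linalg.lstsq(xs, ys, rcond=None)[0]

z = np.ones(d)
err = abs(z @ (mu_hat - mu_star))
width = np.sqrt(z @ np.linalg.solve(A, z)) * np.sqrt(np.log(1 / delta) / tau)
```

On this instance `err` falls well within a small constant multiple of `width`, matching the scaling in (2); the point of RIPS is to obtain this behavior for all $z \in \mathcal{Z}$ simultaneously without the rounding overhead discussed above.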
+
107
+ Phase 1: By our definition of the experimental design on Line 6, our safety gap estimation error bound in (2) satisfies, for each $z \in \mathcal{Z}$ :
108
+
109
+ $$
110
+ \left| z ^ {\top} \left(\widehat {\mu} ^ {i, \ell} - \mu_ {* , i}\right) \right| \lesssim \| z \| _ {A \left(\lambda_ {\ell}\right) ^ {- 1}} \cdot \sqrt {\tau_ {\ell} ^ {- 1} \log \left(\frac {4 m | \mathcal {Z} | \ell^ {2}}{\delta}\right)} \lesssim c _ {\ell} (z) + \epsilon_ {\ell}. \tag {3}
111
+ $$
112
+
113
+ Note that our design chooses an allocation that minimizes the variance in our estimate of each safety constraint (up to some tolerance), which scales as $\|z\|_{A(\lambda)^{-1}}^2$ . This can be thought of as a form of $\mathcal{X}\mathcal{Y}$ -design—a design of the form $\inf_{\lambda \in \triangle_{\mathcal{X}}} \max_{y \in \mathcal{Y}} \|y\|_{A(\lambda)^{-1}}^2$ —where here $\mathcal{Y} \gets \mathcal{Z}$ is chosen to reduce our uncertainty in estimating the safety value for each $z \in \mathcal{Z}$ . We refer to such a design objective henceforth as $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ . Assume that at round $\ell - 1$ , we can guarantee
114
+
115
+ $$
116
+ \begin{array}{l} c_{\ell}(z) + \epsilon_{\ell} = \min_{j} |\widehat{\Delta}_{\mathrm{safe}}^{j,\ell-1}(z)| + \max_{j} \mathfrak{p}(-\widehat{\Delta}_{\mathrm{safe}}^{j,\ell-1}(z)) + \mathfrak{p}(\widehat{\Delta}^{\ell-1}(z)) + \epsilon_{\ell} \\ \lesssim \min_{j} \left| \Delta_{\mathrm{safe}}^{j}(z) \right| + \max_{j} \mathfrak{p}\left(-\Delta_{\mathrm{safe}}^{j}(z)\right) + \mathfrak{p}\left(\Delta^{\epsilon_{\ell-1}}(z)\right) + \epsilon_{\ell}. \tag {4} \\ \end{array}
117
+ $$
118
+
119
+ Then combining the above inequalities, we see that the experiment design on Line 6 aims to minimize the uncertainty in our estimate of $z^{\top} \mu_{*,i}$ up to a tolerance that scales as the maximum of the four terms in (4). It follows that if any of these terms is large, we will only allocate a small number of samples to refining our estimate of arm $z$ . Each one of these terms can be intuitively motivated by thinking through what is needed to prove that an arm $z \neq z_{*}$ .
122
+
123
+ - $z$ has small safety gap $\min_{j}|\Delta_{\mathrm{safe}}^{j}(z)|$ : if this term is large, it implies that the minimum safety gap for $z$ is large. To show an arm is safe or unsafe, it suffices to learn each safety gap up to a tolerance within a constant factor of its value—regularizing by this term ensures we do just that.
124
+ - $z$ fails some safety constraint $\max_{j} \mathfrak{p}(-\Delta_{\mathrm{safe}}^{j}(z))$ : if this term is large, it implies that arm $z$ is very unsafe for some constraint. In this case, we can easily determine $z$ is unsafe, and therefore do not need to reduce our uncertainty in the safety gap any more.
125
+ - $z$ is sub-optimal $\mathfrak{p}(\Delta^{\epsilon_{\ell -1}}(z))$ : if this term is large, it implies that $z$ is very suboptimal compared to some safe arm with safety gap at least $\epsilon_{\ell -1}$ . In this case, we do not need to estimate $z$ 's safety gap, as we will have already eliminated it.
126
+
127
+ It remains to ensure that (4) holds. As we show in Appendix D through a careful inductive argument, combining (3) with our guarantee on the estimates of the optimality gaps obtained in Phase 3, $\widehat{\Delta}^{\ell}(z)$ , is sufficient to guarantee (4) holds. In particular, if any gap is greater than $\epsilon_{\ell}$ it is estimated up to a constant factor, and otherwise it is estimated up to $\mathcal{O}(\epsilon_{\ell})$ . This ensures that our gaps are estimated at the correct rate while guaranteeing we do not collect too many samples in each round.
128
+
129
+ Phase 3: In this phase we estimate the suboptimality gaps using $\mathrm{RAGE}^{\epsilon}$ . $\mathrm{RAGE}^{\epsilon}$ is inspired by the RAGE algorithm of [13] for best-arm identification. In the interest of space, we defer the full definition of $\mathrm{RAGE}^{\epsilon}$ to Appendix C but provide some intuition here. After Phase 2, by (3) the set of arms $\mathcal{Y}_{\ell} := \{z \in \mathcal{Z} : c_{s}(z) \lesssim \widehat{\Delta}_{\mathrm{safe}}^{i,s}(z), \forall i \in [m]\}$ for $s \leq \ell$ are precisely the ones that we can certify are safe (note that we never need to explicitly construct such a set—we can instead maintain an implicit definition through the constraints). $\mathrm{RAGE}^{\epsilon}$ uses an adaptive experimental design procedure to sample in such a way as to optimally estimate the gaps $(z - \widehat{y})^{\top} \theta_{*}$ , for all $z \in \mathcal{Z}$ and some $\widehat{y} \in \mathcal{Y}_{\ell}$ , up to some (sufficient) tolerance. In particular, it also solves an $\mathcal{X}\mathcal{Y}$ -design, but now on the set $\mathcal{Y} \gets \{z - \widehat{y} : z \in \mathcal{Z}\}$ . Thus, rather than minimizing $\|z\|_{A(\lambda)^{-1}}^{2}$ , we minimize $\|z - \widehat{y}\|_{A(\lambda)^{-1}}^{2}$ . This design reduces uncertainty on the differences between arms, which allows us to refine our estimates of their optimality gaps. Henceforth we refer to such a design as $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ . We describe the importance of the choice of design in more detail in Section 2.4. Ultimately, if an arm $z$ has value within $\epsilon_{\ell}$ of the best safe arm in $\mathcal{Y}_{\ell}$ , and if we have not yet shown arm $z$ is unsafe, then we will estimate its optimality gap up to a constant factor of $\epsilon_{\ell}$ . If we were maintaining arm sets explicitly (similar to the original RAGE algorithm of [13]), we would eliminate arms at this point.
130
+
131
+ Remark 1 (Computational Complexity). The main computational challenge in BESIDE and RAGE $^{\epsilon}$ is the calculation of the experimental designs (i.e., Line 6 and the corresponding design in RAGE $^{\epsilon}$ ). In general, the presence of the square root implies that the resulting optimization problem may not be convex in $\lambda$ . To handle this issue we note that $2\sqrt{xy} = \min_{\alpha > 0} \alpha x + \frac{y}{\alpha}$ ; thus we can replace the existing design with $\inf_{\lambda \in \triangle_{\mathcal{X}}} \max_{z \in \mathcal{Z}} \min_{\alpha > 0} -\frac{1}{100}(c_{\ell}(z) + \epsilon_{\ell}) + \alpha \|z\|_{A(\lambda)^{-1}}^{2} + \log\left(\frac{4m|\mathcal{Z}|\ell^{2}}{\delta}\right)/(\alpha\tau)$ . By appropriately discretizing the space we search over for $\tau$ and $\alpha$ , we can then apply the Frank-Wolfe algorithm to minimize over $\lambda$ . While computationally efficient in theory, this procedure is quite complicated and impractical for large problems. In the experiments section we provide a practical heuristic that is motivated by the above algorithm and is computationally efficient for larger problems.
132
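The variational identity underlying this reformulation is easy to check numerically. The sketch below (illustrative only) verifies $2\sqrt{xy} = \min_{\alpha>0}\, \alpha x + y/\alpha$ on a grid; this is what lets the square root in the design objective be traded for a family of objectives that are linear in $\|z\|_{A(\lambda)^{-1}}^{2}$ and hence amenable to Frank-Wolfe over $\lambda$ :

```python
import numpy as np

x, y = 3.0, 5.0
alphas = np.linspace(1e-3, 10.0, 200000)      # discretized search over alpha
vals = alphas * x + y / alphas                # alpha*x + y/alpha
gap = abs(vals.min() - 2.0 * np.sqrt(x * y))  # compare against 2*sqrt(x*y)
assert gap < 1e-4

# the minimizer is alpha* = sqrt(y/x)
alpha_star = alphas[np.argmin(vals)]
assert abs(alpha_star - np.sqrt(y / x)) < 1e-2
```

In practice the grid over $\alpha$ (and $\tau$ ) only needs to be logarithmically fine, since the objective is stable to constant-factor perturbations of $\alpha$ .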
+
133
+ # 2.2 Main Result
134
+
135
+ BESIDE achieves the following complexity.
136
+
137
+ Theorem 1. BESIDE is $(\epsilon, \delta)$ -PAC. In other words, with probability at least $1 - \delta$ , BESIDE returns an arm $\widehat{z} \in \mathcal{Z}$ such that
138
+
139
+ $$
140
+ \widehat{z}^{\top} \theta_{*} \geq z_{*}^{\top} \theta_{*} - \epsilon , \quad \min_{i \in [m]} \Delta_{\mathrm{safe}}^{i}(\widehat{z}) \geq -\epsilon
141
+ $$
142
+
143
+ and terminates after collecting at most
144
+
145
+ $$
146
+ C \cdot \sup_{\tilde{\epsilon} \geq \epsilon} \inf_{\lambda \in \triangle_{\mathcal{X}}} \max_{z \in \mathcal{Z}} \frac{\|z\|_{A(\lambda)^{-1}}^{2} \cdot \log\left(\frac{m|\mathcal{Z}|}{\delta}\right)}{\left(\min_{j} |\Delta_{\mathrm{safe}}^{j}(z)| + \max_{j} \mathfrak{p}\left(-\Delta_{\mathrm{safe}}^{j}(z)\right) + \mathfrak{p}\left(\Delta^{\tilde{\epsilon}}(z)\right) + \tilde{\epsilon}\right)^{2}} \quad \text{(safety)}
147
+ $$
148
+
149
+ $$
150
+ + C \cdot \sup_{\widetilde{\epsilon} \geq \epsilon} \inf_{\lambda \in \triangle_{\mathcal{X}}} \max_{z \in \mathcal{Z}} \frac{\|z - z_{*}\|_{A(\lambda)^{-1}}^{2} \cdot \log\left(\frac{|\mathcal{Z}|}{\delta}\right)}{\left(\max_{j} \mathfrak{p}\left(-\Delta_{\mathrm{safe}}^{j}(z)\right) + \mathfrak{p}\left(\Delta^{\widetilde{\epsilon}}(z)\right) + \widetilde{\epsilon}\right)^{2}} + C_{0} \quad \text{(optimality)}
151
+ $$
152
+
153
+ samples for some $C = \mathrm{poly}\log (\frac{1}{\epsilon})$ and $C_0 = \mathrm{poly}\log (\frac{1}{\epsilon},|\mathcal{Z}|)\cdot \log \frac{1}{\delta}.$
154
+
155
+ The complexity bound given in Theorem 1 may, at first glance, appear rather opaque, yet it in fact yields a very intuitive interpretation. The first term in the complexity, the safety term, is the complexity needed to show each arm is safe or unsafe, if they have not otherwise been eliminated. As described in the previous section, if $\mathfrak{p}(\Delta^{\widetilde{\epsilon}}(z))$ is large, this implies we have found an arm better than $z$ , so learning its safety value is irrelevant.
156
+
157
+ The second term in the complexity, the optimality term, corresponds to the difficulty of showing an arm is worse than the best arm we can guarantee is safe. Note that we can only guarantee an arm is suboptimal if we can find a safe arm with higher value. Recall the definition of $\Delta^{\widetilde{\epsilon}}(z)$ given in (1). Intuitively, $\Delta^{\widetilde{\epsilon}}(z)$ denotes the gap in value between arm $z$ and the best arm with safety gap at least $\widetilde{\epsilon}$ . As we make $\widetilde{\epsilon}$ smaller, we can show additional arms are safe, which increases $\Delta^{\widetilde{\epsilon}}(z)$ . While this makes it easier to show $z$ is suboptimal, it comes at a cost—the extra samples necessary to decrease our safety tolerance, given by the first term in the complexity. BESIDE trades off between optimizing for each of these terms—gradually decreasing its tolerance on both the safety and optimality terms to more easily eliminate suboptimal arms, while not allocating too many samples to guarantee safety.
158
+
159
+ To help illustrate this complexity, we consider a simple example with orthogonal arms, i.e. a multi-armed bandit example.
160
+
161
+ Example 1 (BESIDE on Multi-Armed Bandits). In the multi-armed bandit setting, we have $\mathcal{X} = \mathcal{Z} = \{e_1, \ldots, e_d\}$ . Let $m = 1, d = 3$ , and consider the settings of $\theta_*$ and $\mu_*$ given in Figure 1. Here we see that arm $e_1$ is safe, has value much higher than any other arm, and can be shown to be safe relatively easily, so $z_* = e_1$ ; arm $e_2$ has near-optimal value but is very unsafe; and arm $e_3$ is unsafe with a very small safety gap, but has the smallest value.
162
+
163
+ ![](images/f642bfffe1a76a696fd1dcc768c731e71037b673d1033625bbf0721b3a741bff.jpg)
164
+ Figure 1: Multi-Armed Bandit Instance
165
+
166
+ ![](images/52cd5db30156152187e99ff78060486aee0a6fff60ffacd7fe9a3385f1154c9b.jpg)
167
+
168
+ Showing $e_2$ is Suboptimal. As $e_2$ has near-optimal value, $\Delta(e_2)$ is very small and it is very difficult to show $e_2$ is suboptimal. However, $-\Delta_{\mathrm{safe}}(e_2) = \Omega(1)$ , so it is very easy to show $e_2$ is unsafe. It follows that $\mathfrak{p}(-\Delta_{\mathrm{safe}}(e_2)) = \Omega(1)$ , so both denominators in our complexity are $\Omega(1)$ for $z = e_2$ —BESIDE does not attempt to show $e_2$ is suboptimal, but instead shows it is unsafe, and therefore does not pay for the small optimality gap $\Delta(e_2)$ in the complexity.
169
+
170
+ Showing $e_3$ is Suboptimal. Recall the definition of $\Delta^{\epsilon}(z) = \max_{z' : \Delta_{\mathrm{safe}}(z') \geq \epsilon} \theta_*^\top(z' - z)$ . In this case, for $\epsilon = \mathcal{O}(1)$ , we will have $\Delta_{\mathrm{safe}}(e_1) \geq \epsilon$ , which implies that $\Delta^{\epsilon}(e_3) = \theta_*^\top(e_1 - e_3) = \Delta(e_3) = \Omega(1)$ . To show $e_3 \neq z_*$ , we could either show it is unsafe (which is very difficult) or suboptimal (which is very easy). Observing the sample complexity of Theorem 1, we see that the denominator of both terms will always be $\Omega(1)$ for $z = e_3$ since $\Delta^{\epsilon}(e_3) = \Omega(1)$ —BESIDE never pays for the small safety gap of $e_3$ ; it instead takes advantage of the fact that $e_3$ can easily be shown to be suboptimal, and uses this to eliminate it.
171
+
172
+ In both of these cases we see that BESIDE does the "right" thing, always using the easier of the two criteria—either showing an arm is unsafe or suboptimal—to show that $z \neq z_{*}$ . Combining the above observations, for $\epsilon \approx \min\{\Delta(e_3), -\Delta_{\mathrm{safe}}(e_2), \Delta_{\mathrm{safe}}(e_1)\}$ , it follows that on this example the total sample complexity of BESIDE given by Theorem 1 scales as:
175
+
176
+ $$
177
+ \widetilde{\mathcal{O}}\left(\left(\frac{1}{\Delta_{\mathrm{safe}}(e_{1})^{2}} + \frac{1}{\Delta_{\mathrm{safe}}(e_{2})^{2}} + \frac{1}{\Delta(e_{3})^{2}}\right) \cdot \log \frac{1}{\delta}\right)
178
+ $$
179
+
180
+ where the $1 / \Delta_{\mathrm{safe}}(e_1)^2$ arises because we must also show $e_1$ is safe.
181
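Figure 1's exact values are not reproduced in the text, so the sketch below instantiates the example with illustrative values of our own choosing (an assumption, not the figure's data) and checks which mechanism (unsafety or suboptimality) rules out each of $e_2$ and $e_3$ :

```python
import numpy as np

gamma = 0.0
theta = np.array([1.0, 0.95, 0.05])  # values: e1 best, e2 near-optimal, e3 small
mu = np.array([-0.2, 0.5, 0.01])     # constraint: e1 safe, e2 very unsafe,
                                     # e3 barely unsafe (illustrative values)
d_safe = gamma - mu                  # safety gaps Delta_safe(e_i)
safe = d_safe >= 0

# value of the best safe arm, and optimality gaps relative to it
best_safe = theta[safe].max()
d_opt = best_safe - theta

# e2: tiny optimality gap but a large violated safety gap -> shown unsafe
assert d_opt[1] < 0.1 and -d_safe[1] > 0.4
# e3: tiny safety gap but a large optimality gap -> shown suboptimal
assert -d_safe[2] < 0.05 and d_opt[2] > 0.9
```

In the complexity of Theorem 1, the larger of the two gaps for each arm enters the denominator, which is exactly why neither small gap is paid for.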
+
182
+ # 2.3 Optimality of BESIDE
183
+
184
+ Optimality in Best-Arm Identification. Consider applying BESIDE to a problem instance where $m = 1$ , $\mu_{*,1} = 0$ , and $\gamma = 1$ . In this case, every arm is safe, and the safety constraints are essentially vacuous—every arm can easily be shown safe. We can therefore think of this as simply an instance of the best-arm identification problem. In this setting, we obtain the following corollary.
185
+
186
+ Corollary 1. Consider running BESIDE on a problem instance where $m = 1$ , $\mu_{*,1} = 0$ , and $\gamma = 1$ and set $\epsilon = \frac{1}{2}\max_{z\neq z_{*}}\theta_{*}^{\top}(z_{*} - z)$ . Then with probability at least $1 - \delta$ , BESIDE returns $z_{*}$ and has sample complexity bounded by:
187
+
188
+ $$
189
+ \widetilde {\mathcal {O}} \left(\inf _ {\lambda \in \triangle_ {\mathcal {X}}} \max _ {z \in \mathcal {Z}} \frac {\| z - z _ {*} \| _ {A (\lambda) ^ {- 1}} ^ {2}}{\Delta (z) ^ {2}} \cdot \log \frac {| \mathcal {Z} |}{\delta} + \inf _ {\lambda \in \triangle_ {\mathcal {X}}} \max _ {z \in \mathcal {Z}} \| z \| _ {A (\lambda) ^ {- 1}} ^ {2} \cdot \log \frac {| \mathcal {Z} |}{\delta}\right).
190
+ $$
191
+
192
+ Up to lower-order terms, this exactly matches the lower bound on best-arm identification given in [13]. Thus, in settings where the safety constraint is vacuous, BESIDE hits the optimal rate.
193
+
194
+ Worst-Case Performance of BESIDE. We next consider the worst-case performance of BESIDE in settings where $\mathcal{X} = \mathcal{Z}$ . We have the following result.
195
+
196
+ Corollary 2. Assume that $\mathcal{X} = \mathcal{Z}$ . Then for any $\theta_*$ and $(\mu_{*,i})_{i=1}^{m}$ , the sample complexity of BESIDE necessary to return an $\epsilon$ -good and $\epsilon$ -safe arm is bounded as $\widetilde{\mathcal{O}}\left(\frac{d}{\epsilon^2} \cdot (\log(m|\mathcal{X}|) + \log\frac{1}{\delta})\right)$ .
197
+
198
+ Theorem 2 of [38] shows a worst-case lower bound of $\Omega(d^2/\epsilon^2)$ on the sample complexity of identifying an $\epsilon$ -optimal arm in the standard linear bandit setting. Safe best-arm identification problems in which the safety constraint is vacuous are at least as hard as the standard best-arm identification problem, since at minimum we need to find the best arm out of every safe arm. Thus, $\Omega(d^2/\epsilon^2)$ is also a worst-case lower bound for the safe best-arm identification problem. The hard instance of [38] has $|\mathcal{X}| = \mathcal{O}(2^d)$ , so it follows that on this instance, BESIDE achieves a complexity of $\widetilde{\mathcal{O}}\left(\frac{d}{\epsilon^2} \cdot (d + \log \frac{1}{\delta})\right)$ , and therefore BESIDE has optimal dimensionality dependence. In addition, this also implies that safe best-arm identification, in the worst-case, is no harder than the standard best-arm identification problem—it is no harder to find the best safe arm, regardless of the number of safety constraints, than to find the best arm, ignoring safety constraints.
199
+
200
+ # 2.4 The Role of Experiment Design
201
+
202
+ We can think of the safe best-arm identification problem, in some sense, as an interpolation between the standard best-arm identification problem and the level-set estimation problem, where the goal is to identify $z \in \mathcal{Z}$ satisfying $z^\top \mu_* \leq \gamma$ [29]. In the former problem, [13] shows that the instance-optimal rate can be attained by running a round-based algorithm and at every round solving an instance of the $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ experiment design, as defined in Section 2.1. In the latter problem, [29] also show that a round-based algorithm can hit the instance-optimal rate, but by instead solving the $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ problem at each round. It is natural to ask whether either of these strategies could be applied to the safe best-arm identification problem directly, or if it is necessary to alternate between them. The following results show that, on their own, each of these designs is unable to hit the optimal rate.
203
+
204
+ Proposition 2. Fix some small enough $\epsilon >0$ . Then there exist instances of the safe best-arm identification problem, $\mathcal{I}_i = (\theta_*^i,\mu_*^i,\mathcal{X}^i,\mathcal{Z}^i)$ , $i = 1,2$ , with $d = |\mathcal{X}^i| = |\mathcal{Z}^i| = 2$ , $m = 1$ , such that:
205
+
206
+ - On $\mathcal{I}^1$ , any $(\epsilon, \delta)$ -PAC algorithm which plays only allocations minimizing $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ must have $\mathbb{E}[\tau_{\delta}] \geq \Omega \left(\frac{1}{\epsilon^3} \cdot \log \frac{1}{\delta}\right)$ , while BESIDE identifies an $\epsilon$ -optimal arm after $\widetilde{\mathcal{O}}\left(\frac{1}{\epsilon^2} \cdot \log 1 / \delta\right)$ samples.
207
+ - On $\mathcal{I}^2$ , any $(\epsilon, \delta)$ -PAC algorithm which plays only allocations minimizing $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ must have $\mathbb{E}[\tau_{\delta}] \geq \Omega \left(\frac{1}{\epsilon^{3/2}} \cdot \log \frac{1}{\delta}\right)$ , while BESIDE identifies an $\epsilon$ -optimal arm after $\tilde{\mathcal{O}}\left(\frac{1}{\epsilon} \cdot \log 1/\delta\right)$ samples.
208
+
209
+ Proposition 2 implies that, to solve the safe best-arm identification problem optimally, more care must be taken in exploring than either standard experiment design induces—we must trade off between $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ and $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ as BESIDE does. We remark briefly on the instance $\mathcal{I}^1$ . On this instance we have $\mathcal{X} = \{e_1,e_2\}$ and $\mathcal{Z} = \{z_{1},z_{2}\}$ with $z_{1} = [1 / 4,1 / 2]$ and $z_{2} = [3 / 4,1 / 2 + \alpha ]$ . We set $\theta_{*}^{1} = [1,0]$ , $\mu_{*}^{1} = [0,1]$ , and $\gamma = 1 / 2 + \alpha /2$ . Here $z_{2}$ is unsafe while $z_{1}$ is safe, so it follows that $z_{*} = z_{1}$ . As $z_{2}^{\top}\theta_{*}^{1} > z_{1}^{\top}\theta_{*}^{1}$ , to show $z_{2}\neq z_{*}$ , we must show it is unsafe. However, if we solve the design $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ , we see that it places nearly all of the mass on the first coordinate. While this would be optimal if both $z_{1}$ and $z_{2}$ were safe and we simply wished to determine which has a higher value, to show $z_{2}$ is unsafe, the optimal strategy places (roughly) the same mass on each coordinate, since each coordinate could contribute to the safety value. This is precisely the allocation BESIDE will play, so it is able to show that $z_{2}$ is unsafe much more efficiently than a naive $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ approach.
210
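The effect on instance $\mathcal{I}^1$ can be checked directly. With $A(\lambda) = \mathrm{diag}(\lambda, 1-\lambda)$ , a grid search over $\lambda$ (a rough numerical stand-in for solving the designs exactly) shows that the $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ -optimal allocation concentrates on the first coordinate, leaving the variance proxy $\|z_2\|_{A(\lambda)^{-1}}^2$ for $z_2$ 's safety value large as $\alpha \to 0$ :

```python
import numpy as np

alpha = 0.001
z1 = np.array([0.25, 0.5])
z2 = np.array([0.75, 0.5 + alpha])

def norm_sq(v, lam):
    # ||v||^2_{A(lam)^{-1}} with A(lam) = diag(lam, 1 - lam)
    return v[0] ** 2 / lam + v[1] ** 2 / (1.0 - lam)

lams = np.linspace(1e-4, 1 - 1e-4, 100001)
lam_diff = lams[np.argmin(norm_sq(z1 - z2, lams))]  # XY_diff allocation
lam_safe = lams[np.argmin(norm_sq(z2, lams))]       # XY_safe allocation for z2

assert lam_diff > 0.95        # XY_diff puts nearly all mass on coordinate 1...
assert 0.4 < lam_safe < 0.8   # ...while the safety design is roughly balanced
# ...so the XY_diff allocation leaves z2's safety value poorly estimated
assert norm_sq(z2, lam_diff) > 10 * norm_sq(z2, lam_safe)
```

The last assertion is the quantitative content of the remark above: under the $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ allocation, certifying that $z_2$ is unsafe requires an order of magnitude more samples than under the balanced allocation.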
+
211
+ # 3 Experiments for Safe Best Arm Identification in Linear Bandits
212
+
213
+ We next present experimental results on BESIDE to demonstrate the advantage of experimental design—especially combining $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ and $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ designs. As there are no existing algorithms that consider safe best-arm identification, as a benchmark we consider the naive adaptive approach BASELINE, which first solves the problem of finding the safe arms up to a desired tolerance, and then solves the problem of finding the best (safe) arm among the arms that were found to be safe. We first describe the instances on which we test BESIDE. Our experimental details and the precise implementation of BESIDE using elimination are described in Appendix F.
214
+
215
+ Multi-Armed Bandit. We consider a best-arm identification problem in which every arm is safe, but the arm with the highest value is very difficult to identify as safe, while the second-best arm can easily be shown safe. We vary the total number of arms and run BESIDE and BASELINE with $\epsilon = 0.5$ and $\delta = 0.1$ . From Figure 2 we observe that the sample complexity of BESIDE is smaller (by up to about a factor of two for 100 arms) than that of the baseline.
216
+
217
+ Linear Response Model. Random Instance: We also consider the more general setup where $\mathcal{X},\mathcal{Z}\subset \mathbb{R}^d$ , and $\theta \in \mathbb{R}^d$ and $\mu \in \mathbb{R}^d$ are randomly generated from independent Gaussian random variables with mean 0 and variance 1. We set $|\mathcal{X}| = 50$ and vary the size of $|\mathcal{Z}|$ . In Figure 3 we see again that BESIDE significantly outperforms the baseline.
218
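Random instances of this form can be generated in a few lines. The sketch below (with our own arbitrary sizes and threshold $\gamma$ , noiseless, just to show the instance construction and the target of the search) builds $\mathcal{X}$ , $\mathcal{Z}$ , $\theta$ , and $\mu$ and identifies the best safe arm by brute force:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_x, n_z, gamma = 5, 50, 20, 0.0
X = rng.normal(size=(n_x, d))   # sampling arms (used by the learner to explore)
Z = rng.normal(size=(n_z, d))   # candidate arms
theta = rng.normal(size=d)      # reward parameter
mu = rng.normal(size=d)         # constraint parameter

safety_gap = gamma - Z @ mu     # Delta_safe(z) for the single constraint
safe = safety_gap >= 0
values = Z @ theta
if safe.any():
    z_star = Z[safe][np.argmax(values[safe])]   # best safe arm
```

An algorithm only observes noisy values $x^{\top}\theta + w$ and $x^{\top}\mu + w'$ for sampled $x \in \mathcal{X}$ ; the brute-force step here is just the ground truth the experiments measure against.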
+
219
+ Hard Instance: We last consider the instance of Proposition 2 and benchmark against the strategy playing only allocations minimizing $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ . In Figure 4 we see again that BESIDE significantly outperforms this baseline, corroborating the theoretical result of Proposition 2.
220
+
221
+ ![](images/6ea4ffc031575dbad5aa10b651a9ca5d1d09beac38a163c87ebb3da7bff69e51.jpg)
222
+ Figure 2: Total arm pulls to termination vs. number of arms
223
+
224
+ ![](images/23210b0c45397d3741c8540145b01fc09274c6a82b2edd65b7de5a8b145ec987.jpg)
225
+ Figure 3: Total arm pulls to termination vs. $|\mathcal{Z}|$
226
+
227
+ ![](images/0101dfaf36ba56f3873e4b12c5340ad73880a48e796bc919200c50be6510c177.jpg)
228
+ Figure 4: Total arm pulls to termination vs. $\epsilon$
229
+
230
+ # 3.1 Practical Algorithms for Active Classification Under Constraints
231
+
232
+ Next, we provide an application of the above ideas to pool-based active classification with constraints—namely, adaptive sampling to learn the highest accuracy classifier with a constraint on the false discovery rate (FDR). We first explain how this problem maps to the linear bandit setting. Precisely, let $\mathcal{X}$ be the example space and $\mathcal{Y} = \{0,1\}$ the label space. Fix a hypothesis class $\mathcal{H}$ such that each $h\in \mathcal{H}$ is a classifier $h:\mathcal{X}\to \mathcal{Y}$ . We represent each $h$ with an associated indicator vector $z_{h}\in \{0,1\}^{|\mathcal{X}|}$ where $z_{h}(x) = 1\iff h(x) = 1$ . Similarly, let $\eta \in [0,1]^{|\mathcal{X}|}$ represent the label distribution, i.e. $\eta (x) = \mathbb{P}(Y = 1|X = x)$ . Then the risk of a classifier is $R(h)\coloneqq \mathbb{E}_{x\sim \mathrm{Unif}(\mathcal{X}),Y\sim \mathrm{Ber}(\eta (x))}[\mathbb{1}[h(x)\neq Y]] = z_h^\top (2\eta -1)$ and the FDR is defined as $\mathrm{FDR}(h)\coloneqq (\mathbf{1} - \eta)^{\top}z_{h} / \mathbf{1}^{\top}z_{h}$ . In the case when $\eta \in \{0,1\}^{|\mathcal{X}|}$ , $\mathrm{FDR}(h)$ is the proportion of examples that $h$ incorrectly labels as 1 out of all examples $h$ labels as 1. Our goal is to solve the following constrained best arm identification problem:
235
+
236
+ $$
237
+ \widehat{h} = \arg\min_{h \in \mathcal{H}} R(h) \quad \text{s.t.} \quad \mathrm{FDR}(h) \leq q \iff \arg\min_{h \in \mathcal{H}} z_{h}^{\top} \eta \quad \text{s.t.} \quad (\mathbf{1} - \eta - q\mathbf{1})^{\top} z_{h} \leq 0. \tag{5}
238
+ $$
239
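In code, both quantities entering (5) are simple inner products against the indicator vector $z_h$ . A minimal sketch, with a made-up five-point pool and label distribution $\eta$ :

```python
import numpy as np

eta = np.array([0.9, 0.8, 0.2, 0.1, 0.7])  # P(Y=1 | x) over a 5-point pool
z_h = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # indicator vector of a classifier h

# risk: P(h(X) != Y) under X ~ Unif(pool), Y ~ Ber(eta(X))
risk = np.mean(z_h * (1 - eta) + (1 - z_h) * eta)

# FDR: expected fraction of h's positive predictions that are wrong
fdr = ((1 - eta) @ z_h) / z_h.sum()

q = 0.25
feasible = fdr <= q   # the constraint in (5)
```

Here `risk` is computed directly from the definition rather than through the linear-bandit reparameterization, to make the correspondence between the inner products and the probabilistic quantities explicit.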
+
240
+ The main challenge in running BESIDE on this problem directly is the potentially high computational cost of computing a design over an extremely large hypothesis class $\mathcal{H}$ (e.g. neural networks of a bounded width). In this section we provide an alternative approach motivated by BESIDE. Algorithm 2 follows a design similar to BESIDE and relies on an oracle, CERM, that can solve (5), i.e. given a dataset it returns the highest accuracy classifier under an FDR constraint. Such oracles are available, for example, in [1, 10]. In each round of Algorithm 2 we perform randomized exploration by perturbing the labels on our existing dataset with mean-zero Gaussian noise, and then training $k$ classifiers $\widehat{h}_i$ , $i \in [k]$ , on the resulting datasets. Implicitly, we are assuming that the loss function used in training the ERM can handle continuous labels, as, for example, the MLE of logistic regression can. As described in [25], randomized exploration emulates sampling from a posterior distribution over our possible set of classifiers. We then use the labels generated by these classifiers to compute safe classifiers $h_i$ , $i \in [k]$ . Finally, mimicking the strategy of BESIDE, we compute $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ and $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ designs on these $k$ safe classifiers and repeat (note that the designs computed on Line 5 are equivalent to $\mathcal{X}\mathcal{Y}_{\mathrm{safe}}$ and $\mathcal{X}\mathcal{Y}_{\mathrm{diff}}$ in the classification setting).
241
+
242
+ Algorithm 2 Active constrained classification with randomized exploration
243
+ Require: Batch size $n$ , initial (labeled) data $x_{1}^{(0)}, \ldots, x_{n}^{(0)}$ , number of rounds $L$ , number of classifiers per round $k$ , perturbation variance $\sigma$
244
+ 1: for $\ell = 1, \ldots, L$ do
245
+ 2: for $i = 1, \ldots, k$ do
246
+ 3: $\widehat{h}_{i} = \mathsf{ERM}(\{(x_{t}^{(\ell)}, y_{t}^{(\ell)} + \epsilon_{t}^{(i)})\}_{t=1}^{n})$ , where $\{\epsilon_{t}^{(i)}\}_{1 \leq t \leq n} \stackrel{i.i.d.}{\sim} \mathcal{N}(0, \sigma^{2})$
247
+ 4: $h_{i} = \mathsf{CERM}(\{(x, \widehat{h}_{i}(x))\}_{x \in \mathcal{X}})$
248
+ 5: Compute designs: $\lambda_{\mathrm{safe}} = \arg \min_{\lambda \in \triangle_{\mathcal{X}}} \max_{1 \leq i \leq k} \sum_{x \in \mathcal{X}} \frac{\mathbb{1}\{h_{i}(x) \neq 0\}}{\lambda_{x}}, \lambda_{\mathrm{diff}} = \arg \min_{\lambda \in \triangle_{\mathcal{X}}} \max_{1 \leq i \neq j \leq k} \sum_{x \in \mathcal{X}} \frac{\mathbb{1}\{h_{i}(x) \neq h_{j}(x)\}}{\lambda_{x}}$
249
+ 6: Sample $x_{1}^{(\ell)}, \ldots, x_{n}^{(\ell)}$ from a uniform mixture of $\lambda_{\mathrm{safe}}, \lambda_{\mathrm{diff}}$
250
+ 7: Observe corresponding labels $y_{1}^{(\ell)}, \ldots, y_{n}^{(\ell)}$
251
+ return $\widetilde{h} = \mathsf{CERM}(\{(x_{t}^{(\ell)}, y_{t}^{(\ell)})\}_{1 \leq t \leq n, 0 \leq \ell \leq L})$
252
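A stripped-down sketch of one round of the randomized-exploration step (Lines 2-6) is below, using a toy one-dimensional threshold class in place of a real ERM oracle and a simple disagreement-based stand-in for the designs on Line 5; the names and simplifications here are ours, not part of the algorithm as stated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pool, k, sigma = 200, 4, 0.5
pool = np.sort(rng.random(n_pool))            # 1-D unlabeled pool
labeled = rng.choice(n_pool, 30, replace=False)
y = (pool[labeled] > 0.6).astype(float)       # labels for the initial sample

def erm_threshold(x, y):
    # toy "ERM": best threshold rule h(x) = 1[x > t] under squared loss
    cands = np.concatenate(([x.min() - 1.0], x))
    losses = [np.mean(((x > t).astype(float) - y) ** 2) for t in cands]
    return cands[int(np.argmin(losses))]

# Line 3: k refits on Gaussian-perturbed labels (randomized exploration)
preds = np.array([
    pool > erm_threshold(pool[labeled], y + rng.normal(0.0, sigma, y.size))
    for _ in range(k)
])

# stand-in for the Line 5 designs: put mass where the k classifiers disagree
disagree = preds.any(axis=0) & ~preds.all(axis=0)
lam = disagree / disagree.sum() if disagree.any() else np.full(n_pool, 1 / n_pool)
```

The next batch would then be sampled from `lam` and labeled, as in Lines 6-7; a faithful implementation would additionally pass each perturbed fit through the CERM oracle to obtain the safe classifiers $h_i$ .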
+
253
+ To validate Algorithm 2 we compare against a passive baseline that selects points uniformly at random from the pool of examples $\mathcal{X}$ , retrains the model using the same Constrained Empirical Risk Minimization oracle (CERM) as Algorithm 2 on its current samples, and reports the accuracy and FDR. We evaluate on two real-world datasets and on one synthetic dataset next, and provide additional details on the experiments in Appendix F.
254
+
255
+ Adult dataset. We evaluate on the adult income dataset [27] (48,842 examples), where the goal is to predict whether someone's income is above \$50k per year. We set the constraint to be $\mathrm{FDR} < 0.15$ and report in Figure 5 the accuracy and the FDR obtained when varying the number of labels given to each method (the batch size is set to 25 and the initial number of queried labels is 50). We observe that for any desired accuracy, Algorithm 2 provides a classifier with lower FDR. Also, for any chosen number of total labels—such as 500, 750, and 2000, as reported in Figure 5—Algorithm 2 gives a classifier with higher accuracy and lower FDR. In general we found that the active method needed half as many samples as passive sampling to achieve a given FDR. This demonstrates the effectiveness of Algorithm 2 in learning the objective (risk) and the constraint (FDR) simultaneously, in the favorable way characterized by our theory.
256
+
257
+ ![](images/aebc1afbffc638b0f966324a990f12d2c560cee527484cff0bae8189b7b07214.jpg)
258
+ Figure 5: FDR vs accuracy for active (Algorithm 2) and passive sampling; ticks report number of samples. FDR and accuracy are averaged over 5 trials
259
+
260
+ German Credit dataset. We consider the German Credit dataset, originally from the Statlog Project Databases [24]. The goal is to predict whether someone's credit is 'bad' or 'good'. We report in Figure 6 the recall (TPR) and the precision (1 - FDR) obtained when varying the number of labels given to each method. We observe that for any desired precision, Algorithm 2 provides a classifier with higher recall. Also, for any chosen number of total labels—such as 170, 270, 330, 450, and 600, as reported in Figure 6—Algorithm 2 gives a classifier with higher precision and higher recall. As for the Adult dataset, we found that the active method needed half as many samples as passive sampling to achieve a given precision.
261
+
262
+ ![](images/837d95cd7363bca46dbc24bdea50507b18209b32f68335aa52a8a0d1c9fbfba2.jpg)
263
+ Figure 6: TPR vs FDR for active (Algorithm 2) and passive sampling; ticks report number of samples. Precision is 1 - FDR, recall is TPR. Precision and recall are averaged over 25 trials
264
+
265
+ Half circle dataset. We consider a two-dimensional half circle dataset, visualized in Figure 7. We report in Figures 8 and 9 the precision and (respectively) the recall obtained when varying the number of labels given to each method. The confidence intervals are obtained over 25 repetitions. We observe that Algorithm 2 allows us to provide a classifier satisfying a given recall or precision in far fewer queries. This is in line with the results of [16] on one-dimensional thresholds, where the sample complexity of the active strategy is $\mathcal{O}(\log n)$ while the sample complexity of the passive strategy is at least of order $n$ .
266
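The one-dimensional threshold case is easy to illustrate: for a monotone labeling, an active learner can binary-search for the threshold with $O(\log n)$ label queries, whereas passive sampling needs on the order of $n$. A small illustrative sketch (ours, not code from [16]):

```python
def active_threshold(labels):
    """Binary-search for the threshold in a monotone 0/1 label sequence.

    labels is a sequence of the form 0...01...1. Returns (index of the
    first 1, number of label queries made). Uses O(log n) queries,
    versus the O(n) a passive left-to-right scan would need.
    """
    lo, hi, queries = 0, len(labels), 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1          # one label query per probed point
        if labels[mid] == 1:
            hi = mid
        else:
            lo = mid + 1
    return lo, queries
```

For example, on a sequence of length 128 the search terminates after at most 7 queries.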
+
+ ![](images/c52e63b10a3a3f30d73abbe6eec29b53d786a548aecf8d9cbc834e6a584d4523.jpg)
+ Figure 7: Half circle dataset.
+
+ ![](images/ec2f4e80d1d618d1adc3c703ec6ac6d54053e03b5d1d8bbb3fa84ef3e77d33af.jpg)
+ Figure 8: Precision
+
+ ![](images/152796024c6e1cb123fa500e4fd33a2df1fa2b04f2627e51ededd0a031f52a3e.jpg)
+ Figure 9: Recall
+
+ # 4 Related work
+
+ Constrained Bandits. A growing body of work seeks to address the question of safe learning in interactive environments. In particular, the majority of such works have considered the problem of regret minimization in linear bandits with linear safety constraints. Here, the goal is to maximize online reward, $x_{t}^{\top}\theta_{*}$, by choosing actions $x_{t} \in \mathcal{X} \subseteq \mathbb{R}^{d}$, while ensuring a safety constraint of the form $x_{t}^{\top}\mu_{*} \leq \gamma$ is met at all times (either in expectation or with high probability). A variety of algorithms have been proposed, including UCB-style [2, 23, 32] and Thompson Sampling [30, 31] approaches. While these works show that $\sqrt{T}$ regret is attainable, they only provide worst-case bounds (while we obtain instance-dependent bounds) and do not study the pure-exploration best-arm identification problem. To our knowledge, the only work to offer instance-dependent guarantees is [9], yet they focus exclusively on the regret setting and offer a relatively coarse notion of instance-dependence (analogous to $\mathcal{O}(d \cdot \mathrm{poly}\log T / \Delta_{\min})$ bounds in the unconstrained linear bandit setting), in contrast to the more fine-grained notion of instance-dependence we provide.
+
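If $\theta_*$ and $\mu_*$ were known, the target of the identification problem would simply be the reward-maximizing arm among those satisfying the constraint. A minimal sketch over a finite arm set (illustrative only; the entire difficulty in the bandit setting is that $\theta_*$ and $\mu_*$ must be estimated from noisy samples):

```python
import numpy as np

def best_safe_arm(X, theta, mu, gamma):
    """Arm maximizing reward x^T theta among arms with x^T mu <= gamma.

    X: (n, d) array whose rows are the arms. Returns the index of the
    best safe arm, or None if no arm satisfies the constraint.
    """
    rewards = X @ theta
    safe = (X @ mu) <= gamma
    if not safe.any():
        return None
    idx = np.where(safe)[0]
    return int(idx[np.argmax(rewards[idx])])
```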
+ To our knowledge, only a few existing works consider the question of best-arm identification with safety constraints [36, 37, 39, 28]. The most related to ours is [28], which focuses on the easier problem of safe best-arm identification with known rewards and unknown constraints. Since the reward is known, the main challenge in the setting of [28] is to learn the constraints via G-optimal designs. The key and novel challenge of our framework is to carefully balance between G and $\mathcal{X}\mathcal{Y}$ designs: naively spending enough budget to either learn the reward model (via an $\mathcal{X}\mathcal{Y}$ design) or to learn the safety constraints (via a G design) will fail catastrophically (see Example 2.1 and Proposition 1). [36, 37] consider a general constrained optimization setting where the goal of the learner is to minimize some function $f(x)$ over a domain $x\in \mathcal{D}$, while only having access to noisy samples of $f(x)$, $f(x_{t}) + w_{t}$, and guaranteeing that a safety constraint $g(x_{t})\geq h$ is met for every query point $x_{t}$.
+
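For context, a G-optimal design minimizes the worst-case prediction variance $\max_{x\in\mathcal{X}} x^\top A(\lambda)^{-1} x$, where $A(\lambda)=\sum_i \lambda_i x_i x_i^\top$. The sketch below is a standard Frank-Wolfe (Fedorov-Wynn) approximation of such a design; it is a generic illustration, not the BESIDE implementation, and it assumes the arms span $\mathbb{R}^d$:

```python
import numpy as np

def g_optimal_design(X, iters=1000, tol=1e-6):
    """Frank-Wolfe approximation of the G-optimal design over arms X.

    X: (n, d) arm matrix whose rows span R^d. Returns weights lam
    (summing to 1) approximately minimizing max_x x^T A(lam)^{-1} x with
    A(lam) = sum_i lam_i x_i x_i^T. By the Kiefer-Wolfowitz equivalence
    theorem the optimal value equals d.
    """
    n, d = X.shape
    lam = np.full(n, 1.0 / n)
    for _ in range(iters):
        A = X.T @ (X * lam[:, None])
        Ainv = np.linalg.inv(A)
        g = np.einsum('ij,jk,ik->i', X, Ainv, X)  # x^T A^{-1} x per arm
        i = int(np.argmax(g))
        if g[i] <= d + tol:                  # optimality certificate
            break
        step = (g[i] / d - 1.0) / (g[i] - 1.0)  # closed-form line search
        lam *= (1.0 - step)
        lam[i] += step
    return lam
```

For two orthonormal arms in the plane the optimal design is uniform, and the worst-case variance converges to $d$.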
+ While they do provide a sample complexity upper bound, they give no lower bound and, as shown in [39], their approach can be very suboptimal. [39] considers the setting of best-arm identification in multi-armed bandits. In their setting, at every step $t$ they query a value $a_{t} \in \mathcal{A}$ for a particular coordinate $i_{t}$, and their goal is to identify the coordinate $i^{*}$ such that $a_{i^{*}}^{*}\theta_{i^{*}} \geq \max_{i}a_{i}^{*}\theta_{i}$, where $a_{i}^{*}$ is the largest value respecting the safety constraint: $a_{i}^{*} = \arg \max_{a\in \mathcal{A}}a\theta_{i}$ s.t. $a\mu_i \leq \gamma$. Similar to [36, 37], they require that the safety constraint $a_{t}\mu_{i_{t}} \leq \gamma$ be met while learning. Though they do show matching upper and lower bounds, and in addition consider a slightly more general setting that allows for nonlinear (but monotonic) response functions, they treat every coordinate as independent and do not allow for information sharing between coordinates, which is the key generalization that the linear bandit setting targets. We remark as well that in our setting, unlike these works, we allow the learner to query unsafe points during exploration, and only require that a safe decision be output at termination.
+
+ Best-Arm Identification in Linear Bandits. The best-arm identification problem in multi-armed bandits (without safety constraints) is a classical and well-studied problem [3, 33, 12, 5], and near-optimal algorithms exist [18, 22]. More recently, there has been growing interest in understanding the sample complexity of best-arm identification in linear bandits [35, 19, 40, 13, 20, 11]. We highlight in particular the work of [13], which proposes an experiment-design-based algorithm, RAGE, from which our approach takes inspiration. While much progress has been made in understanding best-arm identification in linear bandits, to our knowledge no existing works consider best-arm identification in linear bandits with safety constraints, the setting of this work.
+
+ Active Classification under FDR Constraints. We finally mention one other related body of work: the problem of actively sampling to find a classifier with high accuracy or recall under precision constraints. Motivated by the experimental design approach of our main algorithm, BESIDE, we provide a heuristic algorithm for this problem with good empirical performance in Section 3.1. There is an extensive body of work on active learning (see the survey [14]), but only recently have works made the connection between best-arm identification for linear bandits and classification [21, 16, 7]. Precision constraints have been less studied in the adaptive setting; we are only aware of [16, 4].
+
+ # 5 Conclusion
+
+ In this work we have shown that it is possible to efficiently find the best safe arm in linear bandits with a carefully designed adaptive experiment-design-based approach. Our results open up several interesting directions for future work.
+
+ Instance Optimality. While BESIDE is worst-case optimal, in Appendix A we show an instance-dependent lower bound which BESIDE does not, in general, appear to match. We conjecture that this lower bound may be loose; addressing this discrepancy and showing matching instance-dependent upper and lower bounds is an exciting direction for future work.
+
+ Safety During Exploration. Though there are many interesting applications where we may not require safety during exploration (i.e., only querying safe arms), in other cases we may need to ensure safety is met during exploration. Extending our work to this setting is an interesting open problem.
+
+ Potential Impacts. As with any algorithm making stochastic assumptions, if the assumptions are not met we cannot guarantee performance. In particular, if the underlying environment is changing (i.e., the constraints vary over time), the algorithm could exhibit unexpected behavior with unintended consequences. Such a situation could lead to harmful results in settings such as the online advertising bidding example from the introduction. To mitigate this limitation, practitioners are encouraged to monitor many metrics, both short- and long-term.
+
+ # Acknowledgements
+
+ The work of AW was supported by an NSF GRFP Fellowship, DGE-1762114. The work of JM was supported by an NSF CAREER award, the NSF AI Institute (IFML), and the Simons collaborative grant on the foundations of fairness. The work of KJ was funded in part by the AFRL and NSF TRIPODS 2023166.
+
+ # References
+
+ [1] Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification, 2018.
+ [2] Sanae Amani, Mahnoosh Alizadeh, and Christos Thrampoulidis. Linear stochastic bandits under safety constraints. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Edward A. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 9252-9262, 2019. URL http://papers.nips.cc/paper/9124-linear-stochastic-bandits-under-safety-constraints.
+ [3] Robert E Bechhofer. A sequential multiple-decision procedure for selecting the best one of several normal populations with a common unknown variance, and its use with various experimental designs. Biometrics, 14(3):408-429, 1958.
+ [4] Paul N Bennett, David M Chickering, Christopher Meek, and Xiaojin Zhu. Algorithms for active classifier selection: Maximizing recall with precision constraints. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 711-719, 2017.
+ [5] Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in multi-armed bandits problems. In International Conference on Algorithmic Learning Theory, pages 23-37. Springer, 2009.
+ [6] Romain Camilleri, Julian Katz-Samuels, and Kevin Jamieson. High-dimensional experimental design and kernel bandits, 2021.
+ [7] Romain Camilleri, Zhihan Xiong, Maryam Fazel, Lalit Jain, and Kevin Jamieson. Selective sampling for online best-arm identification, 2021.
+ [8] Olivier Catoni. Challenging the empirical mean and empirical variance: a deviation study. In Annales de l'IHP Probabilités et statistiques, volume 48, pages 1148-1185, 2012.
+ [9] Tianrui Chen, Aditya Gangrade, and Venkatesh Saligrama. A doubly optimistic strategy for safe linear bandits, 2022. URL https://arxiv.org/abs/2209.13694.
+ [10] Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You. Training well-generalizing classifiers for fairness metrics and other data-dependent constraints, 2018. URL https://arxiv.org/abs/1807.00028.
+ [11] Rémy Degenne, Pierre Ménard, Xuedong Shang, and Michal Valko. Gamification of pure exploration for linear bandits. In International Conference on Machine Learning, pages 2432-2442. PMLR, 2020.
+ [12] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In International Conference on Computational Learning Theory, pages 255-270. Springer, 2002.
+ [13] Tanner Fiez, Lalit Jain, Kevin Jamieson, and Lillian Ratliff. Sequential experimental design for transductive linear bandits, 2019.
+ [14] Steve Hanneke. Theory of active learning. Foundations and Trends in Machine Learning, 7(2-3), 2014.
+ [15] Elad Hazan and Satyen Kale. Projection-free online learning. arXiv preprint arXiv:1206.4657, 2012.
+ [16] Lalit Jain and Kevin Jamieson. A new perspective on pool-based active classification and false-discovery control, 2020.
+ [17] Kevin Jamieson and Lalit Jain. Interactive machine learning. 2022.
+ [18] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil'UCB: An optimal exploration algorithm for multi-armed bandits. In Conference on Learning Theory, pages 423-439. PMLR, 2014.
+ [19] Zohar S Karnin. Verification based solution for structured MAB problems. Advances in Neural Information Processing Systems, 29, 2016.
+ [20] Julian Katz-Samuels, Lalit Jain, Kevin G Jamieson, et al. An empirical process approach to the union bound: Practical algorithms for combinatorial and linear bandits. Advances in Neural Information Processing Systems, 33:10371-10382, 2020.
+ [21] Julian Katz-Samuels, Jifan Zhang, Lalit Jain, and Kevin Jamieson. Improved algorithms for agnostic pool-based active classification, 2021. URL https://arxiv.org/abs/2105.06499.
+ [22] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best arm identification in multi-armed bandit models, 2016.
+ [23] Abbas Kazerouni, Mohammad Ghavamzadeh, Yasin Abbasi-Yadkori, and Benjamin Van Roy. Conservative contextual linear bandits. Advances in Neural Information Processing Systems, 30, 2017.
+ [24] E. Keogh, C. Blake, and C. J. Merz. UCI repository of machine learning databases, 1998. URL http://archive.ics.uci.edu/ml.
+ [25] Branislav Kveton, Manzil Zaheer, Csaba Szepesvari, Lihong Li, Mohammad Ghavamzadeh, and Craig Boutilier. Randomized exploration in generalized linear bandits, 2019. URL https://arxiv.org/abs/1906.08947.
+ [26] Tor Lattimore and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
+ [27] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
+ [28] David Lindner, Sebastian Tschiatschek, Katja Hofmann, and Andreas Krause. Interactively learning preference constraints in linear bandits, 2022. URL https://arxiv.org/abs/2206.05255.
+ [29] Blake Mason, Romain Camilleri, Subhojyoti Mukherjee, Kevin Jamieson, Robert Nowak, and Lalit Jain. Nearly optimal algorithms for level set estimation. arXiv preprint arXiv:2111.01768, 2021.
+ [30] Ahmadreza Moradipari, Sanae Amani, Mahnoosh Alizadeh, and Christos Thrampoulidis. Safe linear Thompson sampling with side information, 2019. URL https://arxiv.org/abs/1911.02156.
+ [31] Ahmadreza Moradipari, Christos Thrampoulidis, and Mahnoosh Alizadeh. Stage-wise conservative linear bandits, 2020. doi: 10.48550/ARXIV.2010.00081. URL https://arxiv.org/abs/2010.00081.
+ [32] Aldo Pacchiano, Mohammad Ghavamzadeh, Peter Bartlett, and Heinrich Jiang. Stochastic bandits with linear constraints, 2020. URL https://arxiv.org/abs/2006.10185.
+ [33] Edward Paulson. A sequential procedure for selecting the population with the largest mean from $k$ normal populations. The Annals of Mathematical Statistics, pages 174-180, 1964.
+ [34] Max Simchowitz, Kevin Jamieson, and Benjamin Recht. The simulator: Understanding adaptive sampling in the moderate-confidence regime. In Conference on Learning Theory, pages 1794-1834. PMLR, 2017.
+ [35] Marta Soare, Alessandro Lazaric, and Rémi Munos. Best-arm identification in linear bandits. Advances in Neural Information Processing Systems, 27:828-836, 2014.
+ [36] Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with Gaussian processes. In International Conference on Machine Learning, pages 997-1005. PMLR, 2015.
+ [37] Yanan Sui, Joel Burdick, Yisong Yue, et al. Stagewise safe Bayesian optimization with Gaussian processes. In International Conference on Machine Learning, pages 4781-4789. PMLR, 2018.
+ [38] Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S Du, and Kevin Jamieson. Reward-free RL is no harder than reward-aware RL in linear Markov decision processes. arXiv preprint arXiv:2201.11206, 2022.
+ [39] Zhenlin Wang, Andrew Wagenmaker, and Kevin Jamieson. Best arm identification with safety constraints. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
+ [40] Liyuan Xu, Junya Honda, and Masashi Sugiyama. A fully adaptive algorithm for pure exploration in linear bandits. In International Conference on Artificial Intelligence and Statistics, pages 843-851. PMLR, 2018.
+
+ # Checklist
+
+ 1. For all authors...
+
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
+ (b) Did you describe the limitations of your work? [Yes] See Conclusion.
+ (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Conclusion.
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
+
+ 2. If you are including theoretical results...
+
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes]
+ (b) Did you include complete proofs of all theoretical results? [Yes] Refer to Appendix.
+
+ 3. If you ran experiments...
+
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Refer to Appendix.
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Refer to Appendix.
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix.
+
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
+
+ (a) If your work uses existing assets, did you cite the creators? [Yes]
+ (b) Did you mention the license of the assets? [N/A]
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
+
+ 5. If you used crowdsourcing or conducted research with human subjects...
+
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
activelearningwithsafetyconstraints/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc1039ce6fd8aefa06476ef4b1c1e600e01a00051a64c51d00891666f452fc99
+ size 247607
activelearningwithsafetyconstraints/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07f3e459414cfd5f5c9cfba2655669c6448c8111720c420546f253ffd51e0db3
+ size 654278
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:564a1da6da53cb4467edaf1605538ca6dc2ab7a37bce05948fe3d65d6218a06b
+ size 81552
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:017cff7f9982677f3c802974677b21f3c5c24a2ccc366959608eb05795715a53
+ size 99841
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/24020f84-ae3b-43a2-8fed-9595ccc6eb74_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39c234ba8181e7cdf5a434503e8deca84c6724e63610aa9962d27f48daff5d30
+ size 16380537
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/full.md ADDED
@@ -0,0 +1,299 @@
+ # Active-Passive SimStereo – Benchmarking the Cross-Generalization Capabilities of Deep Learning-based Stereo Methods
+
+ Laurent Jospin<sup>1*</sup> Allen Antony<sup>1</sup> Lian Xu<sup>1</sup> Hamid Laga<sup>2</sup> Farid Boussaid<sup>1</sup> Mohammed Bennamoun<sup>1</sup>
+
+ <sup>1</sup>University of Western Australia <sup>2</sup>Murdoch University
+
+ {laurent.jospin,lian.xu,farid.boussaid,mohammed.bennamoun}@uwa.edu.au H.Laga@murdoch.edu.au
+
+ # Abstract
+
+ In stereo vision, self-similar or bland regions can make it difficult to match patches between two images. Active stereo-based methods mitigate this problem by projecting a pseudo-random pattern on the scene so that each patch of an image pair can be identified without ambiguity. However, the projected pattern significantly alters the appearance of the image. If this pattern acts as a form of adversarial noise, it could negatively impact the performance of deep learning-based methods, which are now the de-facto standard for dense stereo vision. In this paper, we propose the Active-Passive SimStereo dataset and a corresponding benchmark to evaluate the performance gap between passive and active stereo images for stereo matching algorithms. Using the proposed benchmark and an additional ablation study, we show that the feature extraction and matching modules of twenty selected deep learning-based stereo matching methods generalize to active stereo without a problem. However, the disparity refinement modules of three of the twenty architectures (ACVNet, CascadeStereo, and StereoNet) are negatively affected by the active stereo patterns due to their reliance on the appearance of the input images.
+
+ # 1 Introduction
+
+ Stereo vision is used by many artificial or natural vision systems to acquire depth information from a pair of 2D projective views of the 3D world. In the context of computer vision, stereo matching operates in a multi-step pipeline (Fig. 2) composed of: (i) feature volume construction from the left and right views, (ii) cost volume computation, which may be coupled with a regularization module, (iii) disparity extraction from the cost volume, which is done using the argmin function, and (iv) disparity refinement, which may also use the cost volume and/or the image features as additional cues. The central step in this pipeline is the construction of the cost volume, a function $C(x,y,d)$ that measures how unlikely a pixel with spatial coordinates $(x,y)$ is to be assigned a disparity value $d$. Textureless and repetitive patterns in images can produce flat or periodic cost curves in the cost volume, leading to erroneous disparity maps in passive stereo systems, where only a pair of cameras is used. To address this issue, active stereo-based methods [13] project a pseudo-random light pattern on the scene to remove the textureless or self-similar areas in the stereo images (Fig. 1). Active stereo is now a critical component of many applications, such as augmented reality [22] and robotics [3], and is also found in consumer electronics devices such as smartphones [26].
+
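Steps (ii) and (iii) of the pipeline can be illustrated with a minimal per-pixel cost volume and winner-take-all disparity extraction. This is a toy sketch (not any of the benchmarked methods); real pipelines aggregate costs over patches and regularize the volume before the argmin:

```python
import numpy as np

def disparity_wta(left, right, max_disp):
    """Winner-take-all disparity from an absolute-difference cost volume.

    left, right: rectified grayscale images as 2-D float arrays.
    Builds C(y, x, d) = |left(y, x) - right(y, x - d)| and extracts the
    disparity map with an argmin over d. Pixels with x < d, for which
    the shifted match falls outside the image, keep an infinite cost.
    """
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, : w - d])
    return np.argmin(cost, axis=2)
```

On a textured pair generated by shifting one image, the argmin recovers the shift; on textureless regions the cost curve is flat, which is exactly the ambiguity the projected pattern removes.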
+ Traditional stereo matching pipelines rely solely on closed-form formulations [29]. However, in recent years, learning-based methods have led to a series of breakthroughs in the field. Early learning-based methods focused on replacing one or more blocks in the traditional pipeline with a deep neural network. The latest methods, however, address the problem in an end-to-end fashion; see [15] for a detailed survey. Due to the lack of public active stereo datasets and the fact that passive stereo was perceived as more challenging, most of these models have been trained for the passive stereo problem. An important property of the closed-form formulae used in traditional stereo matching methods is that their performance monotonically increases when non-self-similar texture is added to the scene. This key feature is at the core of active stereo systems [21]. If one can determine that the latest deep learning methods can also leverage active pseudo-random noise to improve their predictions, this would show that these methods are indeed learning to match similar regions of the images rather than fitting some bias in the data. Additionally, it provides some insight into the models' generalization ability, which is important for their safe deployment in their intended application (e.g., autonomous driving).
+
+ ![](images/6b9bae1683157aaa4710d07ca35805fe80cdcafaba1a717215363a56008fb6e2.jpg)
+ (a)
+
+ ![](images/7349c85c1591a008e69c2f375e4ca0b3079ab23e790503a64c1d947e2c15c255.jpg)
+ (b)
+
+ ![](images/934fcdd1d1d5b39e183ec4918cb1ac552f8743c525ac472b8b06d11894aae20f.jpg)
+ (c)
+
+ ![](images/d5feec740f0d84d55b157339281eaae98319c3e908c80616baf43e456a379df5.jpg)
+ Figure 1: A sample from our dataset, with realistic (a) passive and (b) active stereo images along with (c) their corresponding perfect ground-truth disparities. The proposed dataset allows the comparison of the relative performance of stereo vision methods when used for passive or active stereo matching.
+ Figure 2: The typical stereo matching pipeline.
+
+ Under ideal conditions, deep learning-based methods are expected to behave similarly to their non-learning counterparts and exhibit improved performance when additional pseudo-random texture is added to the scene. Yet, many large-scale deep learning models see their performance degrade when used on datasets that are only slightly different from their original training datasets [43]. They often require an adaptation procedure to generalize to new unseen domains [33]. Furthermore, they can be severely affected by even small amounts of adversarial noise under certain circumstances [32], as they are prone to overfitting on small biases present in their training data [23]. However, these flaws are not universal. For example, it has already been shown that once simulated images are close enough to real images, deep learning stereo systems generalize without issues [34]. Also, unlike adversarial noise, the pseudo-random patterns used in active stereo have not been learned specifically to cause failure for deep learning models. This means that existing deep learning methods might generalize to the active stereo domain without any form of fine-tuning.
+
+ In this work, we investigate how different state-of-the-art deep learning-based stereo matching architectures are impacted when presented with active, instead of passive, stereo images. To make the evaluation of the generalization ability of stereo vision models easier, we propose Active-Passive SimStereo, a novel dedicated dataset composed of computer-generated images rendered using a physically-based rendering engine. The proposed dataset provides both active and passive frames for each given scene. This allows us to evaluate and compare the performance of each algorithm on active and passive stereo using exactly the same scenes. The dataset is publicly available at https://dx.doi.org/10.21227/gf1e-t452.
+
+ The remaining parts of the paper are organized as follows. Section 2 reviews the related work. Section 3 describes the proposed dataset. Section 4 presents the proposed benchmark used for evaluation. Section 5 presents and discusses the results of existing methods. Finally, Section 8 concludes the paper.
+
+ # 2 Related Work
+
+ Many datasets and benchmarks have been proposed for passive stereo vision, including the popular Middlebury dataset [29, 11], whose latest version uses a precise but expensive reconstruction pipeline to acquire the ground truth [30]. The corresponding Middlebury Stereo Evaluation benchmark is widely used to evaluate stereo vision algorithms. Due to the challenges associated with 3D ground-truth acquisition, the aforementioned dataset only contains a small amount of labelled data, which is not sufficient to train large-scale deep architectures. Subsequently, the Scene Flow datasets were proposed [19]. They contain a large number of simulated image pairs with ground-truth optical flows and disparities generated from open-source motion graphics short movies or randomized virtual 3D objects. However, the appearance of these simulated scenes is not realistic. Thus, most deep learning-based models for stereo vision need to be fine-tuned after being trained on the Scene Flow datasets. The UnrealStereo4K simulated dataset [34] was later proposed to provide higher-resolution and more realistic images, taken from video game scenes.
+
+ One of the most popular applications of stereo vision is autonomous driving, since vision-based systems offer a cost-effective alternative or complement to LIDAR-based systems for depth measurement. Thus, many datasets and benchmarks have been specifically developed for this application. Examples include the KITTI Vision suite [20, 7], which is currently the most popular stereo vision benchmark for autonomous driving, DrivingStereo [40], which is a large dataset commonly used for training rather than evaluation, and ApolloScape [12], which provides a benchmark suite for different challenges related to autonomous driving, including stereo vision. The ground truth of these datasets was obtained using a LIDAR-based system. Occasionally, a recognition system was also used to detect and categorize cars in images before aligning a CAD model onto the LIDAR depth map [20, 12]. The inherent noise associated with these various processing steps implies that the ground truth cannot be trusted for very precise reconstructions. However, given that autonomous driving scenarios do accommodate a disparity error of one or two pixels, this is not a problem for the intended use of those datasets.
+
+ For active stereo vision, there are far fewer public datasets, none of which has become popular for training or evaluating deep learning-based stereo matching methods. The few end-to-end methods trained for active stereo use soft labels, i.e., labels with associated uncertainty, such as the depth generated by stereo cameras [44, 41]. Other methods also used self-supervision, e.g., by using the information conserved when compressing a given image patch as a supervisory signal [28]. Simulation techniques have also been proposed to generate semi-realistic images from CAD models [25] based on screen-space projection of texture. This approach has been used on multiple occasions [28, 44], but none of the produced datasets has been made public. In this work, we use a similar approach but with a physically-based rendering pipeline to improve the realism of the scenes, making our dataset more suitable for evaluation.
+
+ Datasets providing images for both active and passive stereo matching are even scarcer. To the best of our knowledge, only UnrealStereo4K [34] has monocular active frames for a subset of its images, but this part of the dataset has not been made publicly available. Furthermore, monocular active depth estimation is a slightly different problem from active stereo vision [27], as the matching is performed between a pattern and an image, rather than between two images with a projected pattern. Thus, no public dataset is available for evaluating stereo models on active stereo images or for evaluating the generalization capabilities of these models.
+
+ # 3 The Active-Passive SimStereo Dataset
+
54
+ Simulation offers both benefits and challenges for dataset creation. The size of real stereo datasets with high quality ground truth like Middlebury 2014 is limited because of the complex setups and amount of work needed [30]. On the other hand, automated pipelines like the ones used for Kitti [20] are noisy. The Kitti benchmark is, therefore, limited to $BAD_{N}$ metrics (see Section 4) with large $N$ and is not suitable for subpixel accuracy comparisons. Simulation on the other hand makes labelling cheaper and noiseless. It also allows to generate the exact same image twice, once for
55
+
56
![](images/2cb04772f91b4fded0a99969ebac462b04310ead1ed1e965a56ba1cf4e9b74dc.jpg)
(a) Sceneflow [19]

![](images/b20cb7956d15f0f018c2380e52c7d75637fde2a4d86dd03fdc87c88f0036e861.jpg)
(b) UnrealStereo4K [34]
Figure 3: Illustration of the benefits of a physically-based rendering pipeline. In our images (c), (d), indirect lighting creates soft and realistic shadows and specularities, which is not the case with existing simulated datasets (a), (b).
![](images/58adefd06fee15c4ce05490eb34aa6cb970fbbad8834262995ea170d4d7949fd.jpg)
(c) Ours

![](images/109e9f7ca01a01e4e6835edf05932f656474e6ba43bb7dca8a22b1fb98c72957.jpg)
(d) Ours
active stereo and once for passive stereo. This greatly reduces biases when measuring the generalization ability of a given model, since the geometry does not change between frames. However, the simulation procedure may introduce a domain gap with respect to real images. In computer vision, simulated data can be obtained using computer-generated imagery (CGI). In recent years, CGI software has made tremendous advances: using advanced CGI techniques, it has become possible to generate synthetic scenes that are nearly indistinguishable from real images and that closely match the acquisition process of real cameras [8]. This is achieved with physically-based rendering (PBR), i.e., a rendering engine simulating the physical behaviour of light [24]. Figure 3 compares images from our dataset with images from other simulated datasets [34, 19], which used real-time rendering instead of PBR. The most visible benefit of PBR is more accurate indirect lighting, which creates softer and more accurate shadows; better rendering of non-Lambertian materials [24] is another benefit. This significantly reduces the domain gap between real and simulated data once a synthetic scene is created. For this paper, we purchased a series of high-quality, realistic 3D assets to create realistic scenes for our dataset. Our dataset is not intended for training but for performance evaluation and fine-tuning, so we prioritized quality and diversity over quantity. To increase the number and diversity of scenes, we also generated images containing procedurally-generated shapes and images with abstract objects.

# 3.1 Simulation Procedure
For each 3D scene, we designed two lighting setups. The first corresponds to a passive stereo acquisition scenario without any pseudo-random pattern, while the second corresponds to an active stereo acquisition scenario with a collection of lights projecting a pseudo-random pattern (see Figure 4). We used the Cycles path-tracing engine integrated into Blender [4] for rendering, with standard shaders for the non-textured light sources. For the pseudo-random pattern projectors, we used a custom-programmed shader generating a pattern resembling the one produced by RealSense cameras. The light intensity $I$ is a function of the incoming direction $\boldsymbol{d}$:
$$
I(\boldsymbol{d}) = p \left( \left( 1 - W_{s_1}(\boldsymbol{d} + \boldsymbol{t})^2 \right)^{p_1} + c \right) \left( 1 - W_{s_2}(\boldsymbol{d} + \boldsymbol{t})^2 \right)^{p_2}. \tag{1}
$$
Here, $W_{s_1}$ and $W_{s_2}$ are Worley noise patterns [36]; $s_1$ and $s_2$ are the scale factors of their respective patterns, with $s_1 < s_2$; $p_1$ and $p_2$ are two light intensity correction factors with $p_1 \ll p_2$; $c$ is the minimal power of the $W_{s_2}$ pattern; $t$ is a random translation of the texture space, used to generate a different pattern for each light; and $p$ is the power gain of the lamp, expressed in Watts.
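As a rough illustration, Eq. (1) can be sketched in Python. This is a simplification of the projector shader, not the shader actually used in Blender: the `worley_noise` helper, the number of feature points, the normalization, and all default constants are our own assumptions, and the random translation $t$ of Eq. (1) is omitted.

```python
import numpy as np

def worley_noise(shape, scale, n_points=32, seed=0):
    """Minimal 2-D Worley (cellular) noise: normalized distance from each
    pixel to the nearest of `n_points` random feature points, with the
    texture tiled `scale` times across the image. Returns values in [0, 1]."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_points, 2))
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    uv = np.stack([ys / shape[0], xs / shape[1]], axis=-1) * scale % 1.0
    dist = np.linalg.norm(uv[..., None, :] - pts, axis=-1).min(axis=-1)
    return dist / dist.max()

def pattern_intensity(shape, p=1.0, p1=0.5, p2=8.0, c=0.05, s1=4, s2=16):
    """Eq. (1): a coarse Worley mask (scale s1) with floor c, modulated by a
    fine Worley mask (scale s2), with p1 << p2 as stated in the text."""
    w1 = worley_noise(shape, s1, seed=1)
    w2 = worley_noise(shape, s2, seed=2)
    return p * ((1.0 - w1 ** 2) ** p1 + c) * (1.0 - w2 ** 2) ** p2
```

The floor $c$ guarantees that the fine dot pattern is projected everywhere with at least power $p \cdot c$, even where the coarse mask vanishes.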
The ground truth is then extracted from the depth pass $z$ generated by the rendering software, which measures the distance between the visible point in the 3D scene and the optical center of the camera. The ground-truth disparity $\check{d}$ can be computed as:
$$
\check{d} = \frac{B f}{z}, \tag{2}
$$
where $B$ is the stereo camera baseline and $f$ is the camera focal length in pixels. We used a baseline of $0.16\,\mathrm{m}$ (where "m" refers to Blender's internal unit rather than real-world metres) and cameras with a focal length of $48.61\,\mathrm{mm}$, which amounts to $888.89$ pixels at the standard resolution for a $35\,\mathrm{mm}$-film-equivalent sensor.
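For concreteness, the conversion of Eq. (2) with the baseline and focal length above can be written as follows (a sketch: the function name and array handling are ours, not the released post-processing code):

```python
import numpy as np

# Values from the paper: a baseline of 0.16 Blender units and a focal length
# of 888.89 px (48.61 mm scaled to a 640 px-wide, 35 mm-equivalent sensor:
# 48.61 * 640 / 35 ~= 888.89).
BASELINE = 0.16
FOCAL_PX = 888.89

def depth_to_disparity(z, baseline=BASELINE, focal_px=FOCAL_PX):
    """Eq. (2): ground-truth disparity from the rendered depth pass z."""
    return baseline * focal_px / np.asarray(z, dtype=np.float64)
```

A point at a depth of 1.0 Blender unit thus maps to a disparity of $0.16 \times 888.89 \approx 142.22$ px.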
![](images/c9ab025437d4c88e23b2b7a0c1892009347cc0ac0fe10aa4f8359e4e067ce009.jpg)
(a)

![](images/1356a2f4f357e1f0bff664fc34affa2acee000bf33b6b1b931e164fe87d8d98e.jpg)
(b)

![](images/8fbc3824a0181ef7c46275cf111b38136915ddd6c2d2478e7092246a344393b3.jpg)
Figure 4: The two different light setups used to simulate (a) passive and (b) active stereo acquisition in a given 3D scene.
Figure 5: Sample images from the proposed test set.
# 3.2 The Dataset

The proposed dataset contains 515 image pairs, split into a training set (80% of the images) and a test set (20%) used for benchmarking. The test set contains 103 image pairs (Fig. 5) covering different shapes (e.g., large flat surfaces such as floors, or small areas with depth discontinuities such as plant leaves), depth ranges, and styles (i.e., realistic scenes or abstract compositions). The remaining 412 image pairs constitute the training set. We use a standard resolution ($640 \times 480$ pixels) for the benchmark, as it matches or approaches the resolution of most stereo cameras and guarantees that the images can be processed by most methods, even on memory-constrained hardware. Despite the small number of images, the test set is large enough to evaluate deep learning methods; detailed experiments demonstrating this are provided in the Supplementary Material. Our dataset is also large enough for fine-tuning (see Section 7).
In addition to the benchmark, we provide the simulation Blender file with the specific shader, as well as the Python code used to post-process the images.
Table 1: Comparison of the resolution and disparity ranges of different datasets.

<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Generation method</td><td rowspan="2"># image pairs</td><td rowspan="2">Resolution [px]</td><td colspan="3">Disparity [px]</td></tr><tr><td>Min</td><td>Mean</td><td>Max</td></tr><tr><td>Ours (train)</td><td>Ray tracing rendering</td><td>412</td><td>640 × 480</td><td>0.00</td><td>21.77</td><td>212.67</td></tr><tr><td>Ours (test)</td><td>Ray tracing rendering</td><td>103</td><td>640 × 480</td><td>0.00</td><td>25.12</td><td>129.67</td></tr><tr><td>Middlebury 2014 [30]</td><td>Large baseline active stereo</td><td>33</td><td>2850 × 1900<sup>1</sup></td><td>28.94</td><td>148.36</td><td>695.61</td></tr><tr><td>KITTI 2012 [7]</td><td>Real images + Laser scanner</td><td>389</td><td>1240 × 380<sup>1</sup></td><td>4.11</td><td>38.32</td><td>227.99</td></tr><tr><td>KITTI 2015 [20]</td><td>Real images + Laser scanner + 3D CAD object alignment</td><td>400</td><td>1240 × 380<sup>1</sup></td><td>4.46</td><td>33.62</td><td>229.96</td></tr><tr><td>Sceneflow [19]</td><td>Screen space rasterization</td><td>39049</td><td>960 × 540</td><td>1.12</td><td>39.87</td><td>940.75</td></tr><tr><td>UnrealStereo4k [34]</td><td>Screen space rasterization</td><td>7200</td><td>3840 × 2160</td><td>0.01</td><td>173.20</td><td>1515.60</td></tr></table>
<sup>1</sup> The images in these datasets have variable sizes; the values given here are approximate.
# 4 The Benchmark

We use five different scores to compare the generalization abilities of the different methods. The most important metric in stereo vision is $BAD_N$, which measures the proportion of pixels whose error exceeds a given threshold $N$. It is computed over a test set $T$ as:
$$
BAD_N = \frac{1}{\|T\|} \sum_{I \in T} \frac{\sum_{i=0}^{h} \sum_{j=0}^{w} \mathbf{1}_{|\Delta d_{i,j}| > N}}{h w}, \tag{3}
$$
where $h$ and $w$ are, respectively, the height and the width of the image $I$, $\mathbf{1}_A$ is the indicator function of $A$, and $\Delta d$ is the disparity error for the image $I$. We report results for $N \in \{0.5, 1, 2, 4\}$. Lower $BAD_N$ scores indicate better accuracy in reconstructing the disparity; in the rest of the paper, we indicate this by appending a down arrow $(\downarrow)$ to the name of these metrics.
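A direct NumPy transcription of Eq. (3) might look as follows (a sketch; the function names are ours):

```python
import numpy as np

def bad_n(pred_disp, gt_disp, n):
    """Per-image BAD_N: fraction of pixels whose absolute disparity error
    exceeds the threshold n (in pixels)."""
    err = np.abs(np.asarray(pred_disp, float) - np.asarray(gt_disp, float))
    return float((err > n).mean())

def bad_n_over_set(pairs, n):
    """Eq. (3): average of the per-image BAD_N over a test set of
    (predicted, ground-truth) disparity map pairs."""
    return float(np.mean([bad_n(pred, gt, n) for pred, gt in pairs]))
```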
We also use the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). The former is a good estimate of the expected amplitude of a method's error; the latter, especially when compared to the MAE, is a good indicator of the presence of outlier pixels with large errors. The RMSE and MAE scores over $T$ are computed as:
$$
RMSE_T = \frac{1}{\|T\|} \sum_{I \in T} \sqrt{\frac{\sum_{i=0}^{h} \sum_{j=0}^{w} \Delta d_{i,j}^2}{h w}}, \quad MAE_T = \frac{1}{\|T\|} \sum_{I \in T} \frac{\sum_{i=0}^{h} \sum_{j=0}^{w} |\Delta d_{i,j}|}{h w}. \tag{4}
$$
These scores are the average of the corresponding per-image metric over the test set. Lower MAE and RMSE values indicate better accuracy in reconstructing the disparity, which we likewise indicate by appending a down arrow $(\downarrow)$ to the name of these metrics.
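The two scores of Eq. (4) can be sketched the same way; note that both are computed per image first and then averaged, so every image contributes equally regardless of its resolution (function names are ours):

```python
import numpy as np

def mae(pred, gt):
    """Per-image mean absolute disparity error."""
    return float(np.abs(np.asarray(pred, float) - np.asarray(gt, float)).mean())

def rmse(pred, gt):
    """Per-image root mean square disparity error."""
    diff = np.asarray(pred, float) - np.asarray(gt, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def score_over_set(pairs, metric):
    """Eq. (4): per-image scores averaged over the test set."""
    return float(np.mean([metric(pred, gt) for pred, gt in pairs]))
```

Because the RMSE squares the residuals, a handful of large outliers inflates it well above the MAE, which is what makes the two scores informative when compared.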
A consequence of using the MAE and RMSE is that if a method performs poorly on a specific image, its overall score can be dominated by that single image. For an absolute performance evaluation benchmark, this is not a problem. However, it is an issue when evaluating the relative performance variation resulting from a domain change (e.g., from passive stereo to active stereo). To mitigate this issue, we measure the mean relative score variation across all images. Given a metric $M$, the relative score variation $R_M$ is computed as:
$$
R_M = \frac{1}{\|T\|} \sum_{I \in T} \frac{M_{I_P} - M_{I_A}}{M_{I_P}}, \tag{5}
$$
where $M_{I_P}$ and $M_{I_A}$ are the scores of the metric $M$ evaluated on the passive and active stereo results of an image $I$, respectively. In this paper, we focus on $R_{MAE}$ and $R_{BAD_2}$.
We also report the proportion $P_M$ of test images for which the active stereo results outperform their passive stereo counterparts in terms of the metric $M$. It is defined as:
$$
P_M = \frac{1}{\|T\|} \sum_{I \in T} \mathbf{1}_{M_{I_A} < M_{I_P}}. \tag{6}
$$
In this paper, we focus on $P_{MAE}$ and $P_{BAD_2}$; other variants of these metrics are reported in the supplementary material.
Higher $P$ and $R$ scores indicate better generalization from passive stereo to active stereo. In the rest of the paper, we indicate this by appending an up arrow $(\uparrow)$ to the name of these metrics.
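Given per-image scores of some error metric on the passive and active renditions of each test image, Eqs. (5) and (6) reduce to the following (a sketch; function names are ours):

```python
import numpy as np

def relative_variation(passive, active):
    """Eq. (5): mean per-image relative improvement of an error metric when
    switching from passive to active stereo (positive = active is better)."""
    mp, ma = np.asarray(passive, float), np.asarray(active, float)
    return float(((mp - ma) / mp).mean())

def proportion_improved(passive, active):
    """Eq. (6): fraction of test images where the active-stereo score beats
    the passive-stereo score."""
    mp, ma = np.asarray(passive, float), np.asarray(active, float)
    return float((ma < mp).mean())
```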
Finally, discontinuities in the depth image are an important factor when evaluating stereo reconstruction methods. A detailed discussion of the performance of existing methods around discontinuities is beyond the scope of this paper, which focuses on the generalization capabilities from passive to active stereo and vice versa. Nonetheless, the supplementary material includes versions of all the metrics presented above restricted either to the edge regions of the images or to their flat regions.
# 5 Results on Existing Methods

We used the proposed benchmark to evaluate state-of-the-art end-to-end deep neural networks for stereo matching. We considered 20 methods for which both the source code and pre-trained models are available; a selection of traditional, non-learning methods is evaluated in the supplementary material. The methods are listed in Table 2, along with the datasets used to train the models. We mainly used model weights trained on SceneFlow [19] and fine-tuned on KITTI 2015 [20], as this is the standard approach for deep stereo models [15]. For each method, we list the reported D1 score on the KITTI 2015 benchmark [20], i.e., the proportion of pixels whose disparity error exceeds both 3 px and 5% of the true disparity. We also report, when available, the $BAD_2$ and MAE scores of the method on the Middlebury benchmark [30].
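For reference, the KITTI D1 outlier criterion can be sketched as follows. This is our transcription of the benchmark's published threshold convention, not the official KITTI devkit code:

```python
import numpy as np

def d1_all(pred_disp, gt_disp):
    """KITTI D1 outlier rate: a pixel counts as an outlier when its
    disparity error exceeds both 3 px and 5 % of the ground truth."""
    pred = np.asarray(pred_disp, float)
    gt = np.asarray(gt_disp, float)
    err = np.abs(pred - gt)
    return float(((err > 3.0) & (err > 0.05 * gt)).mean())
```

The two-sided threshold makes the score forgiving for nearby objects: at a ground-truth disparity of 100 px, an error of 4 px is still counted as correct.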
Table 2: Evaluated methods with training sets and results on public benchmarks.

<table><tr><td rowspan="2">Method</td><td rowspan="2">Stereo type</td><td colspan="2">Training for our evaluation</td><td>KITTI 2015 [20]</td><td colspan="2">Middlebury [30]</td></tr><tr><td>Train set</td><td>Fine-tuned set</td><td>D1-all ↓</td><td>BAD2 ↓</td><td>MAE ↓</td></tr><tr><td>AANet [38]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.55%</td><td>25.20%</td><td>8.88px</td></tr><tr><td>ACVNet [37]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>1.65%</td><td>13.60%</td><td>8.24px</td></tr><tr><td>AnyNet [35]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>6.20%</td><td>-</td><td>-</td></tr><tr><td>CascadeStereo [9]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.00%</td><td>18.80%</td><td>4.50px</td></tr><tr><td>CREStereo [16]</td><td>Passive</td><td>CREStereo</td><td>ETH3D</td><td>1.69%</td><td>3.71%</td><td>1.15px</td></tr><tr><td>Deep-Pruner (best) [6]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.15%</td><td>30.10%</td><td>4.80px</td></tr><tr><td>Deep-Pruner (fast) [6]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.59%</td><td>-</td><td>-</td></tr><tr><td>GA-Net [42]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.59%</td><td>18.90%</td><td>12.20px</td></tr><tr><td>GwcNet [10]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.11%</td><td>-</td><td>-</td></tr><tr><td>HighResStereo [39]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.14%</td><td>10.20%</td><td>2.07px</td></tr><tr><td>Lac-GwcNet [18]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>1.77%</td><td>-</td><td>-</td></tr><tr><td>MobileStereoNet3D [31]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.10%</td><td>-</td><td>-</td></tr><tr><td>MobileStereoNet2D [31]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PSM-Net [1]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>2.32%</td><td>42.10%</td><td>6.68px</td></tr><tr><td>RAFT-Stereo [17]</td><td>Passive</td><td>SceneFlow</td><td>-</td><td>-</td><td>4.74%</td><td>1.27px</td></tr><tr><td>RealTimeStereo [2]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>7.54%</td><td>-</td><td>-</td></tr><tr><td>SMD-Nets [34]</td><td>Passive</td><td>UnrealStereo4K</td><td>KITTI 2015</td><td>2.08%</td><td>-</td><td>-</td></tr><tr><td>SRH-Net [5]</td><td>Passive</td><td>SceneFlow</td><td>KITTI 2015</td><td>-</td><td>-</td><td>-</td></tr><tr><td>StereoNet [14]</td><td>Passive</td><td>SceneFlow</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ActiveStereoNet [44]</td><td>Active</td><td>Self-supervised</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
As we are more interested in the generalization abilities of the different methods than in their peak performance, we did not fine-tune any of these methods on our dataset. This ensures that the measured generalization performance is not biased by fine-tuning for one specific modality of our dataset; fine-tuning for both modalities, on the other hand, would not give the same insight into the generalization abilities of each method. It does mean, however, that the reported numbers should not be read as the best achievable performance of the studied methods. In a second step, we fine-tune the methods that failed to generalize on our dataset, to check whether it can be used to adapt passive stereo methods to active stereo (Section 7).
In this paper, we focus on the main aggregate results, as well as on our general observations from analysing the error maps. The detailed per-image scores are provided in the supplementary material.
Table 3: Evaluation results of state-of-the-art methods on passive and active stereo images.

<table><tr><td rowspan="2">Method</td><td colspan="6">Passive stereo images</td><td colspan="6">Active stereo images</td></tr><tr><td>RMSE↓</td><td>MAE↓</td><td>BAD0.5↓</td><td>BAD1↓</td><td>BAD2↓</td><td>BAD4↓</td><td>RMSE↓</td><td>MAE↓</td><td>BAD0.5↓</td><td>BAD1↓</td><td>BAD2↓</td><td>BAD4↓</td></tr><tr><td>AANet [38]</td><td>8.35px</td><td>4.07px</td><td>66%</td><td>45%</td><td>30%</td><td>20%</td><td>6.73px</td><td>2.02px</td><td>43%</td><td>20%</td><td>12%</td><td>8%</td></tr><tr><td>ACVNet [37]</td><td>9.22px</td><td>4.00px</td><td>36%</td><td>25%</td><td>17%</td><td>12%</td><td>3.49px</td><td>1.31px</td><td>23%</td><td>13%</td><td>8%</td><td>5%</td></tr><tr><td>AnyNet [35]</td><td>8.80px</td><td>5.31px</td><td>84%</td><td>69%</td><td>46%</td><td>29%</td><td>6.13px</td><td>3.43px</td><td>80%</td><td>61%</td><td>32%</td><td>18%</td></tr><tr><td>Cascade-Stereo [9]</td><td>10.12px</td><td>4.95px</td><td>66%</td><td>39%</td><td>23%</td><td>16%</td><td>4.48px</td><td>2.07px</td><td>64%</td><td>33%</td><td>14%</td><td>8%</td></tr><tr><td>CREStereo [16]</td><td>1.75px</td><td>0.71px</td><td>21%</td><td>14%</td><td>8%</td><td>4%</td><td>1.44px</td><td>0.32px</td><td>7%</td><td>4%</td><td>2%</td><td>1%</td></tr><tr><td>Deep-Pruner (best) [6]</td><td>7.07px</td><td>3.80px</td><td>50%</td><td>34%</td><td>24%</td><td>17%</td><td>3.16px</td><td>1.12px</td><td>23%</td><td>12%</td><td>7%</td><td>5%</td></tr><tr><td>Deep-Pruner (fast) [6]</td><td>7.97px</td><td>4.73px</td><td>67%</td><td>47%</td><td>32%</td><td>22%</td><td>4.27px</td><td>1.91px</td><td>41%</td><td>23%</td><td>13%</td><td>9%</td></tr><tr><td>GA-Net [42]</td><td>7.89px</td><td>4.01px</td><td>59%</td><td>39%</td><td>27%</td><td>19%</td><td>5.05px</td><td>1.56px</td><td>27%</td><td>14%</td><td>9%</td><td>7%</td></tr><tr><td>GwcNet [10]</td><td>10.38px</td><td>5.17px</td><td>74%</td><td>50%</td><td>33%</td><td>22%</td><td>4.41px</td><td>1.80px</td><td>53%</td><td>24%</td><td>11%</td><td>8%</td></tr><tr><td>High-Res-Stereo [39]</td><td>4.87px</td><td>2.94px</td><td>54%</td><td>37%</td><td>24%</td><td>16%</td><td>3.11px</td><td>1.30px</td><td>38%</td><td>20%</td><td>10%</td><td>5%</td></tr><tr><td>Lac-GwcNet [18]</td><td>7.71px</td><td>3.95px</td><td>58%</td><td>39%</td><td>26%</td><td>18%</td><td>3.34px</td><td>1.25px</td><td>31%</td><td>14%</td><td>8%</td><td>5%</td></tr><tr><td>MobileStereoNet3D [31]</td><td>9.81px</td><td>5.06px</td><td>81%</td><td>61%</td><td>38%</td><td>23%</td><td>4.44px</td><td>2.11px</td><td>70%</td><td>38%</td><td>14%</td><td>8%</td></tr><tr><td>MobileStereoNet2D [31]</td><td>7.68px</td><td>4.49px</td><td>79%</td><td>59%</td><td>36%</td><td>23%</td><td>4.37px</td><td>1.88px</td><td>59%</td><td>28%</td><td>13%</td><td>8%</td></tr><tr><td>PSM-Net [1]</td><td>6.57px</td><td>4.23px</td><td>93%</td><td>82%</td><td>45%</td><td>21%</td><td>3.94px</td><td>2.32px</td><td>93%</td><td>80%</td><td>27%</td><td>7%</td></tr><tr><td>RAFT-Stereo [17]</td><td>2.20px</td><td>0.92px</td><td>27%</td><td>17%</td><td>10%</td><td>5%</td><td>1.68px</td><td>0.47px</td><td>13%</td><td>7%</td><td>3%</td><td>2%</td></tr><tr><td>RealTimeStereo [2]</td><td>7.69px</td><td>4.71px</td><td>80%</td><td>64%</td><td>44%</td><td>28%</td><td>5.30px</td><td>2.87px</td><td>72%</td><td>50%</td><td>28%</td><td>15%</td></tr><tr><td>SMD-Nets [34]</td><td>12.32px</td><td>6.48px</td><td>77%</td><td>58%</td><td>40%</td><td>27%</td><td>5.26px</td><td>2.24px</td><td>63%</td><td>34%</td><td>15%</td><td>9%</td></tr><tr><td>SRH-Net [5]</td><td>7.43px</td><td>4.24px</td><td>76%</td><td>49%</td><td>29%</td><td>19%</td><td>3.95px</td><td>1.59px</td><td>57%</td><td>21%</td><td>9%</td><td>5%</td></tr><tr><td>StereoNet [14]</td><td>10.98px</td><td>4.11px</td><td>55%</td><td>37%</td><td>26%</td><td>18%</td><td>7.74px</td><td>1.79px</td><td>44%</td><td>23%</td><td>12%</td><td>7%</td></tr><tr><td>ActiveStereoNet [44]</td><td>21.57px</td><td>9.39px</td><td>60%</td><td>46%</td><td>35%</td><td>28%</td><td>6.92px</td><td>2.32px</td><td>36%</td><td>22%</td><td>14%</td><td>9%</td></tr></table>
Table 3 reports results on passive and active stereo images. When evaluated on active stereo images, all considered methods improve on all considered metrics. Keep in mind that not all methods have been trained on the same dataset, or even the same domain; our benchmark aims to evaluate the relative performance of the different methods rather than their absolute performance. ActiveStereoNet [44] stands out as the worst-performing method on passive stereo, but its performance improves drastically when presented with active stereo images, as it is the only method trained on active stereo images. A model trained for active stereo is not expected to generalize well to passive stereo without adaptation, since the domain shift makes matching much harder. ActiveStereoNet is, however, far from being the best-performing
Table 4: Relative scores (Equations 5 and 6) for the state-of-the-art methods.

<table><tr><td>Method</td><td>PMAE↑</td><td>PBAD2↑</td><td>RMAE↑</td><td>RBAD2↑</td></tr><tr><td>AANet [38]</td><td>87%</td><td>99%</td><td>35%</td><td>55%</td></tr><tr><td>ACVNet [37]</td><td>71%</td><td>74%</td><td>30%</td><td>37%</td></tr><tr><td>AnyNet [35]</td><td>93%</td><td>98%</td><td>29%</td><td>29%</td></tr><tr><td>Cascade-Stereo [9]</td><td>65%</td><td>71%</td><td>25%</td><td>28%</td></tr><tr><td>CREStereo [16]</td><td>80%</td><td>62%</td><td>41%</td><td>34%</td></tr><tr><td>Deep-Pruner (best) [6]</td><td>94%</td><td>100%</td><td>52%</td><td>62%</td></tr><tr><td>Deep-Pruner (fast) [6]</td><td>95%</td><td>98%</td><td>47%</td><td>56%</td></tr><tr><td>GA-Net [42]</td><td>87%</td><td>96%</td><td>43%</td><td>59%</td></tr><tr><td>GwcNet [10]</td><td>99%</td><td>99%</td><td>55%</td><td>63%</td></tr><tr><td>High-Res-Stereo [39]</td><td>84%</td><td>85%</td><td>36%</td><td>49%</td></tr><tr><td>Lac-GwcNet [18]</td><td>96%</td><td>96%</td><td>55%</td><td>63%</td></tr><tr><td>MobileStereoNet3D [31]</td><td>96%</td><td>99%</td><td>45%</td><td>59%</td></tr><tr><td>MobileStereoNet2D [31]</td><td>96%</td><td>99%</td><td>52%</td><td>65%</td></tr><tr><td>PSM-Net [1]</td><td>90%</td><td>93%</td><td>30%</td><td>38%</td></tr><tr><td>RAFT-Stereo [17]</td><td>81%</td><td>73%</td><td>37%</td><td>39%</td></tr><tr><td>RealTimeStereo [2]</td><td>92%</td><td>97%</td><td>34%</td><td>35%</td></tr><tr><td>SMD-Nets [34]</td><td>95%</td><td>97%</td><td>54%</td><td>60%</td></tr><tr><td>SRH-Net [5]</td><td>97%</td><td>98%</td><td>48%</td><td>64%</td></tr><tr><td>StereoNet [14]</td><td>84%</td><td>85%</td><td>38%</td><td>45%</td></tr><tr><td>ActiveStereoNet [44]</td><td>99%</td><td>99%</td><td>68%</td><td>58%</td></tr></table>
model on active stereo images, despite being the only one trained on that domain. The neural network architecture probably plays a role here: ActiveStereoNet uses the same architecture as StereoNet [14], albeit with a different training method. StereoNet is not the best-performing method on passive stereo and is outperformed by ActiveStereoNet on active stereo images, which demonstrates the benefits of the training strategy proposed by the authors of ActiveStereoNet.
The relative score improvements, reported in Table 4, also show that, overall, existing methods generalize well to active stereo. ACVNet [37] and Cascade Stereo [9], as well as CREStereo [16] and RAFT-Stereo [17], are the only methods that seem to have issues generalizing, as their $P_{MAE}$ and $P_{BAD_2}$ scores are far lower than those of the other methods. For CREStereo [16] and RAFT-Stereo [17], this is mostly because they already perform so well on passive stereo (see Table 3) that they have less room for improvement when moving to active stereo. For ACVNet [37] and Cascade Stereo [9], it is because moving to the active stereo domain causes artifacts in the reconstructed disparities; see Section 6 for details.
These results are encouraging and show that, when trained on passive stereo, current state-of-the-art deep learning methods are able to generalize to active stereo. Generalizing in the opposite direction, from active to passive stereo, seems much harder. The results on this point are not conclusive, however, since only one such method is covered, and it was trained in a self-supervised fashion.
The details of these results, as well as the performance of each method on each image, are provided in the results Excel sheet included in the supplementary material.
# 6 Ablation study

Visual inspection of the results shows that many visual artifacts are not captured by the aggregate metrics; see for example Figure 7. The refinement module at the end of the stereo matching pipeline is the component most likely to be impacted by a switch from passive to active stereo. Consequently, we tested all methods a second time with their final refinement module deactivated. Figure 6 shows error maps for methods that generalized well to active stereo, while Figure 7 shows error maps for the methods that had issues generalizing (ACVNet [37], Cascade Stereo [9], and StereoNet [14]).
Looking in more detail at the error maps of Fig. 7, we notice several effects that may explain these observations. In certain rare situations, artifacts appear in the depth map reconstructed using active stereo; see Fig. 7a and Fig. 7b. This indicates that, under specific circumstances, the pseudo-random pattern used for active stereo can act like adversarial noise. ACVNet shows such artifacts around the edges of small objects, for example the handle of the chair in Fig. 7a. ACVNet uses
![](images/7fbc1e1afdde3a86b7e9835504113adec5c886ef000659bc1264a953de2eb5d6.jpg)
(a) Abstract bowls of Fig. 5a with Deep-Pruner (best) [6].

![](images/d5cb666a6f6799077f38da2ed92cf2761ed6a9e54c6017c97a308457d67b3b06.jpg)
(b) Plant vintage of Fig. 5g with GwcNet [10].

![](images/6a62df3c86b0be8c193728d44beacfb57706dc7517f2706819b47ef20e4a57be.jpg)
(c) Park of Fig. 5f with MobileStereoNet3D [31].

![](images/8ef249256c409b5df2e04447707019cd95b89fe4a348df671e4c779f368fc263.jpg)
Figure 6: Detailed error-map comparison between the unrefined and refined disparity maps for methods without apparent problems on active stereo compared to passive stereo.
(a) Office of Fig. 5e with ACVNet [37].
Figure 7: Detailed error-map comparison showing artifacts, absent from the unrefined active disparity, that appear in the refined active disparity of certain methods: their refinement module is negatively impacted by the active stereo pseudo-random pattern.
![](images/c21a7741815f23d96a216205c6efb75f5adb6cdf68427a44111ffb539b916b40.jpg)
(b) Bedroom of Fig. 5i with CascadeStereo [9].

![](images/f258fcda9aa071f4787f49b4db83ac0728b1c4540b21b055b900dcd78f9e1b9.jpg)
(c) Fruits of Fig. 5d with StereoNet [14].
an attention-based mechanism to post-process the matching cost volume, which tends to generate edge artifacts on active stereo images. The refinement process is negatively impacted by these artifacts, further degrading the quality of the final disparity map.
Cascade Stereo shows some of the most visible examples of such effects: large error patches appear in the active stereo disparity map that are absent from the passive one. If refinement is turned off, the error maps for active and passive stereo are quite similar; see Fig. 7b. Note that the difference between the unrefined and refined disparities varies considerably more for Cascade Stereo than for the other methods in Figures 6 and 7, showing that Cascade Stereo relies extensively on its final refinement module to improve its results. This, in turn, explains why it does not generalize as well as other architectures; see Table 4.
StereoNet also exhibits a peculiar behaviour; see Fig. 7c. While the network is able to use the pseudo-random pattern to reconstruct the disparity in uniform regions, the reconstruction appears noisy in these areas. Once again, deactivating the refinement module removes the problem. This is not surprising, since StereoNet is an hourglass hierarchical architecture. Other hierarchical methods aggregate their cost volume at different resolutions (e.g., [42], [1]) before predicting the disparity from the upsampled cost volume; StereoNet instead makes its prediction on a downsampled cost volume and then uses a module guided only by the input image to upsample the disparity [14]. This approach makes StereoNet fast, but also very reliant on the appearance of the input images, which makes generalization to active stereo difficult.
# 7 Fine-tuning the models which failed to generalize

Can the problems some methods encountered when generalizing from passive to active stereo image pairs be eliminated by fine-tuning these architectures on the active stereo images of our dataset? To test this, we fine-tuned ACVNet [37], Cascade-Stereo [9], and StereoNet [14] on our training set for 10 epochs. To avoid catastrophic forgetting, we kept the learning rate low, at $5 \times 10^{-5}$. Each mini-batch contained four images, and each epoch comprised 103 mini-batches. The loss used for training the initial model was reused for fine-tuning, along with all hyperparameters specified for the original model.
Table 5: Evaluation of the three fine-tuned methods, ACVNet [37], Cascade-Stereo [9] and StereoNet [14].

<table><tr><td rowspan="2">Method</td><td colspan="6">Active stereo images</td></tr><tr><td>RMSE↓</td><td>MAE↓</td><td>BAD0.5↓</td><td>BAD1↓</td><td>BAD2↓</td><td>BAD4↓</td></tr><tr><td>ACVNet [37] (original)</td><td>3.49px</td><td>1.31px</td><td>23%</td><td>13%</td><td>8%</td><td>5%</td></tr><tr><td>ACVNet [37] (active stereo fine-tuned)</td><td>2.16px</td><td>0.66px</td><td>17%</td><td>7%</td><td>4%</td><td>3%</td></tr><tr><td>Cascade-Stereo [9] (original)</td><td>4.48px</td><td>2.07px</td><td>64%</td><td>33%</td><td>14%</td><td>8%</td></tr><tr><td>Cascade-Stereo [9] (active stereo fine-tuned)</td><td>2.11px</td><td>0.66px</td><td>20%</td><td>7%</td><td>4%</td><td>2%</td></tr><tr><td>StereoNet [14] (original)</td><td>7.74px</td><td>1.79px</td><td>44%</td><td>23%</td><td>12%</td><td>7%</td></tr><tr><td>StereoNet [14] (active stereo fine-tuned)</td><td>7.50px</td><td>1.78px</td><td>56%</td><td>29%</td><td>13%</td><td>7%</td></tr></table>
![](images/8c0e5d1179662e61a41becc66ba4e46eafd71c2a84a5e5b6fdc6f89bf373f4c8.jpg)
(a) Office of Fig. 5e with ACVNet [37].

![](images/fe644c9f12c5f951f48908909e8562c6ad5f6048a7cb0a664922344c6e090ca9.jpg)

![](images/f5088b74b253982bdb11201ec4cd55a889d0393af0723223144f8ae312628080.jpg)
Figure 8: Error maps of ACVNet, CascadeStereo and StereoNet when fine-tuned on our dataset.

![](images/e856e2c946ae5bfa031762a60249f7746ee7ea79356c93ca2020f9669110c3b8.jpg)
(b) Bedroom of Fig. 5i with CascadeStereo [9].

![](images/84c3ed8f810c061aa561b5b810c0233ed469b7aeff5ab022e4512757256a9c45.jpg)
(c) Fruits of Fig. 5d with StereoNet [14].

![](images/dd1471a2262a22afbb234d2950eb17d9f863ee6c883986b2de858368a2cfbcbb.jpg)
The results after fine-tuning the three methods are reported in Table 5. ACVNet and Cascade-Stereo exhibit a drastic improvement in performance.
Visual inspection of the results reveals that, over most image regions, the Cascade-Stereo error is now below one pixel, with no large visible artifacts. ACVNet sees similar improvements. This shows that these methods were able to adapt to active stereo after fine-tuning.
+ For StereoNet, the RMSE improved and the $\mathrm{BAD}_{0.5}$ and $\mathrm{BAD}_1$ metrics degraded, while the other metrics stagnated. Artifacts correlated with the distribution of the active pattern dots are no longer visible; however, new edge artifacts appear in certain images. This indicates that fine-tuning is turning the refinement module off instead of adapting it to the projected dot pattern, which in turn shows that an architecture largely reliant on the smoothness of the input images is ill-suited for active stereo vision. ActiveStereoNet avoided this issue by decoupling the image convolutions from the disparity convolutions in its refinement module [44], which gave it more flexibility to distinguish the active pattern from real objects.
241
+
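The metrics in Table 5 are the standard per-pixel disparity-error statistics: RMSE, MAE, and $\mathrm{BAD}_{\tau}$, the fraction of pixels whose absolute disparity error exceeds $\tau$ pixels. A minimal plain-Python sketch of these definitions (the function names and flat-list interface are illustrative, not taken from the paper's code):

```python
import math

def rmse(pred, gt):
    """Root mean squared disparity error, in pixels."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt))

def mae(pred, gt):
    """Mean absolute disparity error, in pixels."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)

def bad(pred, gt, tau):
    """BAD_tau: fraction of pixels whose absolute error exceeds tau pixels."""
    return sum(abs(p - g) > tau for p, g in zip(pred, gt)) / len(gt)

# Toy example: four pixels with disparity errors of 0.2, 0.7, 1.5 and 3.0 px.
pred = [10.2, 20.7, 31.5, 43.0]
gt = [10.0, 20.0, 30.0, 40.0]
print(round(mae(pred, gt), 2))  # 1.35
print(bad(pred, gt, 1.0))       # 0.5 (two of four pixels exceed 1 px)
```

This makes explicit why $\mathrm{BAD}_{0.5}$ can degrade while RMSE improves, as observed for StereoNet: RMSE is dominated by large outliers, whereas $\mathrm{BAD}_{0.5}$ counts every pixel that drifts by more than half a pixel.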
242
+ # 8 Conclusion and Perspectives
243
+
244
+ We proposed the first dataset and associated benchmark that enables a comparison of the relative performance of stereo vision algorithms when applied to active and passive stereo. Using this dataset, we undertook extensive experiments to evaluate the performance of twenty state-of-the-art end-to-end deep learning models. The reported results show that it is possible, to a certain extent, to use methods trained for passive stereo for active stereo vision. This work also shows that the weak point of these architectures lies in the final refinement layers. Using our training set, we were able to improve the performance of StereoNet [14] and Cascade-Stereo [9], which had difficulty generalizing to active stereo. StereoNet, the architecture reliant on the smoothness of the input image pair for refinement, still performed poorly, indicating that models that favor shape priors over appearance priors are more robust.
245
+
246
+ Active stereo is an important subfield of stereo vision. Using our proposed dataset, we were able to examine the generalization ability of current deep learning models. The dataset can also be used to fine-tune these deep learning stereo models for active stereo vision. In the future, we expect to see a growing number of ever-larger deep neural networks for stereo vision. Being able to evaluate their generalization ability will thus become increasingly important, and our dataset will prove invaluable in this regard.
247
+
248
+ # Acknowledgments and Disclosure of Funding
249
+
250
+ We thank the reviewers for their helpful comments. The authors have no competing interests to disclose. This research is supported by the Australian Research Council grants ARC DP220102197 and ARC DP210101682.
251
+
252
+ # References
253
+
254
+ [1] Jia-Ren Chang and Yong-Sheng Chen. Pyramid stereo matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5410-5418, 2018.
255
+ [2] Jia-Ren Chang, Pei-Chun Chang, and Yong-Sheng Chen. Attention-aware feature aggregation for real-time stereo matching on edge devices. In Proceedings of the Asian Conference on Computer Vision (ACCV), November 2020.
256
+ [3] Shengyong Chen, Youfu Li, and Ngai Ming Kwok. Active vision in robotic systems: A survey of recent developments. The International Journal of Robotics Research, 30(11):1343-1377, 2011.
257
+ [4] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. URL http://www.blender.org.
258
+ [5] Hongzhi Du, Yanyan Li, Yanbiao Sun, Jigui Zhu, and Federico Tombari. Srh-net: Stacked recurrent hourglass network for stereo matching. IEEE Robotics and Automation Letters, 6(4): 8005-8012, 2021.
259
+ [6] Shivam Duggal, Shenlong Wang, Wei-Chiu Ma, Rui Hu, and Raquel Urtasun. Deeppruner: Learning efficient stereo matching via differentiable patchmatch. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4383-4392, 2019.
260
+ [7] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
261
+ [8] Gabriel F. Giralt. The interchangeability of vfx and live action and its implications for realism. Journal of Film and Video, 69(1):3-17, 2017. ISSN 0742-4671, 1934-6018.
262
+ [9] Xiaodong Gu, Zhiwen Fan, Siyu Zhu, Zuozhuo Dai, Feitong Tan, and Ping Tan. Cascade cost volume for high-resolution multi-view stereo and stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
263
+ [10] Xiaoyang Guo, Kai Yang, Wukui Yang, Xiaogang Wang, and Hongsheng Li. Group-wise correlation stereo network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3273-3282, 2019.
264
+ [11] Heiko Hirschmuller and Daniel Scharstein. Evaluation of cost functions for stereo matching. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2007.
265
+ [12] Xinyu Huang, Peng Wang, Xinjing Cheng, Dingfu Zhou, Qichuan Geng, and Ruigang Yang. The apolloscape open dataset for autonomous driving and its application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(10):2702-2719, 2020.
266
+ [13] Leonid Keselman, John Iselin Woodfill, Anders Grunnet-Jepsen, and Achintya Bhowmik. Intel realsense stereoscopic depth cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
267
+ [14] Sameh Khamis, Sean Fanello, Christoph Rhemann, Adarsh Kowdle, Julien Valentin, and Shahram Izadi. Stereonet: Guided hierarchical refinement for real-time edge-aware depth prediction. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, pages 8-14, 2018.
268
+ [15] Hamid Laga, Laurent Valentin Jospin, Farid Boussaid, and Mohammed Bennamoun. A survey on deep learning techniques for stereo-based depth estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
269
+
270
+ [16] Jiankun Li, Peisen Wang, Pengfei Xiong, Tao Cai, Ziwei Yan, Lei Yang, Jiangyu Liu, Haoqiang Fan, and Shuaicheng Liu. Practical stereo matching via cascaded recurrent network with adaptive correlation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16263-16272, 2022.
271
+ [17] Lahav Lipson, Zachary Teed, and Jia Deng. Raft-stereo: Multilevel recurrent field transforms for stereo matching. In International Conference on 3D Vision (3DV), 2021.
272
+ [18] Biyang Liu, Huimin Yu, and Yangqi Long. Local similarity pattern and cost self-reassembling for deep stereo matching networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2):1647-1655, Jun. 2022.
273
+ [19] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4040-4048, 2016.
274
+ [20] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
275
+ [21] H. K. Nishihara. Practical Real-Time Imaging Stereo Matcher. Optical Engineering, 23(5): 536-545, 1984.
276
+ [22] Sergio Orts-Escolano, Christoph Rhemann, Sean Fanello, Wayne Chang, Adarsh Kowdle, Yury Degtyarev, David Kim, Philip L. Davidson, Sameh Khamis, Mingsong Dou, Vladimir Tankovich, Charles Loop, Qin Cai, Philip A. Chou, Sarah Mennicken, Julien Valentin, Vivek Pradeep, Shenlong Wang, Sing Bing Kang, Pushmeet Kohli, Yuliya Lutchyn, Cem Keskin, and Shahram Izadi. Holoportation: Virtual 3d teleportation in real-time. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, page 741-754, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450341899.
277
+ [23] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, and Alex Hanna. Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns, 2(11):100336, 2021. ISSN 2666-3899.
278
+ [24] Matt Pharr, Wenzel Jakob, and Greg Humphreys. Physically based rendering: From theory to implementation. Morgan Kaufmann, 2016.
279
+ [25] Benjamin Planche, Ziyan Wu, Kai Ma, Shanhui Sun, Stefan Kluckner, Oliver Lehmann, Terrence Chen, Andreas Hutter, Sergey Zakharov, Harald Kosch, and Jan Ernst. Depthsynth: Real-time realistic synthetic data generation from cad models for 2.5d recognition. In 2017 International Conference on 3D Vision (3DV), pages 1-10, 2017.
280
+ [26] Tomislav Pribanić, Tomislav Petković, Matea Djonlic, Vincent Angladon, and Simone Gasparini. 3d structured light scanner on the smartphone. In Aurélio Campilho and Fakhri Karray, editors, Image Analysis and Recognition, pages 443-450, Cham, 2016. Springer International Publishing. ISBN 978-3-319-41501-7.
281
+ [27] Gernot Riegler, Yiyi Liao, Simon Donne, Vladlen Koltun, and Andreas Geiger. Connecting the dots: Learning representations for active monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
282
+ [28] Sean Ryan Fanello, Julien Valentin, Christoph Rhemann, Adarsh Kowdle, Vladimir Tankovich, Philip Davidson, and Shahram Izadi. Ultrastereo: Efficient learning-based matching for active stereo systems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
283
+ [29] Daniel Scharstein and Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International journal of computer vision, 47(1):7-42, 2002.
284
+ [30] Daniel Scharstein, Heiko Hirschmüller, York Kitajima, Greg Krathwohl, Nera Nesić, Xi Wang, and Porter Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German conference on pattern recognition, pages 31–42. Springer, 2014.
285
+
286
+ [31] Faranak Shamsafar, Samuel Woerz, Safia Rahim, and Andreas Zell. Mobilestereonet: Towards lightweight deep networks for stereo matching. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2417-2426, 2022.
287
+ [32] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
288
+ [33] Alessio Tonioni, Matteo Poggi, Stefano Mattoccia, and Luigi Di Stefano. Unsupervised adaptation for deep stereo. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
289
+ [34] Fabio Tosi, Yiyi Liao, Carolin Schmitt, and Andreas Geiger. Smd-nets: Stereo mixture density networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
290
+ [35] Yan Wang, Zihang Lai, Gao Huang, Brian H. Wang, Laurens van der Maaten, Mark Campbell, and Kilian Q. Weinberger. Anytime stereo image depth estimation on mobile devices. In 2019 International Conference on Robotics and Automation (ICRA), pages 5893-5900, 2019.
291
+ [36] Steven Worley. A cellular texture basis function. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 291-294, 1996.
292
+ [37] Gangwei Xu, Junda Cheng, Peng Guo, and Xin Yang. Attention concatenation volume for accurate and efficient stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12981-12990, 2022.
293
+ [38] Haofei Xu and Juyong Zhang. Aanet: Adaptive aggregation network for efficient stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1959-1968, 2020.
294
+ [39] Gengshan Yang, Joshua Manela, Michael Happold, and Deva Ramanan. Hierarchical deep stereo matching on high-resolution images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
295
+ [40] Guorun Yang, Xiao Song, Chaoqin Huang, Zhidong Deng, Jianping Shi, and Bolei Zhou. Drivingstereo: A large-scale dataset for stereo matching in autonomous driving scenarios. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
296
+ [41] Haojie Zeng, Bin Wang, Xiaoping Zhou, Xiaojing Sun, Longxiang Huang, Qian Zhang, and Yang Wang. Tsfe-net: Two-stream feature extraction networks for active stereo matching. IEEE Access, 9:33954-33962, 2021.
297
+ [42] Feihu Zhang, Victor Prisacariu, Ruigang Yang, and Philip HS Torr. Ga-net: Guided aggregation net for end-to-end stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 185–194, 2019.
298
+ [43] Feihu Zhang, Xiaojuan Qi, Ruigang Yang, Victor Prisacariu, Benjamin Wah, and Philip Torr. Domain-invariant stereo matching networks. In European Conference on Computer Vision (ECCV), 2020.
299
+ [44] Yinda Zhang, Sameh Khamis, Christoph Rhemann, Julien Valentin, Adarsh Kowdle, Vladimir Tankovich, Michael Schoenberg, Shahram Izadi, Thomas Funkhouser, and Sean Fanello. Activestereonet: End-to-end self-supervised learning for active stereo systems. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:647eb7f5e52a56c19a3cace61b32ae90d75fb5cafe36943fdce269968f9be82c
3
+ size 733882
activepassivesimstereobenchmarkingthecrossgeneralizationcapabilitiesofdeeplearningbasedstereomethods/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e63e7760f39468ea9e66c0381f4372fd5857f1bb91ca09ec8dc30272d81b908
3
+ size 368801
activerankingwithoutstrongstochastictransitivity/ecbc8857-633b-44d0-b585-da6b70f4b9d0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:095f3075e7476dd9492dce545a3f803c3ea3722ed5cbf204480a1c5e0cff115f
3
+ size 80809
activerankingwithoutstrongstochastictransitivity/ecbc8857-633b-44d0-b585-da6b70f4b9d0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66378640a19d1af7a815c3c41783327cc508ce556a48ec786847f838cd28cdba
3
+ size 102480