| { |
| "title": "Active Self-Semi-Supervised Learning for Few Labeled Samples", |
| "abstract": "Training deep models with limited annotations poses a significant challenge when applied to diverse practical domains. Employing semi-supervised learning on top of self-supervised pre-training offers the potential to enhance label efficiency. However, this approach faces a bottleneck in reducing the need for labels. We observed that semi-supervised training disrupts valuable information from self-supervised learning when only limited labels are available. To address this issue, this paper proposes a simple yet effective framework, active self-semi-supervised learning (AS3L). AS3L bootstraps semi-supervised models with prior pseudo-labels (PPL), which are obtained by label propagation over self-supervised features. We observe that the accuracy of PPL is affected not only by the quality of features but also by the selection of the labeled samples. We therefore develop active learning and label propagation strategies to obtain accurate PPL. Consequently, our framework can significantly improve the performance of models under limited annotations while demonstrating fast convergence. On image classification tasks across four datasets, our method outperforms the baseline by an average of 5.4% and reaches the accuracy of the baseline in about 1/3 of the training time.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "Part of the success of deep learning models arises from large amounts of labeled data [1]. However, the high cost of acquiring large numbers of labels hinders the widespread application of deep learning models. This challenge is particularly pronounced in domains that demand expert annotations, such as medical images [2] or biology images [3]. In response, researchers have dedicated considerable effort to exploring semi-supervised learning. Recent works have shown that these techniques can achieve accuracy similar to supervised learning with fewer annotations [4, 5].\nHowever, existing semi-supervised learning techniques face bottlenecks in reducing labeled samples [6]. This is due to the common practice of employing model predictions as pseudo-labels for unlabeled samples, which are not always accurate. Training with these noisy pseudo-labels boosts the model\u2019s confidence in incorrect predictions, inducing the model to resist the correct information and consequently compromising the performance of semi-supervised training. This issue is commonly referred to as confirmation bias [7]. To unlock the full potential of semi-supervised learning, many current techniques still rely on a substantial number of annotations to rectify these incorrect pseudo-labels. For instance, prevalent benchmarks in semi-supervised learning typically opt for 25, 100, and 400 annotations per class [8, 9, 10], with some recent studies gradually reducing this to 4 annotations per class [5, 11]. As the number of annotations decreases, semi-supervised learning struggles to correct erroneous pseudo-labels, resulting in a significant decline in performance [6].\nConducting semi-supervised training on top of self-supervised models holds great promise for improving this issue [13]. Self-supervised training yields valuable representations for downstream tasks without any labels [14, 15]. Initializing with a self-supervised model allows the semi-supervised model to swiftly generate accurate pseudo-labels and reduces the influence of erroneous pseudo-labels [12], thereby enhancing final performance. However, with limited labeled samples, the semi-supervised model may not rapidly generate accurate pseudo-labels. These inaccurate pseudo-labels disrupt valuable information obtained from self-supervised learning (Sec. 3), consequently diminishing the performance gains achieved through initialization with self-supervised learning. In some cases, this initialization might not yield any performance improvements.\nMotivated by this observation, we propose a more explicit method for transferring valuable information from self-supervised models to semi-supervised models using Prior Pseudo-Labels (PPL). Our proposed framework, Active Self-Semi-Supervised Learning (AS3L), depicted in Fig. 1, aims to leverage self-supervised pre-training to enhance model performance. Specifically, we use PPL to guide semi-supervised learning. PPL are obtained through label propagation on self-supervised representations (Sec. 4.3). Additionally, we implement a switching mechanism to integrate PPL and model predictions early in training and gradually phase out PPL as training progresses (Sec. 4.2). This approach provides a strong starting point for semi-supervised training and prevents PPL from hindering the continuous updating of pseudo-labels during the semi-supervised training process. Furthermore, considering that the accuracy of PPL depends on both feature quality and labeled sample selection, particularly when annotations are limited, we propose an active learning strategy that improves the accuracy of PPL by optimizing the selection of labeled samples (Sec. 4.4).\nValidated across four image classification datasets, our AS3L outperforms the state of the art in most cases, particularly in scenarios with limited labels, and exhibits accelerated convergence (Sec. 5). The contributions of our work are summarized as follows: 1) In scenarios with scarce labeled data, we observe that semi-supervised learning cannot effectively harness, through weight initialization alone, the valuable information obtained from self-supervised training. 2) Motivated by this observation, we propose the AS3L framework, a novel approach that bootstraps semi-supervised training from a good initial point consisting of accurate PPL, actively selected labeled samples, and weight initialization from self-supervised training. This method readily integrates with existing semi-supervised learning approaches, extending the success of semi-supervised learning to scenarios with even fewer labeled data, and outperforms state-of-the-art algorithms in most cases with limited annotations. 3) We develop an active learning strategy tightly coupled to the AS3L framework, which greatly improves the accuracy of PPL when there are few labeled samples, thereby helping AS3L work well with limited labels." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "Related Work", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "Semi-Supervised Learning", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1.1", |
| "parent_section_id": "2.1", |
| "section_name": "Semi-Supervised Learning from Scratch", |
| "text": "Semi-supervised training commonly leverages unlabeled samples through pseudo-labeling techniques [16] and consistency regularization methods [17]. Pseudo-labeling uses model predictions as targets for training on unlabeled data, typically keeping only high-confidence predictions to reduce the impact of incorrect targets on semi-supervised training [10]. Consistency regularization uses a consistency loss to make the model\u2019s predictions consistent for perturbed inputs. Various studies have explored different perturbation methods: UDA uses basic image transformations such as flipping and cropping [9], while VAT generates perturbed inputs through adversarial training [18]. Recent studies combine pseudo-labeling and consistency regularization to enhance semi-supervised training performance [8]. For instance, FixMatch applies strong and weak image augmentations to the same input images, using high-confidence model predictions on the weakly augmented version as targets for the strongly augmented version [4]. However, early in the training process, the model\u2019s prediction confidence is low for many samples, leading to slow convergence in FixMatch. To address this issue, FlexMatch introduces dynamic confidence thresholds, gradually increasing them during training to balance convergence speed and pseudo-label accuracy [5].\nHowever, in scenarios with limited annotated data, existing methods exhibit sub-optimal performance due to a lack of supervisory signals in the early stages of training and the impact of confirmation bias. Limited labels also lead to training instability, which often leaves the optimal checkpoint far from the final training iteration. 
Sel introduces a metric designed to select high-performing checkpoints, mitigating the challenge posed by this instability in semi-supervised training [6]." |
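The confidence-thresholded pseudo-labeling used by FixMatch-style methods can be sketched as follows. This is a minimal illustration with hypothetical function names and a toy threshold, not the authors' implementation:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_and_mask(weak_logits, tau=0.95):
    # FixMatch-style targets: the argmax prediction on the weakly augmented
    # view becomes the pseudo-label, but a sample contributes to the
    # consistency loss only when its top-class confidence exceeds tau.
    probs = softmax(weak_logits)
    targets = probs.argmax(axis=1)
    mask = probs.max(axis=1) >= tau
    return targets, mask
```

FlexMatch's dynamic thresholds would amount to replacing the fixed `tau` with a per-class, schedule-dependent value.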
| }, |
| { |
| "section_id": "2.1.2", |
| "parent_section_id": "2.1", |
| "section_name": "Semi-Supervised Learning with Self-Supervision", |
| "text": "Through the training of proxy tasks, self-supervised learning leverages all available data to generate valuable representations and model initialization weights for downstream tasks before any annotations are acquired. These self-supervision signals provide additional information to semi-supervised training, improving its performance, especially when annotations are limited. Self-supervised tasks mainly include generative-based and contrastive-based methods [19]. Generative self-supervised learning trains an encoder-decoder model to reconstruct inputs from corrupted inputs. For instance, MAE trains a model to reconstruct input images with large randomly masked-out portions [20], while DMAE reconstructs inputs from noisy, randomly masked images [21]. Contrastive training trains an encoder by comparing similarities between output features. Early methods utilized the relationships between the overall input sample and its parts: jigsaw solving randomly samples patches from an input image and trains the model to predict their relative positions [22], and rotation prediction trains the model to predict the angle of randomly rotated inputs [23]. More recently, methods comparing features of different samples, or of various augmented versions of the same sample, have been widely explored. SimCLR constructs positive pairs for contrastive learning by applying different augmentations to the same input, and negative pairs using a large batch of different samples, significantly improving self-supervised learning performance [24]. However, this method requires many negative samples, leading to a computational bottleneck. To address this, recent methods explore contrastive learning with only positive pairs. BYOL uses a moving-average teacher model and an online model to perform contrastive learning [25], while SimSiam directly uses a contrastive loss to encourage similarity between features of the same input under different augmentations [26].\nCurrent approaches to utilizing self-supervised learning in semi-supervised training fall into three categories. The first and most straightforward is to propagate labels directly on self-supervised features [27]. However, self-supervised features are not perfect for a specific task, so simple label propagation based on these features does not produce ideal results.\nThe second category trains the model with both a semi-supervised loss and a self-supervised loss. S4L trains a model with a self-supervised loss (rotation prediction [23]) and a semi-supervised loss, then re-trains the model with its own predictions [28]. CoMatch adds a contrastive self-supervised loss to the standard semi-supervised loss [11]. LESS incorporates self-supervised signals into the semi-supervised training process through online clustering [29]. These methods improve the performance of semi-supervised training with extremely limited labeled samples. However, when the number of labels is limited, poor pseudo-labels may affect the representation generated by the model, hindering the effective utilization of the self-supervision loss. Moreover, these methods still fail to effectively transfer knowledge from existing pre-trained models to semi-supervised models.\nThe third category transfers information obtained from self-supervised training primarily through weight initialization. SelfMatch initializes the model with self-supervised pre-training weights and then fine-tunes it with existing semi-supervised training methods [12]. EMAN uses exponential moving averages of model and batch normalization parameters as a teacher model [15]. This approach slows the alteration of self-supervised pre-training weights, improving semi-supervised training performance. While self-supervised training provides commendable initial weights for the backbone, it does not provide initial weights for the classifier, owing to differences in loss and network architecture. In scenarios with limited labeled samples, poor pseudo-labels at the beginning of semi-supervised training can rapidly disrupt the good initial weights established through self-supervised learning. Therefore, this paper introduces a framework designed to facilitate the effective transfer of knowledge acquired during the self-supervised stage. This framework uses pseudo-labels generated from self-supervised features to guide semi-supervised learning. Additionally, our approach can be combined with techniques like EMAN to yield enhanced results." |
| }, |
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "Active Learning", |
| "text": "Most semi-supervised learning techniques select labeled samples by random stratified sampling [5, 11]. As the number of labels decreases, the gap between the labeled and unlabeled sample distributions increases, which worsens the performance of semi-supervised learning. A natural remedy is to incorporate active learning to close this gap. Active learning assumes that different samples influence model accuracy to different extents: when we can only afford to label a fraction of the samples, choosing higher-value samples to label yields a more accurate model. One popular active learning approach selects samples for labeling based on the uncertainty of model predictions, measured with metrics like entropy [30], BALD [31], or a learning indicator [32]. Another approach aims to select representative samples for labeling. For example, Coreset selects samples that are least similar to the features of already labeled samples [33]. However, this method is prone to selecting outliers. To address this, some studies select samples from high-density regions to avoid the influence of outliers [34, 35, 36]." |
| }, |
| { |
| "section_id": "2.2.1", |
| "parent_section_id": "2.2", |
| "section_name": "Active Strategy for Semi-Supervised Learning", |
| "text": "Some researchers have tried to directly combine existing semi-supervised learning with active strategies developed in the context of supervised learning, but the results were unexpectedly poor, even worse than the random-selection baseline [37]. To bridge this gap, new active strategies have been designed specifically for semi-supervised learning; these strategies are more tightly coupled with existing semi-supervised learning approaches. Consistency-based methods [38] argue that we should choose samples that are hard for semi-supervised models, i.e., samples with inconsistent predictions under data augmentation. Guo et al. proposed combining adversarial training and graph-based label propagation to select high-uncertainty samples close to cluster boundaries [39]. However, these are multi-shot strategies, which impose a prohibitively high computational burden in semi-supervised learning scenarios. The substantial training cost of multi-round active semi-supervised learning contradicts the overarching goal of minimizing the cost required to train a powerful model." |
| }, |
| { |
| "section_id": "2.2.2", |
| "parent_section_id": "2.2", |
| "section_name": "Single-Shot Active Strategy", |
| "text": "Single-shot active learning strategies request all labels at a single time, which allows us to benefit from active learning with little additional computational burden. Until now, little attention has been paid to this setting. Several novel active learning approaches select labeled samples based on the representations obtained from self-supervised training [40, 41]. Because these approaches select all labeled samples in a single shot, they can easily be employed in the context of semi-supervised learning. For example, USL clusters in the self-supervised pre-training feature space and selects samples at the density peaks of each cluster for labeling [41]. However, these methods often suffer from low pseudo-label accuracy when labeled samples are limited. In contrast, our method builds a more tightly coupled active self-semi-supervised learning framework and constructs an active learning strategy from the perspective of improving the accuracy of PPL." |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "Revisiting Self-Supervised Pre-training Initialization for Semi-Supervised Training", |
| "text": "This section explores whether semi-supervised training can effectively harness valuable information acquired from self-supervised pre-training through weight initialization. Here, the framework of fine-tuning self-supervised pre-trained models using semi-supervised training methods is also referred to as SelfMatch [12]. Specifically, we initialize a semi-supervised model using weights pre-trained with the self-supervised method SimSiam [26]. Subsequently, we conduct semi-supervised training using FlexMatch [5], one of the state-of-the-art semi-supervised learning algorithms. The semi-supervised training is executed on CIFAR-10 [42] with 10 and 40 labeled samples, and on CIFAR-100 [42] with 200 and 400 labeled samples, where all labeled samples are selected randomly. The network architecture and hyper-parameters for semi-supervised training align with prior work [5].\nSelf-supervised pre-training weight initialization allows downstream tasks to start training in a well-established pre-trained feature space [24, 25], thereby improving the performance of downstream tasks. Therefore, feature quality is a crucial indicator of whether semi-supervised training retains the valuable information obtained from self-supervised training. Following [43], we compute the degree of consistency between the label of each sample and the labels of its nearest neighbors within the feature space. For the CIFAR-100 dataset, with few labeled samples, semi-supervised training undermines the valuable information obtained from self-supervised pre-training. As depicted in Fig. 2(a), the label consistency between samples and their nearest-neighbor samples in the semi-supervised feature space notably declines compared to that in the self-supervised feature space.\nFor the simpler dataset, CIFAR-10, semi-supervised learning can generate features superior to those from self-supervised pre-training, as shown in Fig. 2(b). Yet, in scenarios with fewer annotations (10 labeled samples), semi-supervised training rarely benefits from self-supervised initialization. As illustrated in Fig. 3, during the initial phase of semi-supervised training (approximately the first 200 epochs), self-supervised initialization enhances the performance of the semi-supervised model. However, as semi-supervised training progresses, the test accuracies of models initialized randomly and of those using self-supervised pre-training initialization become nearly identical.\nAdditionally, we measure the impact of initializing with self-supervised pre-training weights on semi-supervised training performance, as shown in Table 1, finding that it does not improve performance when labeled data is limited. In conclusion, weight initialization struggles to effectively transmit valuable information obtained from self-supervised training to the semi-supervised model. To address this issue, we propose constructing PPL to serve as an intermediary that helps semi-supervised models fully leverage the valuable information acquired from self-supervised pre-training." |
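The nearest-neighbor label-consistency measure used above can be sketched as follows. The function name and the choice of cosine similarity are our assumptions for illustration, not necessarily the exact metric of [43]:

```python
import numpy as np

def label_consistency(features, labels, k=1):
    # Fraction of samples whose k nearest neighbours (by cosine similarity,
    # excluding the sample itself) share the sample's label -- a proxy for
    # how well the feature space separates classes.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # never count a sample as its own neighbour
    nn = np.argsort(-sim, axis=1)[:, :k]    # indices of the k most similar samples
    agree = (labels[nn] == labels[:, None]).mean(axis=1)
    return agree.mean()
```

A declining value of this score during semi-supervised training is what Fig. 2 interprets as the loss of valuable self-supervised structure.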
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "Method", |
| "text": "We frame the problem and introduce the basic framework of semi-supervised learning in Sec. 4.1. In Sec. 4.2, we elaborate on leveraging PPL as an intermediary to facilitate the transfer of valuable information from self-supervised pre-training models to semi-supervised models. Sec. 4.3 details the generation of PPL through label propagation on the self-supervised features. Finally, in Sec. 4.4, we propose an active learning strategy aimed at improving the accuracy of PPL." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "Preliminaries on Semi-Supervised Training", |
| "text": "Semi-supervised learning trains a model comprising a feature extractor and a classification head. This training leverages both a labeled dataset and an unlabeled dataset, with the objective of achieving superior performance compared to training exclusively on the labeled dataset. Typically, the unlabeled dataset is much larger than the labeled one. The loss function for semi-supervised training, as depicted in Eq. (1), consists of two components: the cross-entropy loss on labeled samples and the consistency loss on unlabeled samples, with a hyper-parameter governing the trade-off between the two. The supervised term, as illustrated in Eq. (2), is the cross-entropy loss CE on weakly augmented labeled samples, where weak augmentation follows the definition in previous research [4], incorporating operations such as image flipping.\nThe basic consistency loss serves two purposes: using high-confidence model predictions as pseudo-labels to train on unlabeled data, and encouraging consistent predictions for inputs subjected to perturbations (image augmentations). As formulated in Eq. (3), both weak and strong augmentations are applied to the unlabeled samples [4], and the model produces predictions for each augmented version. The number of unlabeled samples per training batch is a fixed multiple of the number of labeled samples. An indicator function identifies reliable predictions: it returns 1 when the confidence of its input surpasses the specified threshold, and 0 otherwise. The definition of strong augmentation aligns with prior research [4], including techniques such as cutout." |
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "Active Self-Semi-Supervised Learning Framework", |
| "text": "Existing methods typically initialize the model using weights pre-trained through self-supervised learning and then proceed with semi-supervised training to enhance performance. However, as demonstrated in Sec. 3, when the number of labeled samples is limited, valuable self-supervised features may be distorted during semi-supervised training, reducing the benefits of initialization with self-supervised pre-training weights. To better utilize self-supervised pre-training information and improve model performance, we propose extracting and transferring information from pre-trained features to semi-supervised training through pseudo-labels. This approach ensures that valuable information obtained from pre-training is preserved and effectively used, even as the model undergoes weight updates.\nAs depicted in Fig. 4, our framework proceeds as follows. Initially, the entire dataset undergoes self-supervised training to generate feature representations. Our active learning strategy then selects samples and queries their labels to construct the labeled dataset based on these features. After that, we propagate labels in the pre-trained feature space to obtain a group of pseudo-labels, which serve as intermediaries to transfer valuable information from self-supervised pre-training to semi-supervised models. Given that these pseudo-labels are obtained before semi-supervised training, we refer to them as prior pseudo-labels (PPL) and assign them to the unlabeled dataset. Finally, a semi-supervised model is trained by combining these PPL with the model predictions, resulting in enhanced overall performance.\nSemi-Supervised Training Guided by PPL: Assuming we have obtained prior pseudo-labels through label propagation on the self-supervised features (discussed in the next section), we integrate them into the existing semi-supervised training framework by re-formulating the consistency loss as Eq. (4). Instead of using the model prediction as the target for training on unlabeled data, we combine it with the PPL to generate a new training target, as given in Eq. (5). Since this target is derived from both the semi-supervised model and the prior pseudo-labels, we slightly abuse the term and refer to it as posterior pseudo-labels.\nEq. (5) involves a pre-defined switching point. This adjustment ensures that, during the early stages of semi-supervised training (i.e., before the switching iteration), when the model predictions are less accurate, the PPL help guide the semi-supervised training. Once the model predictions become more accurate than the PPL, the framework uses model predictions as pseudo-labels, resembling a conventional semi-supervised training approach. This transition helps mitigate the impact of PPL inaccuracies on semi-supervised training. We empirically found that setting a rough switching point yields favorable results, as discussed in Sec. 5.4.2." |
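One plausible reading of this switch is a hard hand-over at a pre-defined iteration; the exact combination rule is what Eq. (5) defines, and the names below (`switch_step` in particular) are illustrative:

```python
import numpy as np

def posterior_pseudo_labels(ppl, model_pred, step, switch_step):
    # Hard-switch sketch of the posterior pseudo-labels: before the
    # pre-defined switching point the prior pseudo-labels (PPL) act as the
    # training target; afterwards the model's own predictions take over,
    # as in conventional semi-supervised training.
    return ppl if step < switch_step else model_pred
```

In training code this target would then feed the thresholded consistency loss of Eq. (4) in place of the plain model prediction.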
| }, |
| { |
| "section_id": "4.3", |
| "parent_section_id": "4", |
| "section_name": "Prior Pseudo-label Generation", |
| "text": "We generate PPL by applying cluster-based label propagation. Instead of basic K-means, we employ constrained seed K-means [44] to leverage labeled-sample constraints, which improves clustering quality by using label information when updating cluster centers. The labels of the labeled samples are then propagated to all unlabeled samples within the same cluster and normalized to derive the PPL.\nA remaining issue is determining the number of clusters. As the number of clusters increases, the likelihood of samples within the same cluster sharing the same label also increases; to improve the accuracy of PPL, it is therefore advisable to use more clusters. However, as the number of clusters rises, the number of samples per cluster decreases, and more clusters contain no labeled samples at all. This is particularly evident when the number of clusters surpasses the number of labeled samples, leaving a larger proportion of unlabeled samples not associated with any labels. As a trade-off, we perform multiple clustering runs with a different number of clusters for each run. This ensures that the majority of unlabeled samples are covered by labeled samples, while those located farther away from any labeled samples receive lower confidence." |
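A simplified sketch of this PPL generation step, under stated assumptions: a single clustering run with as many clusters as classes, and a bare-bones stand-in for constrained seed K-means [44] in which centers are seeded from the labeled class means and labeled samples stay pinned to their class's cluster:

```python
import numpy as np

def seeded_kmeans_ppl(features, labeled_idx, labeled_y, n_classes, iters=10):
    # Seed each center with the mean feature of that class's labeled samples.
    centers = np.stack([features[labeled_idx][labeled_y == c].mean(axis=0)
                        for c in range(n_classes)])
    assign = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign every sample to its nearest center (squared Euclidean).
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        assign[labeled_idx] = labeled_y       # constraint: seeds stay put
        for c in range(n_classes):
            if (assign == c).any():
                centers[c] = features[assign == c].mean(axis=0)
    # Propagate each cluster's seed label to its members as one-hot PPL.
    return np.eye(n_classes)[assign]
```

In the full method this would be repeated for several cluster counts and the resulting label distributions averaged, giving lower confidence to samples far from any labeled sample.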
| }, |
| { |
| "section_id": "4.4", |
| "parent_section_id": "4", |
| "section_name": "Single-shot Active Labeled Sample Selection", |
| "text": "Our active learning strategy primarily focuses on improving the accuracy of PPL, which facilitates the transfer of valuable information from a self-supervised model to a semi-supervised model. For this purpose, existing active learning methods based on uncertainty and diversity principles are not suitable, as they tend to select challenging samples that exhibit high uncertainty [38] or are distant from the labeled samples in feature space [33]. Such challenging samples are typically hard to differentiate in the self-supervised feature space; in other words, they are likely to have labels that differ from their neighbors (samples with similar features), leading to lower PPL accuracy when they are used for label propagation.\nIn contrast, selecting representative samples [36] is more appropriate for our scenario. Since our PPL is derived through clustering-based label propagation, in scenarios with limited annotations the PPL for all samples within a cluster is determined by the labeled samples within that cluster. In other words, a relatively small number of labeled samples determine the PPL for entire clusters. Therefore, we aim to select labeled samples that represent the majority of samples within their cluster well, ensuring that the selected labeled samples have labels consistent with most samples in their cluster.\nWe build on two observations about the self-supervised feature space: samples located in the same cluster tend to have similar labels, and those near the center of a cluster are more likely to share the label of the majority of samples in that cluster (Sec. 5.4.7). Our active strategy therefore selects samples close to cluster centers to query labels.\nFine-tuning features: While self-supervised training is established as effective in producing high-quality features with excellent linear-evaluation performance [24, 26], we experimentally find that clustering directly on the self-supervised features is not a good choice. One potential reason is that self-supervised features exhibit a distinct distribution compared to features trained with labeled data, as shown in Fig. 5. Self-supervised features tend to scatter because they are trained on finer-grained proxy tasks, resulting in larger distances between features of the same class and smaller distances between features of different classes. This characteristic can impact the performance of the clustering algorithm and, potentially, the accuracy of our pseudo-labels. To achieve a more suitable feature space, we fine-tune these features based on multiple clusterings before selecting samples for labeling.\nTo bring samples within the same cluster closer together, a mean squared error (MSE) loss forces samples in the same cluster toward their centers. To improve robustness, we perform multiple runs of K-means on the self-supervised features, with the final loss defined by Eq. (6): for each sample in the dataset, the squared distance between its fine-tuned feature and the cluster center assigned to it in each clustering run is averaged over all runs. 
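Under these definitions, the multi-run center loss of Eq. (6) can be sketched as below; the names are illustrative, and we omit the trainable linear layer that the loss is actually minimized over:

```python
import numpy as np

def multi_run_center_loss(features, assignments, centers):
    # Sketch of Eq. (6): average, over T K-means runs, of the mean squared
    # distance between each sample's (fine-tuned) feature and the cluster
    # center it was assigned to in that run. `assignments` is a (T, N)
    # array of cluster indices; `centers` is a list of T center arrays.
    total = 0.0
    for assign_t, centers_t in zip(assignments, centers):
        total += ((features - centers_t[assign_t]) ** 2).sum(axis=1).mean()
    return total / len(assignments)
```

Because the T runs are randomized, samples that land in the same cluster every run are pulled firmly toward one center, while samples that flip between clusters receive conflicting pulls and stay put.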
During fine-tuning, because the loss is defined over multiple clustering runs with randomness, samples that stay in the same cluster across runs are drawn closer together, while samples attracted by different cluster centers in different runs do not approach any specific center, thereby improving the clustering results.\nAdditionally, to limit the computational cost while retaining the approximate self-supervised feature structure, we add a single linear layer on top of the self-supervised encoder. During training, the encoder weights are frozen and only the weights of the newly added linear layer are updated. To further reduce the cost, we pre-compute the self-supervised features for all samples and use them directly as input when training the new linear layer, eliminating the need to forward-pass through the backbone in each training iteration. Finally, we select labeled samples and generate PPL based on the output of the linear layer.\nSelect Labeled Samples: To improve robustness, we conduct multiple rounds of K-means clustering on the fine-tuned features. We then identify samples consistently assigned to the same cluster across all clustering rounds and take the mean of their features as the center of that class. Finally, we select the samples closest to these class centers and annotate them as the labeled set." |
| }, |
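The "Select Labeled Samples" procedure above (multi-run K-means, keeping only samples whose cluster assignment is stable across runs, then picking the sample nearest each class center) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy Lloyd's algorithm and the majority-overlap alignment of cluster ids across runs are our own assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Toy Lloyd's algorithm: returns (assignments, centers).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign, centers

def select_labeled_samples(X, k, runs=6):
    # Run K-means `runs` times; align each run's cluster ids to run 0
    # by majority overlap (a simple stand-in for proper matching).
    base, _ = kmeans(X, k, seed=0)
    all_assign = [base]
    for t in range(1, runs):
        a, _ = kmeans(X, k, seed=t)
        mapping = np.array([np.bincount(base[a == j], minlength=k).argmax()
                            for j in range(k)])
        all_assign.append(mapping[a])
    A = np.stack(all_assign)          # shape (runs, n)
    stable = (A == A[0]).all(0)       # consistently assigned across runs
    selected = []
    for j in range(k):
        members = np.where(stable & (A[0] == j))[0]
        if len(members) == 0:         # fall back to the plain cluster
            members = np.where(A[0] == j)[0]
        center = X[members].mean(0)   # class center from stable samples
        dist = ((X[members] - center) ** 2).sum(1)
        selected.append(int(members[dist.argmin()]))
    return np.array(selected)
```

On two well-separated blobs this returns one near-center sample per blob, which is the intended behavior: the queried labels then agree with the majority label of their cluster.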
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Experiments", |
| "text": "" |
| }, |
| { |
| "section_id": "5.1", |
| "parent_section_id": "5", |
| "section_name": "Results on CIFAR-10, CIFAR-100 and STL-10", |
| "text": "Our method is first evaluated on the common benchmarks: CIFAR-10 [42 ###reference_b42###], CIFAR-100 [42 ###reference_b42###], and STL-10 [45 ###reference_b45###]. Both CIFAR-10 and CIFAR-100 contain 50,000 training samples and 10,000 test samples, all with an image size of 32\u00d732. STL-10 contains 5,000 labeled training samples, 100,000 unlabeled training samples, and 8,000 test samples, all at a resolution of 96\u00d796. We experimented with labeled sets of various sizes, in particular with fewer labeled samples than in previous papers (10 labeled samples for CIFAR-10, 200 for CIFAR-100, and 20 for STL-10)." |
| }, |
| { |
| "section_id": "5.1.1", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.1 Baseline Methods", |
| "text": "We compare our method with: (1) Semi-supervised learning from scratch: FixMatch [4 ###reference_b4###], FlexMatch [5 ###reference_b5###], and CoMatch [11 ###reference_b11###]. (2) Semi-supervised learning initialized with a self-supervised model: SelfMatch [12 ###reference_b12###]. SelfMatch refers to a training framework that uses standard semi-supervised training methods to fine-tune models initialized with self-supervised pre-training weights, and it applies to various self-supervised pre-training and semi-supervised training methods. To set a fair baseline, we employ the same semi-supervised (FlexMatch) and self-supervised (SimSiam [26 ###reference_b26###]) training methods for SelfMatch as those used in our framework. (3) The plug-in model selection method Sel [6 ###reference_b6###], which is specifically optimized for semi-supervised training with few annotations. (4) LESS [29 ###reference_b29###], which constructs self-supervised signals to enhance semi-supervised training in scenarios with limited annotations. The annotated samples for the aforementioned baseline methods were randomly selected. (5) The active semi-supervised learning method USL [41 ###reference_b41###], which conducts semi-supervised training by actively selecting labeled samples. To establish a fair baseline, we use the same self-supervised pre-training weight initialization for our method, SelfMatch, and USL." |
| }, |
| { |
| "section_id": "5.1.2", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.2 Implementation Details", |
| "text": "Many existing semi-supervised learning methods involve pseudo-labeling techniques, and our method readily integrates with them. In our implementation, we conduct experiments with FlexMatch, one of the state-of-the-art semi-supervised approaches. For the self-supervised pre-training, any self-supervised method can be employed. For this study, we chose SimSiam [26 ###reference_b26###], a state-of-the-art self-supervised method, because it has demonstrated good pre-training performance even on small-scale datasets like CIFAR-10.\nThe self-supervised phase uses the same backbone as the semi-supervised learning stage, and we adhere to the hyper-parameter settings proposed by SimSiam [26 ###reference_b26###]. The network weights obtained from self-supervised learning serve as the initialization for the backbone of the semi-supervised model.\nWe set network architectures following previous work [5 ###reference_b5###]: WRN-28-2 [46 ###reference_b46###] for CIFAR-10, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10. For CIFAR-10, CIFAR-100, and STL-10, we set hyper-parameters following TorchSSL [5 ###reference_b5###]: SGD with momentum 0.9, an initial learning rate of 0.03, an unlabeled loss weight of 1, an unlabeled-to-labeled batch ratio of 7, a batch size of 64, a total of 1024\u00d71024 training iterations, and a cosine annealing learning rate scheduler.\nThe hyper-parameters involved in active learning and label propagation in this paper are the number of clustering runs and the number of classes in each clustering. Specifically, clustering is performed 6 times in both the active sampling strategy and label propagation. For active sampling, the number of classes in each clustering is equal to the number of selected samples. In label propagation, clustering is accomplished using Constrained Seed K-means [44 ###reference_b44###], and the minimum number of classes is set to match the number of categories in the dataset. 
For CIFAR-10 and STL-10, the number of clusters in label propagation takes values of 10, 20, 30, 40, 50, and 60; for CIFAR-100, it is set to 100, 200, 300, 400, 500, and 600, respectively. The linear layer mentioned in Sec. 4.4 ###reference_### has the same dimension as the final layer of the backbone, and the fine-tuned features are trained for 40 epochs. Additionally, the switch point from PPL-guided pseudo-labels to regular pseudo-labels during semi-supervised training is set to the 60th epoch." |
| }, |
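Label propagation with Constrained Seed K-means, as used above, can be sketched in a few lines: class means of the labeled samples seed the centers, labeled samples stay pinned to their own class during the iterations, and every sample inherits the label of its final cluster. This is a simplified sketch under the assumption of one cluster per class (the paper also uses larger cluster counts), with function and variable names of our own choosing.

```python
import numpy as np

def seeded_kmeans_propagate(X, labeled_idx, labeled_y, iters=30):
    # Seeds: one center per class, initialized from labeled-sample means.
    classes = np.unique(labeled_y)
    centers = np.stack([X[labeled_idx[labeled_y == c]].mean(0) for c in classes])
    pinned = np.searchsorted(classes, labeled_y)  # class -> cluster index
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        assign[labeled_idx] = pinned              # the "seed" constraint
        for j in range(len(classes)):
            centers[j] = X[assign == j].mean(0)
    return classes[assign]                        # prior pseudo-labels (PPL)
```

With one labeled sample per well-separated cluster, every unlabeled sample receives the label of the labeled sample sharing its cluster, which is exactly the PPL behavior the framework relies on.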
| { |
| "section_id": "5.1.3", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.3 Results", |
| "text": "The experimental results are shown in Table 2 ###reference_###. Given the lack of labeled data for semi-supervised training, selecting the best model using a validation set is infeasible. Following previous work [6 ###reference_b6###], we report the median accuracy of the last 20 checkpoints. Notably, \u201cSel\u201d denotes the use of the metric proposed by Kim et al. [6 ###reference_b6###] to select the final model for evaluation. Our results are presented in two ways: first, the median accuracy of the last 20 models, reported as \u201cOurs\u201d, and second, the accuracy of the models selected using \u201cSel\u201d [6 ###reference_b6###], reported as \u201cOurs-Sel\u201d, where the total number of training iterations is 1024\u00d71024. Additionally, to demonstrate the accelerated convergence of our method, we report the accuracy achieved with only 1/4 of the total training iterations, termed \u201cOurs-Early-Sel\u201d.\nEffectiveness: Our approach demonstrates significant improvements over most baselines, except for LESS, which shows similar performance under some experimental setups. Notably, the results with LESS were obtained under somewhat unfair conditions: LESS selects labeled samples through random stratified sampling (i.e., randomly selecting an equal number of samples per class), ensuring that all classes are covered even with limited annotations, which requires additional knowledge about the dataset. In contrast, our method selects samples without prior knowledge of the dataset\u2019s classes, which explains the similar results observed between LESS and our method.\nA comparison between FlexMatch and SelfMatch (with FlexMatch) indicates that solely relying on self-supervised pre-training initialization does not significantly improve the performance of semi-supervised training in nearly half of the experiments. 
Our method, incorporating initialization, PPL, and annotated sample selection, makes more comprehensive use of the valuable information from self-supervised pre-training, resulting in superior overall performance.\nFurthermore, our method typically yields larger improvements when dealing with a limited number of annotations (approximately 1 or 2 labels per class on average). As the number of labels increases, the benefits of our method decrease, especially on simple datasets, because semi-supervised training can already achieve impressive performance with limited annotated data, reducing the need for additional assistance from PPL. For example, on CIFAR-10 the baseline method FlexMatch, using only 40 annotated samples, achieves accuracy comparable to fully supervised training, making the impact of PPL less prominent.\nFast convergence: Our method demonstrates accelerated convergence by explicitly employing PPL to initiate semi-supervised training. As shown in Table 2 ###reference_###, our method (Ours-Early-Sel) achieves about 97% accuracy with only 1/4 of the training iterations, and at that budget it also outperforms the baseline methods in most cases. Additionally, we compared the computational time required for our method to reach the final accuracy of SelfMatch (with FlexMatch), which is initialized with self-supervised pre-trained weights. As illustrated in Table 3 ###reference_###, our method reaches the final accuracy of SelfMatch in approximately 1/3 of the training time." |
| }, |
| { |
| "section_id": "5.2", |
| "parent_section_id": "5", |
| "section_name": "Results on ImageNet", |
| "text": "We conducted experiments on the large-scale dataset ImageNet-1k [1 ###reference_b1###], which has 1.28 million training images and 50,000 validation images. Following previous works [5 ###reference_b5###, 47 ###reference_b47###], we adopted ResNet-50 [48 ###reference_b48###] as the backbone. Due to computational resource limitations, we set hyper-parameters following the method of training from self-supervised pre-training weights [47 ###reference_b47###, 15 ###reference_b15###] instead of the original FlexMatch [5 ###reference_b5###]. The total number of training epochs is 50, with a switch point at the 4th epoch.\nFlexMatch and SelfMatch served as our baseline methods. We implemented two versions of SelfMatch, employing different semi-supervised training approaches: FlexMatch and FixMatch. For FixMatch, we employed a variant known as FixMatch-EMAN, where EMAN [15 ###reference_b15###] is utilized to enhance both pre-training and semi-supervised training effectiveness. To ensure fairness, both SelfMatch and our method are initialized with the same pre-trained weights, as shown in Table 4 ###reference_###; they are pre-trained using BYOL [25 ###reference_b25###] and BYOL-EMAN [15 ###reference_b15###], respectively. The annotated samples for the baseline methods were randomly selected, which differs from existing semi-supervised learning experimental setups: in contrast to the common practice of randomly selecting labeled samples per class, our experiments were conducted in a more realistic setting, randomly selecting annotated samples from the entire dataset.\nThe same trend is observed in the experiments on ImageNet, as detailed in Table 4 ###reference_###. Our method outperforms SelfMatch significantly, especially when labeled data is limited. This suggests that our method can more effectively leverage pre-trained models to enhance semi-supervised learning performance. 
Additionally, this improvement is consistent across different semi-supervised training methods, including FlexMatch and FixMatch-EMAN.\n###table_1###" |
| }, |
| { |
| "section_id": "5.3", |
| "parent_section_id": "5", |
| "section_name": "Computational Complexity", |
| "text": "The additional computational complexity of the proposed method can be divided into two main components: the active learning part and the PPL-guided semi-supervised training. The PPL-guided semi-supervised training introduces one additional addition and one normalization operation per iteration compared to standard semi-supervised training, so this part has essentially the same computational complexity as standard semi-supervised learning methods.\nThe computational complexity of the active learning and PPL generation part is dominated by three sets of K-means clustering and the feature fine-tuning. The first and second sets of K-means are performed on the self-supervised features and the fine-tuned features, respectively, for sample selection. The third set of K-means is applied to the fine-tuned features for label propagation to generate prior pseudo-labels. Let d be the feature dimension, I the maximum number of K-means iterations, K the number of clusters, and N the number of samples in the dataset. Each set of K-means clustering is executed T times (details are in Section 5.1.2 ###reference_.SSS2###), so the total computational complexity for clustering is O(T\u00b7N\u00b7K\u00b7d\u00b7I). For fine-tuning the features, since only a simple linear layer is trained rather than the entire neural network, the training time required is considerably less than that of semi-supervised training.\nOverall, compared to standard semi-supervised learning methods, which require extensive training iterations over the entire neural network, the proposed method adds only marginal computational time for active learning and sample selection." |
| }, |
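As a back-of-envelope check of the clustering cost: the dominant operation in each K-means iteration is computing N\u00d7K squared distances of cost d each, repeated for I iterations and T runs across the three clustering sets. The concrete numbers below are illustrative assumptions at roughly CIFAR-10 scale, not figures reported in the paper.

```python
# Illustrative operation count for the clustering cost (N, d, K, I, T
# are assumptions at roughly CIFAR-10 scale, not figures from the paper).
N, d = 50_000, 512      # samples, feature dimension
K, I, T = 10, 100, 6    # clusters, K-means iterations, clustering runs
sets = 3                # self-sup. features, fine-tuned features, propagation
ops = sets * T * I * N * K * d   # distance multiply-adds for the pipeline
print(f"~{ops:.2e} multiply-adds for all clustering")
```

A few hundred billion multiply-adds is on the order of a handful of forward passes through a modern backbone over the dataset, which is why the clustering overhead is marginal next to a million iterations of semi-supervised training.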
| { |
| "section_id": "5.4", |
| "parent_section_id": "5", |
| "section_name": "Model Detailed Analysis", |
| "text": "We further analyze our approach from seven aspects: ablation studies for the whole framework, the influence of the switching point, ablation studies for our active sampling strategy, comparisons of active learning strategies, a detailed analysis of prior pseudo-label propagation, the influence of self-supervised learning tasks, and clustering results on self-supervised features." |
| }, |
| { |
| "section_id": "5.4.1", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.1 Ablation Experiment of Framework", |
| "text": "Ablation experiments were performed on CIFAR-10 with 10 labels. We kept the same hyper-parameters, except for reducing the total number of training iterations to 1122304, equivalent to 2304 epochs. Two components of our method are evaluated: active labeled-set selection and PPL. The results are shown in Table 5 ###reference_###.\n(1) When working with a limited number of labels, the active learning component yields more stable and accurate results in semi-supervised training.\n(2) Whether annotated samples are randomly selected or actively chosen, the use of PPL significantly improves the performance of semi-supervised learning." |
| }, |
| { |
| "section_id": "5.4.2", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.2 Influence of switching point", |
| "text": "To assess the impact of the PPL switch point on semi-supervised training, we conducted four experiments on the CIFAR-10 dataset with 10 and 40 labeled samples, respectively. The experiments included: (1) without PPL; (2) employing PPL as in Eq. (5 ###reference_###) with the switch point set to 5 epochs; (3) the switch point set to 60 epochs; and (4) direct PPL, i.e., the switch point equal to the total number of training iterations. The results are depicted in Fig. 6(a) ###reference_sf1### and Fig. 6(b) ###reference_sf2###.\n###figure_9### ###figure_10### PPL enhances semi-supervised training accuracy in the early stages, but continuous use of PPL (setting (4)) hinders the sustained update of pseudo-labels, thereby impacting the final semi-supervised performance. For instance, in Fig. 6(b) ###reference_sf2###, the semi-supervised training accuracy of setting (1) surpasses that of continuous PPL (setting (4)) after about 70 training epochs. Hence, a switching mechanism, as represented by Eq. (5 ###reference_###), is necessary for guiding the semi-supervised training process.\nThe specific placement of the switch point has minimal effect on the ultimate performance of semi-supervised training. A larger switch point (i.e., using PPL for more training iterations) yields higher accuracy in the early stages of training with limited labeled data, as shown in Fig. 6(a) ###reference_sf1###, while a smaller switch point helps avoid the constraint that fixed PPL imposes on updating pseudo-labels when more labeled data is available, as shown in Fig. 6(b) ###reference_sf2###; in either case, after the switch the training accuracy of the different settings quickly becomes consistent. Therefore, setting a roughly appropriate switch point has minimal impact on the final performance of semi-supervised training." |
| }, |
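A sketch of the switching mechanism discussed above. The exact form of Eq. (5) is not reproduced here; we assume, consistently with the complexity analysis's note that PPL guidance adds only one addition and one normalization per iteration, that the model's predicted distribution is combined with the prior pseudo-label and renormalized before the switch point, and used as-is afterwards. Function and argument names are our own.

```python
import numpy as np

def guided_pseudo_label(p_model, ppl, epoch, switch_epoch=60):
    # Before the switch point: add the prior pseudo-label to the model's
    # predicted distribution and renormalize; afterwards, fall back to
    # the model's own prediction so pseudo-labels can keep updating.
    if epoch < switch_epoch:
        p = p_model + ppl                         # the extra addition...
        return p / p.sum(axis=-1, keepdims=True)  # ...and normalization
    return p_model
```

Early in training the PPL dominates an uncertain model prediction, while after the switch the model's own, by-then-reliable predictions take over, matching the behavior seen in settings (1)-(4).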
| { |
| "section_id": "5.4.3", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.3 Ablation Study of Our Active Sampling Strategy", |
| "text": "We conducted ablation experiments on CIFAR-10 to compare the effects of the various components of our proposed active learning strategy. In these experiments, \u201cK-medoids\u201d refers to clustering only once, while \u201cmulti-clustering\u201d involves clustering 6 times, as described in Sec. 5.1.2 ###reference_.SSS2###.\nAs shown in Table 6 ###reference_###, fine-tuning the features yields significant improvements, providing better class coverage and more accurate pseudo-labels. This is especially true with fewer annotations, where multi-clustering and fine-tuned features bring greater benefits.\nWe also investigated the impact of the number of clustering runs on the samples selected by our proposed active learning strategy. The experiment was conducted on CIFAR-10 with 10 labels. As shown in Fig. 7(a) ###reference_sf1### and Fig. 7(b) ###reference_sf2###, our strategy is robust to the number of clustering runs: across different numbers of runs, it consistently selects samples that cover all classes, even with very few annotations. The number of runs primarily influences the accuracy of the PPL: the more runs, the smaller the variance, and the accuracy of the pseudo-labels improves slightly.\n###figure_11### ###figure_12###" |
| }, |
| { |
| "section_id": "5.4.4", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.4 Influence of Active Sampling Strategy", |
| "text": "We compare different active learning strategies on CIFAR-10, CIFAR-100, and STL-10, with all hyper-parameters consistent with those in Sec. 5.1.2 ###reference_.SSS2###, except that all models are trained for 1122304 iterations. Many active learning strategies involve multiple rounds of iteration for selecting annotated samples, often resulting in high training costs, which contradicts the fundamental goal of active semi-supervised learning: reducing the overall cost. Therefore, we choose active learning methods that select annotated samples in a single pass as our baselines. Specifically, random sampling, K-medoids [33 ###reference_b33###], and Coreset-greedy [33 ###reference_b33###] are selected. As shown in Table 7 ###reference_###, our method consistently outperforms the other active learning strategies. K-medoids outperforms random sampling in most cases, whereas Coreset often performs worse. This discrepancy arises because Coreset selects the samples least similar to the features of the existing labeled samples, while K-medoids selects the samples closest to the cluster centers. Consequently, Coreset is more likely to select outliers, which decreases the performance of semi-supervised training. This observation aligns with previous work [37 ###reference_b37###], which found that some active learning strategies can have adverse effects when the number of labeled samples is limited." |
| }, |
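The Coreset-greedy baseline is k-center greedy selection: repeatedly label the sample farthest from everything already selected. A minimal sketch (our own implementation, with the first sample chosen arbitrarily) makes the outlier-seeking behavior discussed above concrete:

```python
import numpy as np

def coreset_greedy(X, budget, first=0):
    # k-center greedy: repeatedly pick the sample farthest from the
    # current selection (in squared Euclidean feature distance).
    selected = [first]
    min_d = ((X - X[first]) ** 2).sum(1)   # distance to nearest selected
    for _ in range(budget - 1):
        nxt = int(min_d.argmax())
        selected.append(nxt)
        min_d = np.minimum(min_d, ((X - X[nxt]) ** 2).sum(1))
    return np.array(selected)
```

On features lying on a line from 0 to 9, starting from 0 the rule jumps straight to the extreme point 9 before filling the middle: with a tiny budget the selection is dominated by boundary points, which is exactly why it can hurt label propagation.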
| { |
| "section_id": "5.4.5", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.5 Prior Pseudo-label Propagation", |
| "text": "For PPL generation, we compare our method with LLGC [49 ###reference_b49###], a typical baseline for label spreading, with the hyper-parameters of LLGC following [50 ###reference_b50###]. Additionally, we investigate the impact of selecting labeled samples with different active learning methods on the accuracy of the pseudo-labels, as presented in Table 8 ###reference_###. The results confirm that the labeled-sample selection strategy has a large impact on pseudo-label accuracy: samples selected by our active learning strategy result in more accurate pseudo-labels across the different label propagation methods. Our pseudo-label propagation method outperforms LLGC when the number of labeled samples is close to the number of true classes, but performs slightly worse than LLGC when more labels are available.\nFurthermore, we investigate the impact of the number of classes used in clustering during our label propagation. The experiments include three settings with different numbers of clusters, as summarized in Table 9 ###reference_###. Expected calibration error (ECE) [51 ###reference_b51###] is employed to assess how well the pseudo-label confidence is calibrated to the true accuracy, where smaller values indicate less miscalibration. The results confirm that using different numbers of clusters in label propagation is a good compromise between accuracy and calibration." |
| }, |
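Expected calibration error, the metric used in Table 9, bins predictions by confidence and averages the per-bin gap between accuracy and mean confidence, weighted by bin size. A minimal sketch with equal-width bins (the exact binning in [51] may differ):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    # Equal-width confidence bins; ECE is the bin-size-weighted mean of
    # the |accuracy - mean confidence| gap over the non-empty bins.
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

A perfectly calibrated predictor (80% confidence, 80% of those predictions correct) scores 0, while a fully overconfident one scores its entire confidence gap.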
| { |
| "section_id": "5.4.6", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.6 Influence of Self-Supervised Learning Tasks", |
| "text": "We compared the impact of different self-supervised training methods on prior pseudo-label accuracy on CIFAR-10. Considering the availability of many self-supervised models pre-trained on large datasets in the deep learning community, we also investigated whether our method can generate high-quality prior pseudo-labels from these pre-trained models (i.e., models pre-trained on different datasets). As shown in Table 10 ###reference_###, except for the image-reconstruction-based pre-training method MAE [20 ###reference_b20###], our method (using clustering and label propagation) produces high-quality prior pseudo-labels across various contrastive self-supervised pre-training tasks." |
| }, |
| { |
| "section_id": "5.4.7", |
| "parent_section_id": "5.4", |
| "section_name": "5.4.7 Clustering Results on Self-Supervised Features", |
| "text": "We analyzed the clustering results in the self-supervised feature space, as shown in Fig. 8(a) ###reference_sf1### and Fig. 8(b) ###reference_sf2###. The self-supervised features were clustered five times, with the number of clusters equal to the actual number of classes in the datasets. The samples were divided into 10 bins based on the distance between the sample features and the cluster centers, and we calculated the proportion of sample labels in each bin that match the dominant label of the cluster. The results show that samples near the center of a cluster are more likely to share the dominant label of that cluster.\n###figure_13### ###figure_14###" |
| }, |
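The binning analysis in Sec. 5.4.7 can be reproduced with a short routine: sort samples by distance to their assigned cluster center, split them into bins, and measure per bin how often a sample's label matches the dominant label of its cluster. This is a sketch of the analysis, not the paper's code; the equal-size binning over globally sorted distances is our assumption.

```python
import numpy as np

def center_distance_agreement(X, y, assign, centers, n_bins=10):
    # Distance of each sample to its assigned cluster center.
    d = np.linalg.norm(X - centers[assign], axis=1)
    # Dominant (most frequent) true label within each cluster.
    dominant = np.array([np.bincount(y[assign == c]).argmax()
                         for c in range(len(centers))])
    # Sort by distance, then split into bins and average the agreement.
    agree = (y == dominant[assign])[np.argsort(d)]
    return [chunk.mean() for chunk in np.array_split(agree, n_bins)]
```

On clean, well-separated clusters every bin reports full agreement; on real self-supervised features the agreement decays for the outer bins, which is the trend Fig. 8 shows.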
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "Conclusions and Limitations", |
| "text": "In this paper, we observed that in scenarios with limited labeled samples, semi-supervised training often quickly compromises the well-established feature representations obtained through self-supervised training. Weight initialization, in such cases, fails to effectively transfer valuable information from self-supervised pre-training to the semi-supervised model, resulting in a reduction of the benefits gained from combining self-supervised and semi-supervised training. Motivated by this observation, we propose using PPL as an intermediate step to assist semi-supervised training in acquiring valuable information from self-supervised pre-training. Additionally, to more fully exploit self-supervised pre-training information, we introduce a novel active learning strategy to enhance the accuracy of PPL. By combining weight initialization, PPL, and the active learning strategy, our framework provides a more effective starting point for semi-supervised training. Experiments on multiple image classification datasets demonstrate the effectiveness of our approach in enhancing semi-supervised learning performance with limited labeled data. Furthermore, our method readily integrates with existing semi-supervised learning approaches that include pseudo-labeling techniques.\nOur method primarily relies on prior pseudo-labels generated through clustering on pre-trained features to enhance semi-supervised training performance. This approach faces limitations in scenarios where clustering cannot effectively distinguish between different classes, such as in multi-label learning where a single image contains multiple objects. Additionally, the fixed pseudo-label switching time may hinder performance in nonstationary environments, such as stream learning." |
| }, |
| { |
| "section_id": "7", |
| "parent_section_id": null, |
| "section_name": "Acknowledgements", |
| "text": "The authors acknowledge the University of Sydney\u2019s high performance computing cluster, Artemis, for providing the computing resources. This work was supported by the Australian Research Council [Grant LE200100049]." |
| } |
| ], |
| "appendix": [], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Impact of self-supervised pre-training weight initialization on semi-supervised performance. The experiments adopted the semi-supervised training method, FlexMatch. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.9.1.1\"><span class=\"ltx_text\" id=\"S3.T1.8.8.9.1.1.1\" style=\"font-size:90%;\">Self-sup.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.8.8.9.1.2\"><span class=\"ltx_text\" id=\"S3.T1.8.8.9.1.2.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S3.T1.8.8.9.1.3\"><span class=\"ltx_text\" id=\"S3.T1.8.8.9.1.3.1\" style=\"font-size:90%;\">CIFAR-100</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.10.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S3.T1.8.8.10.2.1\"><span class=\"ltx_text\" id=\"S3.T1.8.8.10.2.1.1\" style=\"font-size:90%;\">Initialization</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.8.8.10.2.2\"><span class=\"ltx_text\" id=\"S3.T1.8.8.10.2.2.1\" style=\"font-size:90%;\">10 labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S3.T1.8.8.10.2.3\"><span class=\"ltx_text\" id=\"S3.T1.8.8.10.2.3.1\" style=\"font-size:90%;\">40 labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.8.8.10.2.4\"><span class=\"ltx_text\" id=\"S3.T1.8.8.10.2.4.1\" style=\"font-size:90%;\">200 labels</span></th>\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.8.8.10.2.5\"><span class=\"ltx_text\" id=\"S3.T1.8.8.10.2.5.1\" style=\"font-size:90%;\">400 labels</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4.5\"><span class=\"ltx_text\" id=\"S3.T1.4.4.4.5.1\" style=\"font-size:90%;\">None</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1\" style=\"font-size:90%;\">59.06</span><span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.2\" style=\"font-size:90%;\">19.80</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S3.T1.2.2.2.2.1\" style=\"font-size:90%;\">94.86</span><span class=\"ltx_text\" id=\"S3.T1.2.2.2.2.2\" style=\"font-size:90%;\">0.05</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.3.3.3\">\n<span class=\"ltx_text\" id=\"S3.T1.3.3.3.3.1\" style=\"font-size:90%;\">30.59</span><span class=\"ltx_text\" id=\"S3.T1.3.3.3.3.2\" style=\"font-size:90%;\">1.69</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.4.4.4.4\">\n<span class=\"ltx_text\" id=\"S3.T1.4.4.4.4.1\" style=\"font-size:90%;\">46.11</span><span class=\"ltx_text\" id=\"S3.T1.4.4.4.4.2\" style=\"font-size:90%;\">2.83</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T1.8.8.8.5\"><span class=\"ltx_text\" id=\"S3.T1.8.8.8.5.1\" style=\"font-size:90%;\">Simsiam</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.5.5.5.1\">\n<span class=\"ltx_text\" id=\"S3.T1.5.5.5.1.1\" style=\"font-size:90%;\">57.03</span><span class=\"ltx_text\" id=\"S3.T1.5.5.5.1.2\" style=\"font-size:90%;\">14.86</span>\n</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_b ltx_border_r\" id=\"S3.T1.6.6.6.2\">\n<span class=\"ltx_text\" id=\"S3.T1.6.6.6.2.1\" style=\"font-size:90%;\">94.63</span><span class=\"ltx_text\" id=\"S3.T1.6.6.6.2.2\" style=\"font-size:90%;\">0.34</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.7.7.7.3\">\n<span class=\"ltx_text\" id=\"S3.T1.7.7.7.3.1\" style=\"font-size:90%;\">28.63</span><span class=\"ltx_text\" id=\"S3.T1.7.7.7.3.2\" style=\"font-size:90%;\">4.13</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.8.8.8.4\">\n<span class=\"ltx_text\" id=\"S3.T1.8.8.8.4.1\" style=\"font-size:90%;\">51.24</span><span class=\"ltx_text\" id=\"S3.T1.8.8.8.4.2\" style=\"font-size:90%;\">1.02</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 1: Impact of self-supervised pre-training weight initialization on semi-supervised performance. The experiments adopted the semi-supervised training method, FlexMatch. " |
| }, |
| "2": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.66.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.67.2\" style=\"font-size:90%;\">Comparison of accuracy on CIFAR-10, CIFAR-100 and STL-10. All results are averaged over 3 runs. The best results are shown in red and the second best results are shown in blue.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.64\" style=\"width:433.6pt;height:170.5pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-105.0pt,41.1pt) scale(0.673786488335232,0.673786488335232) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.64.64\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.64.64.65.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.64.64.65.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.64.64.65.1.2\">Active</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.64.64.65.1.3\">Self-sup.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.65.1.4\">CIFAR-10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.65.1.5\">CIFAR-100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.65.1.6\">STL-10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.64.64.66.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.64.64.66.2.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.64.64.66.2.2\">Learning</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.64.64.66.2.3\">Initialization</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.64.64.66.2.4\">10 labels</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_r\" id=\"S5.T2.64.64.66.2.5\">40 labels</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.64.64.66.2.6\">200 labels</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.64.64.66.2.7\">400 labels</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.64.64.66.2.8\">20 labels</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.64.64.66.2.9\">40 labels</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.64.64.67.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.64.64.67.3.1\">Fully-Supervised</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.64.64.67.3.2\">-</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.64.64.67.3.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.67.3.4\">95.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.67.3.5\">80.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.64.64.67.3.6\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.6.7\">FixMatch</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.6.6.6.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.6.9\">None</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.1.1.1\">60.01±7.41</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.2\">85.57±5.21</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.3.3.3.3\">35.83±1.63</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.4.4\">46.04±1.41</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.5.5.5.5\">43.24±6.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.6.6.6.6\">60.92±5.60</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T2.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.12.12.12.7\">FixMatch-Sel</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.12.12.12.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.12.12.12.9\">None</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.1\">65.73±10.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.8.8.8.2\">89.87±4.96</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.9.9.9.3\">35.78±1.58</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.10.10.10.4\">46.05±1.28</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.11.11.11.5\">45.45±3.24</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.12.12.12.6\">63.93±9.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.18.18.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.18.18.18.7\">FlexMatch</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.18.18.18.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.18.18.18.9\">None</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.13.13.13.1\">59.06±19.80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.14.14.14.2\"><span class=\"ltx_text\" id=\"S5.T2.14.14.14.2.1\" style=\"color:#FF0000;\">94.86±0.05</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.15.15.15.3\">30.59±1.69</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.16.16.16.4\">46.11±2.83</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.17.17.17.5\">38.31±16.69</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.18.18.18.6\">54.90±8.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.22.22.22\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.22.22.22.5\">CoMatch</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.22.22.22.6\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.22.22.22.7\">None</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T2.19.19.19.1\">65.10±7.81</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.20.20.20.2\">92.16±4.97</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.21.21.21.3\">32.51±1.15</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.22.22.22.4\">41.72±2.04</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.22.22.22.8\">-</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.22.22.22.9\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.28.28.28\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.28.28.28.7\">LESS</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.28.28.28.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.28.28.28.9\">None</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.23.23.23.1\">64.40±10.90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.24.24.24.2\">93.20±2.10</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.25.25.25.3\">42.50±3.20</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.26.26.26.4\">51.30±2.40</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.27.27.27.5\">48.98±5.19</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.28.28.28.6\"><span class=\"ltx_text\" id=\"S5.T2.28.28.28.6.1\" style=\"color:#0000FF;\">64.20±5.10</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.34.34.34\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.34.34.34.7\">SelfMatch\n(w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.34.34.34.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.34.34.34.9\"><span class=\"ltx_text\" id=\"S5.T2.34.34.34.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.29.29.29.1\">57.03±14.86</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.30.30.30.2\">94.63±0.34</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.31.31.31.3\">28.63±4.13</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_r\" id=\"S5.T2.32.32.32.4\">51.24±1.02</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.33.33.33.5\">44.78±1.98</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.34.34.34.6\">57.06±5.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.40.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.40.40.40.7\">SelfMatch-Sel (w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.40.40.40.8\">No</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.40.40.40.9\"><span class=\"ltx_text\" id=\"S5.T2.40.40.40.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.35.35.35.1\">59.13±15.43</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.36.36.36.2\"><span class=\"ltx_text\" id=\"S5.T2.36.36.36.2.1\" style=\"color:#0000FF;\">94.79±0.07</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.37.37.37.3\">36.58±3.31</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.38.38.38.4\">53.05±1.85</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.39.39.39.5\">45.73±4.11</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.40.40.40.6\">63.14±9.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.46.46.46\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.46.46.46.7\">USL-Sel (w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.46.46.46.8\"><span class=\"ltx_text\" id=\"S5.T2.46.46.46.8.1\" style=\"color:#FF0000;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.46.46.46.9\"><span class=\"ltx_text\" id=\"S5.T2.46.46.46.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.41.41.41.1\">55.76±6.45</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.42.42.42.2\">89.22±0.79</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.43.43.43.3\">25.31±10.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
id=\"S5.T2.44.44.44.4\">29.73±4.77</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.45.45.45.5\"><span class=\"ltx_text\" id=\"S5.T2.45.45.45.5.1\" style=\"color:#FF0000;\">55.90±1.33</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.46.46.46.6\">61.94±5.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.52.52.52\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.52.52.52.7\">Ours (w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.52.52.52.8\"><span class=\"ltx_text\" id=\"S5.T2.52.52.52.8.1\" style=\"color:#FF0000;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.52.52.52.9\"><span class=\"ltx_text\" id=\"S5.T2.52.52.52.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.47.47.47.1\">69.08±5.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.48.48.48.2\">94.35±0.20</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.49.49.49.3\"><span class=\"ltx_text\" id=\"S5.T2.49.49.49.3.1\" style=\"color:#0000FF;\">46.87±2.47</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.50.50.50.4\">47.37±6.66</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.51.51.51.5\">49.57±9.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.52.52.52.6\">57.57±9.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.58.58.58\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.58.58.58.7\">Ours-Sel (w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.58.58.58.8\"><span class=\"ltx_text\" id=\"S5.T2.58.58.58.8.1\" style=\"color:#FF0000;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.58.58.58.9\"><span class=\"ltx_text\" id=\"S5.T2.58.58.58.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T2.53.53.53.1\"><span class=\"ltx_text\" id=\"S5.T2.53.53.53.1.1\" style=\"color:#FF0000;\">83.80±10.57</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.54.54.54.2\">94.32±0.09</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.55.55.55.3\"><span class=\"ltx_text\" id=\"S5.T2.55.55.55.3.1\" style=\"color:#FF0000;\">48.40±5.78</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.56.56.56.4\"><span class=\"ltx_text\" id=\"S5.T2.56.56.56.4.1\" style=\"color:#0000FF;\">59.16±2.37</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.57.57.57.5\"><span class=\"ltx_text\" id=\"S5.T2.57.57.57.5.1\" style=\"color:#0000FF;\">51.11±6.88</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.58.58.58.6\">61.89±7.73</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.64.64.64\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T2.64.64.64.7\">Ours-Early-Sel (w FlexMatch)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T2.64.64.64.8\"><span class=\"ltx_text\" id=\"S5.T2.64.64.64.8.1\" style=\"color:#FF0000;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T2.64.64.64.9\"><span class=\"ltx_text\" id=\"S5.T2.64.64.64.9.1\" style=\"color:#FF0000;\">Simsiam</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T2.59.59.59.1\"><span class=\"ltx_text\" id=\"S5.T2.59.59.59.1.1\" style=\"color:#0000FF;\">82.17±11.09</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T2.60.60.60.2\">93.72±0.22</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T2.61.61.61.3\">44.80±5.07</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T2.62.62.62.4\"><span class=\"ltx_text\" id=\"S5.T2.62.62.62.4.1\" style=\"color:#FF0000;\">61.50±2.00</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T2.63.63.63.5\">46.32±7.77</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_b\" id=\"S5.T2.64.64.64.6\"><span class=\"ltx_text\" id=\"S5.T2.64.64.64.6.1\" style=\"color:#FF0000;\">65.15±2.44</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>", |
| "capture": "Table 2: Comparison of accuracy on CIFAR-10, CIFAR-100 and STL-10. All results are averaged over 3 runs. The best results are shown in red and the second best results are shown in blue." |
| }, |
| "3": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Comparison of practical running time on a single RTX 3090 GPU. SelfMatch time refers to the total runtime of complete semi-supervised training, while our method\u2019s time indicates the duration required to reach SelfMatch\u2019s final accuracy.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.4.1.1.1\"><span class=\"ltx_text\" id=\"S5.T3.4.1.1.1.1\" style=\"font-size:90%;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.4.1.1.2\"><span class=\"ltx_text\" id=\"S5.T3.4.1.1.2.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.4.1.1.3\"><span class=\"ltx_text\" id=\"S5.T3.4.1.1.3.1\" style=\"font-size:90%;\">CIFAR-100</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.4.2.1.1\"><span class=\"ltx_text\" id=\"S5.T3.4.2.1.1.1\" style=\"font-size:90%;\">SelfMatch (w FlexMatch)</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.2.1.2\"><span class=\"ltx_text\" id=\"S5.T3.4.2.1.2.1\" style=\"font-size:90%;\">98.82 (hrs)</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.2.1.3\"><span class=\"ltx_text\" id=\"S5.T3.4.2.1.3.1\" style=\"font-size:90%;\">353.69 (hrs)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T3.4.3.2.1\"><span 
class=\"ltx_text\" id=\"S5.T3.4.3.2.1.1\" style=\"font-size:90%;\">Ours (w FlexMatch)</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T3.4.3.2.2\"><span class=\"ltx_text\" id=\"S5.T3.4.3.2.2.1\" style=\"font-size:90%;\">35.21 (hrs)</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T3.4.3.2.3\"><span class=\"ltx_text\" id=\"S5.T3.4.3.2.3.1\" style=\"font-size:90%;\">121.04 (hrs)</span></td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 3: Comparison of practical running time on a single RTX 3090 GPU. SelfMatch time refers to the total runtime of complete semi-supervised training, while our method\u2019s time indicates the duration required to reach SelfMatch\u2019s final accuracy." |
| }, |
| "4": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Comparison of accuracy on ImageNet. The best results are shown in red and the second best results are shown in blue.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.1.1.1\"><span class=\"ltx_text\" id=\"S5.T4.4.1.1.1.1\" style=\"font-size:90%;\">Semi-Supervised</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.1.1.2\"><span class=\"ltx_text\" id=\"S5.T4.4.1.1.2.1\" style=\"font-size:90%;\">Self-Supervised</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T4.4.1.1.3\"><span class=\"ltx_text\" id=\"S5.T4.4.1.1.3.1\" style=\"font-size:90%;\">0.2% labels</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T4.4.1.1.4\"><span class=\"ltx_text\" id=\"S5.T4.4.1.1.4.1\" style=\"font-size:90%;\">1% labels</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.2.2.1\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.1.1\" style=\"font-size:90%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.2.2.2\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.2.1\" style=\"font-size:90%;\">Initialization</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.2.2.3\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.3.1\" style=\"font-size:90%;\">Top-1</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.2.2.4\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.4.1\" style=\"font-size:90%;\">Top-5</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.2.2.5\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.5.1\" style=\"font-size:90%;\">Top-1</span></td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S5.T4.4.2.2.6\"><span class=\"ltx_text\" id=\"S5.T4.4.2.2.6.1\" style=\"font-size:90%;\">Top-5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.1\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.1.1\" style=\"font-size:90%;\">Fine-tune</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.2\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.2.1\" style=\"font-size:90%;\">BYOL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.3\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.3.1\" style=\"font-size:90%;\">26.0</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.4\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.4.1\" style=\"font-size:90%;\">48.7</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.5\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.5.1\" style=\"font-size:90%;\">53.2</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.3.6\"><span class=\"ltx_text\" id=\"S5.T4.4.3.3.6.1\" style=\"font-size:90%;\">68.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.4.1\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.1.1\" style=\"font-size:90%;\">FlexMatch</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.4.2\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.2.1\" style=\"font-size:90%;\">None</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.4.3\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.3.1\" style=\"font-size:90%;\">3.9</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.4.4\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.4.1\" style=\"font-size:90%;\">9.6</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.4.5\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.5.1\" style=\"font-size:90%;\">19.6</span></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T4.4.4.4.6\"><span class=\"ltx_text\" id=\"S5.T4.4.4.4.6.1\" style=\"font-size:90%;\">37.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.1\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.1.1\" style=\"font-size:90%;\">SelfMatch (w FlexMatch)</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.2\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.2.1\" style=\"font-size:90%;\">BYOL</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.3\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.3.1\" style=\"font-size:90%;\">30.3</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.4\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.4.1\" style=\"font-size:90%;\">48.8</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.5\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.5.1\" style=\"font-size:90%;\">57.3</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.5.6\"><span class=\"ltx_text\" id=\"S5.T4.4.5.5.6.1\" style=\"font-size:90%;\">80.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.1\"><span class=\"ltx_text\" id=\"S5.T4.4.6.6.1.1\" style=\"font-size:90%;\">SelfMatch (w FixMatch-EMAN)</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.2\"><span class=\"ltx_text\" id=\"S5.T4.4.6.6.2.1\" style=\"font-size:90%;\">BYOL-EMAN</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.3\"><span class=\"ltx_text\" id=\"S5.T4.4.6.6.3.1\" style=\"font-size:90%;\">32.7</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.4\"><span class=\"ltx_text\" id=\"S5.T4.4.6.6.4.1\" style=\"font-size:90%;\">52.7</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.5\"><span class=\"ltx_text\" id=\"S5.T4.4.6.6.5.1\" style=\"font-size:90%;color:#0000FF;\">59.6</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.6.6.6\"><span class=\"ltx_text\" 
id=\"S5.T4.4.6.6.6.1\" style=\"font-size:90%;color:#FF0000;\">81.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.1\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.1.1\" style=\"font-size:90%;\">Ours (w FlexMatch)</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.2\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.2.1\" style=\"font-size:90%;\">BYOL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.3\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.3.1\" style=\"font-size:90%;color:#0000FF;\">37.3</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.4\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.4.1\" style=\"font-size:90%;color:#0000FF;\">55.9</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.5\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.5.1\" style=\"font-size:90%;\">58.3</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.7.7.6\"><span class=\"ltx_text\" id=\"S5.T4.4.7.7.6.1\" style=\"font-size:90%;\">79.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.1\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.1.1\" style=\"font-size:90%;\">Ours (w FixMatch-EMAN)</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.2\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.2.1\" style=\"font-size:90%;\">BYOL-EMAN</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.3\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.3.1\" style=\"font-size:90%;color:#FF0000;\">40.4</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.4\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.4.1\" style=\"font-size:90%;color:#FF0000;\">60.7</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.5\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.5.1\" 
style=\"font-size:90%;color:#FF0000;\">60.4</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.8.8.6\"><span class=\"ltx_text\" id=\"S5.T4.4.8.8.6.1\" style=\"font-size:90%;color:#0000FF;\">81.5</span></td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 4: Comparison of accuracy on ImageNet. The best results are shown in red and the second best results are shown in blue." |
| }, |
| "5": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Ablation Study on CIFAR-10. Accuracy is the average over 3 runs.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.5.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T5.1.1.1.2\"><span class=\"ltx_text\" id=\"S5.T5.1.1.1.2.1\" style=\"font-size:90%;\">Ablation</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T5.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T5.1.1.1.1.1\" style=\"font-size:90%;\">Accuracy (avg. </span>±<span class=\"ltx_text\" id=\"S5.T5.1.1.1.1.2\" style=\"font-size:90%;\"> std.)</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.2.2.2.2\"><span class=\"ltx_text\" id=\"S5.T5.2.2.2.2.1\" style=\"font-size:90%;\">Random</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T5.2.2.2.1\">\n<span class=\"ltx_text\" id=\"S5.T5.2.2.2.1.1\" style=\"font-size:90%;\">58.17 </span>±<span class=\"ltx_text\" id=\"S5.T5.2.2.2.1.2\" style=\"font-size:90%;\"> 13.01</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.3.3.3.2\"><span class=\"ltx_text\" id=\"S5.T5.3.3.3.2.1\" style=\"font-size:90%;\">Random + PPL</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.3.3.3.1\">\n<span class=\"ltx_text\" id=\"S5.T5.3.3.3.1.1\" style=\"font-size:90%;\">68.01 </span>±<span class=\"ltx_text\" id=\"S5.T5.3.3.3.1.2\" style=\"font-size:90%;\"> 15.96</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row\" id=\"S5.T5.4.4.4.2\"><span class=\"ltx_text\" id=\"S5.T5.4.4.4.2.1\" style=\"font-size:90%;\">Active</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.4.4.4.1\">\n<span class=\"ltx_text\" id=\"S5.T5.4.4.4.1.1\" style=\"font-size:90%;\">76.65 </span>±<span class=\"ltx_text\" id=\"S5.T5.4.4.4.1.2\" style=\"font-size:90%;\"> 5.29</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T5.5.5.5.2\"><span class=\"ltx_text\" id=\"S5.T5.5.5.5.2.1\" style=\"font-size:90%;\">Active + PPL</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T5.5.5.5.1\">\n<span class=\"ltx_text\" id=\"S5.T5.5.5.5.1.1\" style=\"font-size:90%;\">84.43 </span>±<span class=\"ltx_text\" id=\"S5.T5.5.5.5.1.2\" style=\"font-size:90%;\"> 5.19</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 5: Ablation Study on CIFAR-10. Accuracy is the average over 3 runs." |
| }, |
| "6": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T6\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Ablation study of proposed active learning strategy on CIFAR-10. Accuracy reported is the average over 3 runs.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T6.21.21\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T6.1.1.1.2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T6.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T6.1.1.1.1.1\" style=\"font-size:90%;\">Accuracy of </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T6.1.1.1.3\"><span class=\"ltx_text\" id=\"S5.T6.1.1.1.3.1\" style=\"font-size:90%;\">Class Coverage</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.21.21.22.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T6.21.21.22.1.1\"><span class=\"ltx_text\" id=\"S5.T6.21.21.22.1.1.1\" style=\"font-size:90%;\"># Labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T6.21.21.22.1.2\"><span class=\"ltx_text\" id=\"S5.T6.21.21.22.1.2.1\" style=\"font-size:90%;\">10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T6.21.21.22.1.3\"><span class=\"ltx_text\" id=\"S5.T6.21.21.22.1.3.1\" style=\"font-size:90%;\">40</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T6.21.21.22.1.4\"><span class=\"ltx_text\" id=\"S5.T6.21.21.22.1.4.1\" style=\"font-size:90%;\">10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T6.21.21.22.1.5\"><span class=\"ltx_text\" id=\"S5.T6.21.21.22.1.5.1\" style=\"font-size:90%;\">40</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.6.6.6\">\n<td class=\"ltx_td ltx_align_left 
ltx_border_t\" id=\"S5.T6.2.2.2.1\">\n<span class=\"ltx_text\" id=\"S5.T6.2.2.2.1.1\" style=\"font-size:90%;\">K-medoids on </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T6.3.3.3.2\">\n<span class=\"ltx_text\" id=\"S5.T6.3.3.3.2.1\" style=\"font-size:90%;\">44.90</span>±<span class=\"ltx_text\" id=\"S5.T6.3.3.3.2.2\" style=\"font-size:90%;\">1.88</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T6.4.4.4.3\">\n<span class=\"ltx_text\" id=\"S5.T6.4.4.4.3.1\" style=\"font-size:90%;\">69.62</span>±<span class=\"ltx_text\" id=\"S5.T6.4.4.4.3.2\" style=\"font-size:90%;\">0.83</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T6.5.5.5.4\">\n<span class=\"ltx_text\" id=\"S5.T6.5.5.5.4.1\" style=\"font-size:90%;\">7.2</span>±<span class=\"ltx_text\" id=\"S5.T6.5.5.5.4.2\" style=\"font-size:90%;\">1.4</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T6.6.6.6.5\">\n<span class=\"ltx_text\" id=\"S5.T6.6.6.6.5.1\" style=\"font-size:90%;\">10.0</span>±<span class=\"ltx_text\" id=\"S5.T6.6.6.6.5.2\" style=\"font-size:90%;\">0.0</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.11.11.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.7.7.7.1\">\n<span class=\"ltx_text\" id=\"S5.T6.7.7.7.1.1\" style=\"font-size:90%;\">Multi-cluster on </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.8.8.8.2\">\n<span class=\"ltx_text\" id=\"S5.T6.8.8.8.2.1\" style=\"font-size:90%;\">61.94</span>±<span class=\"ltx_text\" id=\"S5.T6.8.8.8.2.2\" style=\"font-size:90%;\">5.42</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.9.9.9.3\">\n<span class=\"ltx_text\" id=\"S5.T6.9.9.9.3.1\" style=\"font-size:90%;\">72.54</span>±<span class=\"ltx_text\" id=\"S5.T6.9.9.9.3.2\" style=\"font-size:90%;\">1.42</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.10.10.10.4\">\n<span class=\"ltx_text\" id=\"S5.T6.10.10.10.4.1\" style=\"font-size:90%;\">9.0</span>±<span class=\"ltx_text\" 
id=\"S5.T6.10.10.10.4.2\" style=\"font-size:90%;\">0.7</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.11.11.11.5\">\n<span class=\"ltx_text\" id=\"S5.T6.11.11.11.5.1\" style=\"font-size:90%;\">10.0</span>±<span class=\"ltx_text\" id=\"S5.T6.11.11.11.5.2\" style=\"font-size:90%;\">0.0</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.16.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.12.12.12.1\">\n<span class=\"ltx_text\" id=\"S5.T6.12.12.12.1.1\" style=\"font-size:90%;\">K-medoids on </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.13.13.13.2\">\n<span class=\"ltx_text\" id=\"S5.T6.13.13.13.2.1\" style=\"font-size:90%;\">64.70</span>±<span class=\"ltx_text\" id=\"S5.T6.13.13.13.2.2\" style=\"font-size:90%;\">6.27</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.14.14.14.3\">\n<span class=\"ltx_text\" id=\"S5.T6.14.14.14.3.1\" style=\"font-size:90%;\">72.72</span>±<span class=\"ltx_text\" id=\"S5.T6.14.14.14.3.2\" style=\"font-size:90%;\">2.77</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.15.15.15.4\">\n<span class=\"ltx_text\" id=\"S5.T6.15.15.15.4.1\" style=\"font-size:90%;\">9.2</span>±<span class=\"ltx_text\" id=\"S5.T6.15.15.15.4.2\" style=\"font-size:90%;\">0.8</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T6.16.16.16.5\">\n<span class=\"ltx_text\" id=\"S5.T6.16.16.16.5.1\" style=\"font-size:90%;\">10.0</span>±<span class=\"ltx_text\" id=\"S5.T6.16.16.16.5.2\" style=\"font-size:90%;\">0.0</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.21.21.21\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T6.17.17.17.1\">\n<span class=\"ltx_text\" id=\"S5.T6.17.17.17.1.1\" style=\"font-size:90%;\">Multi-cluster on </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T6.18.18.18.2\"><span class=\"ltx_text\" id=\"S5.T6.18.18.18.2.1\" style=\"font-size:90%;color:#FF0000;\">71.41±4.10</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" 
id=\"S5.T6.19.19.19.3\"><span class=\"ltx_text\" id=\"S5.T6.19.19.19.3.1\" style=\"font-size:90%;color:#FF0000;\">74.911.66</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T6.20.20.20.4\"><span class=\"ltx_text\" id=\"S5.T6.20.20.20.4.1\" style=\"font-size:90%;color:#FF0000;\">9.50.5</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T6.21.21.21.5\">\n<span class=\"ltx_text\" id=\"S5.T6.21.21.21.5.1\" style=\"font-size:90%;\">10.0</span><span class=\"ltx_text\" id=\"S5.T6.21.21.21.5.2\" style=\"font-size:90%;\">0.0</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 6: Ablation study of the proposed active learning strategy on CIFAR-10. The accuracy reported is the average over 3 runs." |
| }, |
| "7": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T7\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Comparison of active sampling strategies. The accuracy reported is the average over 3 runs. The best results are shown in red and the second best results are shown in blue.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T7.24.24\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T7.24.24.25.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T7.24.24.25.1.1\"><span class=\"ltx_text\" id=\"S5.T7.24.24.25.1.1.1\" style=\"font-size:90%;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T7.24.24.25.1.2\"><span class=\"ltx_text\" id=\"S5.T7.24.24.25.1.2.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T7.24.24.25.1.3\"><span class=\"ltx_text\" id=\"S5.T7.24.24.25.1.3.1\" style=\"font-size:90%;\">CIFAR-100</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T7.24.24.25.1.4\"><span class=\"ltx_text\" id=\"S5.T7.24.24.25.1.4.1\" style=\"font-size:90%;\">STL-10</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.24.24.26.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S5.T7.24.24.26.2.1\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.1.1\" style=\"font-size:90%;\">Size of Labeled Set</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T7.24.24.26.2.2\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.2.1\" style=\"font-size:90%;\">10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T7.24.24.26.2.3\"><span 
class=\"ltx_text\" id=\"S5.T7.24.24.26.2.3.1\" style=\"font-size:90%;\">40</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T7.24.24.26.2.4\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.4.1\" style=\"font-size:90%;\">200</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T7.24.24.26.2.5\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.5.1\" style=\"font-size:90%;\">400</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T7.24.24.26.2.6\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.6.1\" style=\"font-size:90%;\">20</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T7.24.24.26.2.7\"><span class=\"ltx_text\" id=\"S5.T7.24.24.26.2.7.1\" style=\"font-size:90%;\">40</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T7.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T7.6.6.6.7\"><span class=\"ltx_text\" id=\"S5.T7.6.6.6.7.1\" style=\"font-size:90%;\">Random</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T7.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T7.1.1.1.1.1\" style=\"font-size:90%;color:#0000FF;\">58.1713.01</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T7.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S5.T7.2.2.2.2.1\" style=\"font-size:90%;\">82.81</span><span class=\"ltx_text\" id=\"S5.T7.2.2.2.2.2\" style=\"font-size:90%;\">8.10</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T7.3.3.3.3\">\n<span class=\"ltx_text\" id=\"S5.T7.3.3.3.3.1\" style=\"font-size:90%;\">38.84</span><span class=\"ltx_text\" id=\"S5.T7.3.3.3.3.2\" style=\"font-size:90%;\">3.09</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T7.4.4.4.4\">\n<span class=\"ltx_text\" id=\"S5.T7.4.4.4.4.1\" style=\"font-size:90%;\">58.42</span><span class=\"ltx_text\" 
id=\"S5.T7.4.4.4.4.2\" style=\"font-size:90%;\">2.37</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T7.5.5.5.5\">\n<span class=\"ltx_text\" id=\"S5.T7.5.5.5.5.1\" style=\"font-size:90%;\">51.14</span><span class=\"ltx_text\" id=\"S5.T7.5.5.5.5.2\" style=\"font-size:90%;\">6.79</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T7.6.6.6.6\">\n<span class=\"ltx_text\" id=\"S5.T7.6.6.6.6.1\" style=\"font-size:90%;\">63.33</span><span class=\"ltx_text\" id=\"S5.T7.6.6.6.6.2\" style=\"font-size:90%;\">5.97</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T7.12.12.12.7\"><span class=\"ltx_text\" id=\"S5.T7.12.12.12.7.1\" style=\"font-size:90%;\">K-medoids</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.7.7.7.1\">\n<span class=\"ltx_text\" id=\"S5.T7.7.7.7.1.1\" style=\"font-size:90%;\">47.26</span><span class=\"ltx_text\" id=\"S5.T7.7.7.7.1.2\" style=\"font-size:90%;\">6.62</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T7.8.8.8.2\"><span class=\"ltx_text\" id=\"S5.T7.8.8.8.2.1\" style=\"font-size:90%;color:#0000FF;\">87.425.42</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.9.9.9.3\"><span class=\"ltx_text\" id=\"S5.T7.9.9.9.3.1\" style=\"font-size:90%;color:#0000FF;\">43.394.00</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T7.10.10.10.4\"><span class=\"ltx_text\" id=\"S5.T7.10.10.10.4.1\" style=\"font-size:90%;color:#0000FF;\">59.100.36</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.11.11.11.5\"><span class=\"ltx_text\" id=\"S5.T7.11.11.11.5.1\" style=\"font-size:90%;color:#0000FF;\">51.034.89</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.12.12.12.6\"><span class=\"ltx_text\" id=\"S5.T7.12.12.12.6.1\" style=\"font-size:90%;color:#0000FF;\">68.292.93</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.18.18.18\">\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T7.18.18.18.7\"><span class=\"ltx_text\" id=\"S5.T7.18.18.18.7.1\" style=\"font-size:90%;\">Coreset</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.13.13.13.1\">\n<span class=\"ltx_text\" id=\"S5.T7.13.13.13.1.1\" style=\"font-size:90%;\">31.92</span><span class=\"ltx_text\" id=\"S5.T7.13.13.13.1.2\" style=\"font-size:90%;\">3.14</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T7.14.14.14.2\">\n<span class=\"ltx_text\" id=\"S5.T7.14.14.14.2.1\" style=\"font-size:90%;\">86.19</span><span class=\"ltx_text\" id=\"S5.T7.14.14.14.2.2\" style=\"font-size:90%;\">10.78</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.15.15.15.3\">\n<span class=\"ltx_text\" id=\"S5.T7.15.15.15.3.1\" style=\"font-size:90%;\">27.59</span><span class=\"ltx_text\" id=\"S5.T7.15.15.15.3.2\" style=\"font-size:90%;\">4.63</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T7.16.16.16.4\">\n<span class=\"ltx_text\" id=\"S5.T7.16.16.16.4.1\" style=\"font-size:90%;\">47.77</span><span class=\"ltx_text\" id=\"S5.T7.16.16.16.4.2\" style=\"font-size:90%;\">3.22</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.17.17.17.5\">\n<span class=\"ltx_text\" id=\"S5.T7.17.17.17.5.1\" style=\"font-size:90%;\">45.75</span><span class=\"ltx_text\" id=\"S5.T7.17.17.17.5.2\" style=\"font-size:90%;\">3.95</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T7.18.18.18.6\">\n<span class=\"ltx_text\" id=\"S5.T7.18.18.18.6.1\" style=\"font-size:90%;\">51.41</span><span class=\"ltx_text\" id=\"S5.T7.18.18.18.6.2\" style=\"font-size:90%;\">3.74</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.24.24.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T7.24.24.24.7\"><span class=\"ltx_text\" id=\"S5.T7.24.24.24.7.1\" style=\"font-size:90%;\">Proposed</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" 
id=\"S5.T7.19.19.19.1\"><span class=\"ltx_text\" id=\"S5.T7.19.19.19.1.1\" style=\"font-size:90%;color:#FF0000;\">84.435.19</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T7.20.20.20.2\"><span class=\"ltx_text\" id=\"S5.T7.20.20.20.2.1\" style=\"font-size:90%;color:#FF0000;\">94.250.43</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T7.21.21.21.3\"><span class=\"ltx_text\" id=\"S5.T7.21.21.21.3.1\" style=\"font-size:90%;color:#FF0000;\">51.223.06</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T7.22.22.22.4\"><span class=\"ltx_text\" id=\"S5.T7.22.22.22.4.1\" style=\"font-size:90%;color:#FF0000;\">61.070.12</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T7.23.23.23.5\"><span class=\"ltx_text\" id=\"S5.T7.23.23.23.5.1\" style=\"font-size:90%;color:#FF0000;\">61.395.43</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T7.24.24.24.6\"><span class=\"ltx_text\" id=\"S5.T7.24.24.24.6.1\" style=\"font-size:90%;color:#FF0000;\">70.263.60</span></td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 7: Comparison of active sampling strategies. The accuracy reported is the average over 3 runs. The best results are shown in red and the second best results are shown in blue." |
| }, |
| "8": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T8\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Accuracy of prior pseudo-label comparison of different active strategies and label propagation methods. The accuracy reported is the average over 3 runs. The best results are shown in red and the second best results are shown in blue.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T8.30.30\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T8.30.30.31.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"2\" id=\"S5.T8.30.30.31.1.1\"><span class=\"ltx_text\" id=\"S5.T8.30.30.31.1.1.1\" style=\"font-size:90%;\">Dataset</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S5.T8.30.30.31.1.2\"><span class=\"ltx_text\" id=\"S5.T8.30.30.31.1.2.1\" style=\"font-size:90%;\">CIFAR-10</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.30.30.32.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" colspan=\"2\" id=\"S5.T8.30.30.32.2.1\"><span class=\"ltx_text\" id=\"S5.T8.30.30.32.2.1.1\" style=\"font-size:90%;\"># Labels</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.30.30.32.2.2\"><span class=\"ltx_text\" id=\"S5.T8.30.30.32.2.2.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.30.30.32.2.3\"><span class=\"ltx_text\" id=\"S5.T8.30.30.32.2.3.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.30.30.32.2.4\"><span class=\"ltx_text\" id=\"S5.T8.30.30.32.2.4.1\" style=\"font-size:90%;\">40</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.30.30.33.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.30.30.33.3.1\"><span class=\"ltx_text\" id=\"S5.T8.30.30.33.3.1.1\" style=\"font-size:90%;\">Propagation</span></th>\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.30.30.33.3.2\"><span class=\"ltx_text\" id=\"S5.T8.30.30.33.3.2.1\" style=\"font-size:90%;\">Sampling</span></th>\n<td class=\"ltx_td\" id=\"S5.T8.30.30.33.3.3\"></td>\n<td class=\"ltx_td\" id=\"S5.T8.30.30.33.3.4\"></td>\n<td class=\"ltx_td\" id=\"S5.T8.30.30.33.3.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.T8.3.3.3.4\"><span class=\"ltx_text\" id=\"S5.T8.3.3.3.4.1\" style=\"font-size:90%;\">LLGC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.T8.3.3.3.5\"><span class=\"ltx_text\" id=\"S5.T8.3.3.3.5.1\" style=\"font-size:90%;\">Random</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T8.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T8.1.1.1.1.1\" style=\"font-size:90%;\">53.42</span><span class=\"ltx_text\" id=\"S5.T8.1.1.1.1.2\" style=\"font-size:90%;\">5.36</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T8.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S5.T8.2.2.2.2.1\" style=\"font-size:90%;\">59.91</span><span class=\"ltx_text\" id=\"S5.T8.2.2.2.2.2\" style=\"font-size:90%;\">2.58</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T8.3.3.3.3\">\n<span class=\"ltx_text\" id=\"S5.T8.3.3.3.3.1\" style=\"font-size:90%;\">69.59</span><span class=\"ltx_text\" id=\"S5.T8.3.3.3.3.2\" style=\"font-size:90%;\">2.35</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.6.6.6.4\"><span class=\"ltx_text\" id=\"S5.T8.6.6.6.4.1\" style=\"font-size:90%;\">LLGC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.6.6.6.5\"><span class=\"ltx_text\" id=\"S5.T8.6.6.6.5.1\" style=\"font-size:90%;\">Coreset</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.4.4.4.1\">\n<span class=\"ltx_text\" id=\"S5.T8.4.4.4.1.1\" 
style=\"font-size:90%;\">31.57</span><span class=\"ltx_text\" id=\"S5.T8.4.4.4.1.2\" style=\"font-size:90%;\">6.31</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.5.5.5.2\">\n<span class=\"ltx_text\" id=\"S5.T8.5.5.5.2.1\" style=\"font-size:90%;\">56.06</span><span class=\"ltx_text\" id=\"S5.T8.5.5.5.2.2\" style=\"font-size:90%;\">3.35</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.6.6.6.3\">\n<span class=\"ltx_text\" id=\"S5.T8.6.6.6.3.1\" style=\"font-size:90%;\">74.47</span><span class=\"ltx_text\" id=\"S5.T8.6.6.6.3.2\" style=\"font-size:90%;\">5.17</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.9.9.9.4\"><span class=\"ltx_text\" id=\"S5.T8.9.9.9.4.1\" style=\"font-size:90%;\">LLGC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.9.9.9.5\"><span class=\"ltx_text\" id=\"S5.T8.9.9.9.5.1\" style=\"font-size:90%;\">K-medoids</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.7.7.7.1\">\n<span class=\"ltx_text\" id=\"S5.T8.7.7.7.1.1\" style=\"font-size:90%;\">45.53</span><span class=\"ltx_text\" id=\"S5.T8.7.7.7.1.2\" style=\"font-size:90%;\">2.96</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.8.8.8.2\">\n<span class=\"ltx_text\" id=\"S5.T8.8.8.8.2.1\" style=\"font-size:90%;\">62.28</span><span class=\"ltx_text\" id=\"S5.T8.8.8.8.2.2\" style=\"font-size:90%;\">5.39</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.9.9.9.3\">\n<span class=\"ltx_text\" id=\"S5.T8.9.9.9.3.1\" style=\"font-size:90%;\">71.51</span><span class=\"ltx_text\" id=\"S5.T8.9.9.9.3.2\" style=\"font-size:90%;\">1.36</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.12.12.12.4\"><span class=\"ltx_text\" id=\"S5.T8.12.12.12.4.1\" style=\"font-size:90%;\">LLGC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S5.T8.12.12.12.5\"><span class=\"ltx_text\" id=\"S5.T8.12.12.12.5.1\" style=\"font-size:90%;\">USL</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.10.10.10.1\">\n<span class=\"ltx_text\" id=\"S5.T8.10.10.10.1.1\" style=\"font-size:90%;\">53.90</span><span class=\"ltx_text\" id=\"S5.T8.10.10.10.1.2\" style=\"font-size:90%;\">0.56</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.11.11.11.2\">\n<span class=\"ltx_text\" id=\"S5.T8.11.11.11.2.1\" style=\"font-size:90%;\">63.47</span><span class=\"ltx_text\" id=\"S5.T8.11.11.11.2.2\" style=\"font-size:90%;\">3.19</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.12.12.12.3\">\n<span class=\"ltx_text\" id=\"S5.T8.12.12.12.3.1\" style=\"font-size:90%;\">70.82</span><span class=\"ltx_text\" id=\"S5.T8.12.12.12.3.2\" style=\"font-size:90%;\">1.76</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.15.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.15.15.15.4\"><span class=\"ltx_text\" id=\"S5.T8.15.15.15.4.1\" style=\"font-size:90%;\">LLGC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.15.15.15.5\"><span class=\"ltx_text\" id=\"S5.T8.15.15.15.5.1\" style=\"font-size:90%;\">Proposed</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.13.13.13.1\"><span class=\"ltx_text\" id=\"S5.T8.13.13.13.1.1\" style=\"font-size:90%;color:#0000FF;\">62.943.47</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.14.14.14.2\"><span class=\"ltx_text\" id=\"S5.T8.14.14.14.2.1\" style=\"font-size:90%;color:#0000FF;\">71.611.64</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.15.15.15.3\"><span class=\"ltx_text\" id=\"S5.T8.15.15.15.3.1\" style=\"font-size:90%;color:#FF0000;\">75.501.40</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.18.18.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.18.18.18.4\"><span class=\"ltx_text\" id=\"S5.T8.18.18.18.4.1\" 
style=\"font-size:90%;\">Proposed</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.18.18.18.5\"><span class=\"ltx_text\" id=\"S5.T8.18.18.18.5.1\" style=\"font-size:90%;\">Random</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.16.16.16.1\">\n<span class=\"ltx_text\" id=\"S5.T8.16.16.16.1.1\" style=\"font-size:90%;\">52.12</span><span class=\"ltx_text\" id=\"S5.T8.16.16.16.1.2\" style=\"font-size:90%;\">9.10</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.17.17.17.2\">\n<span class=\"ltx_text\" id=\"S5.T8.17.17.17.2.1\" style=\"font-size:90%;\">60.32</span><span class=\"ltx_text\" id=\"S5.T8.17.17.17.2.2\" style=\"font-size:90%;\">4.67</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.18.18.18.3\">\n<span class=\"ltx_text\" id=\"S5.T8.18.18.18.3.1\" style=\"font-size:90%;\">69.40</span><span class=\"ltx_text\" id=\"S5.T8.18.18.18.3.2\" style=\"font-size:90%;\">2.49</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.21.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.21.21.21.4\"><span class=\"ltx_text\" id=\"S5.T8.21.21.21.4.1\" style=\"font-size:90%;\">Proposed</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.21.21.21.5\"><span class=\"ltx_text\" id=\"S5.T8.21.21.21.5.1\" style=\"font-size:90%;\">Coreset</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.19.19.19.1\">\n<span class=\"ltx_text\" id=\"S5.T8.19.19.19.1.1\" style=\"font-size:90%;\">37.43</span><span class=\"ltx_text\" id=\"S5.T8.19.19.19.1.2\" style=\"font-size:90%;\">3.24</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.20.20.20.2\">\n<span class=\"ltx_text\" id=\"S5.T8.20.20.20.2.1\" style=\"font-size:90%;\">59.56</span><span class=\"ltx_text\" id=\"S5.T8.20.20.20.2.2\" style=\"font-size:90%;\">3.30</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.21.21.21.3\">\n<span class=\"ltx_text\" id=\"S5.T8.21.21.21.3.1\" 
style=\"font-size:90%;\">73.82</span><span class=\"ltx_text\" id=\"S5.T8.21.21.21.3.2\" style=\"font-size:90%;\">3.01</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.24.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.24.24.24.4\"><span class=\"ltx_text\" id=\"S5.T8.24.24.24.4.1\" style=\"font-size:90%;\">Proposed</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.24.24.24.5\"><span class=\"ltx_text\" id=\"S5.T8.24.24.24.5.1\" style=\"font-size:90%;\">K-medoids</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.22.22.22.1\">\n<span class=\"ltx_text\" id=\"S5.T8.22.22.22.1.1\" style=\"font-size:90%;\">44.90</span><span class=\"ltx_text\" id=\"S5.T8.22.22.22.1.2\" style=\"font-size:90%;\">1.88</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.23.23.23.2\">\n<span class=\"ltx_text\" id=\"S5.T8.23.23.23.2.1\" style=\"font-size:90%;\">62.12</span><span class=\"ltx_text\" id=\"S5.T8.23.23.23.2.2\" style=\"font-size:90%;\">6.97</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.24.24.24.3\">\n<span class=\"ltx_text\" id=\"S5.T8.24.24.24.3.1\" style=\"font-size:90%;\">69.62</span><span class=\"ltx_text\" id=\"S5.T8.24.24.24.3.2\" style=\"font-size:90%;\">0.83</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.27.27.27\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.27.27.27.4\"><span class=\"ltx_text\" id=\"S5.T8.27.27.27.4.1\" style=\"font-size:90%;\">Proposed</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T8.27.27.27.5\"><span class=\"ltx_text\" id=\"S5.T8.27.27.27.5.1\" style=\"font-size:90%;\">USL</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.25.25.25.1\">\n<span class=\"ltx_text\" id=\"S5.T8.25.25.25.1.1\" style=\"font-size:90%;\">54.20</span><span class=\"ltx_text\" id=\"S5.T8.25.25.25.1.2\" style=\"font-size:90%;\">0.43</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T8.26.26.26.2\">\n<span class=\"ltx_text\" id=\"S5.T8.26.26.26.2.1\" style=\"font-size:90%;\">57.62</span><span class=\"ltx_text\" id=\"S5.T8.26.26.26.2.2\" style=\"font-size:90%;\">5.10</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T8.27.27.27.3\">\n<span class=\"ltx_text\" id=\"S5.T8.27.27.27.3.1\" style=\"font-size:90%;\">67.61</span><span class=\"ltx_text\" id=\"S5.T8.27.27.27.3.2\" style=\"font-size:90%;\">1.86</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.30.30.30\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S5.T8.30.30.30.4\"><span class=\"ltx_text\" id=\"S5.T8.30.30.30.4.1\" style=\"font-size:90%;\">Proposed</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S5.T8.30.30.30.5\"><span class=\"ltx_text\" id=\"S5.T8.30.30.30.5.1\" style=\"font-size:90%;\">Proposed</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T8.28.28.28.1\"><span class=\"ltx_text\" id=\"S5.T8.28.28.28.1.1\" style=\"font-size:90%;color:#FF0000;\">71.414.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T8.29.29.29.2\"><span class=\"ltx_text\" id=\"S5.T8.29.29.29.2.1\" style=\"font-size:90%;color:#FF0000;\">72.630.96</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T8.30.30.30.3\"><span class=\"ltx_text\" id=\"S5.T8.30.30.30.3.1\" style=\"font-size:90%;color:#0000FF;\">74.911.66</span></td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 8: Accuracy of prior pseudo-labels under different active sampling strategies and label propagation methods. The accuracy reported is the average over 3 runs. The best results are shown in red and the second best results are shown in blue." |
| }, |
| "9": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T9\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Ablation study of on CIFAR-10, accuracy of and Expected calibration error (ECE) reported is the average over 5 runs.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T9.23.23\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T9.5.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T9.5.5.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T9.5.5.1.3\"><span class=\"ltx_text\" id=\"S5.T9.5.5.1.3.1\" style=\"font-size:90%;\">ECE</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S5.T9.5.5.1.1\">\n<span class=\"ltx_text\" id=\"S5.T9.5.5.1.1.1\" style=\"font-size:90%;\">Accuracy of </span>\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.23.23.20.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T9.23.23.20.1.1\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.1.1\" style=\"font-size:90%;\"># Labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T9.23.23.20.1.2\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.2.1\" style=\"font-size:90%;\">10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T9.23.23.20.1.3\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.3.1\" style=\"font-size:90%;\">20</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T9.23.23.20.1.4\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.4.1\" style=\"font-size:90%;\">40</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T9.23.23.20.1.5\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.5.1\" style=\"font-size:90%;\">10</span></th>\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_column\" id=\"S5.T9.23.23.20.1.6\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.6.1\" style=\"font-size:90%;\">20</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T9.23.23.20.1.7\"><span class=\"ltx_text\" id=\"S5.T9.23.23.20.1.7.1\" style=\"font-size:90%;\">40</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T9.11.11.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T9.11.11.7.7\"><span class=\"ltx_text\" id=\"S5.T9.11.11.7.7.1\" style=\"font-size:90%;\">K=[10,10,10,10,10,10]</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T9.6.6.2.1\">\n<span class=\"ltx_text\" id=\"S5.T9.6.6.2.1.1\" style=\"font-size:90%;\">0.278</span><span class=\"ltx_text\" id=\"S5.T9.6.6.2.1.2\" style=\"font-size:90%;\">±0.039</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T9.7.7.3.2\">\n<span class=\"ltx_text\" id=\"S5.T9.7.7.3.2.1\" style=\"font-size:90%;\">0.166</span><span class=\"ltx_text\" id=\"S5.T9.7.7.3.2.2\" style=\"font-size:90%;\">±0.038</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T9.8.8.4.3\">\n<span class=\"ltx_text\" id=\"S5.T9.8.8.4.3.1\" style=\"font-size:90%;\">0.112</span><span class=\"ltx_text\" id=\"S5.T9.8.8.4.3.2\" style=\"font-size:90%;\">±0.021</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T9.9.9.5.4\">\n<span class=\"ltx_text\" id=\"S5.T9.9.9.5.4.1\" style=\"font-size:90%;\">70.50</span><span class=\"ltx_text\" id=\"S5.T9.9.9.5.4.2\" style=\"font-size:90%;\">±4.13</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T9.10.10.6.5\">\n<span class=\"ltx_text\" id=\"S5.T9.10.10.6.5.1\" style=\"font-size:90%;\">70.66</span><span class=\"ltx_text\" id=\"S5.T9.10.10.6.5.2\" style=\"font-size:90%;\">±4.05</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T9.11.11.7.6\">\n<span class=\"ltx_text\" 
id=\"S5.T9.11.11.7.6.1\" style=\"font-size:90%;\">70.81</span><span class=\"ltx_text\" id=\"S5.T9.11.11.7.6.2\" style=\"font-size:90%;\">2.13</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.17.17.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T9.17.17.13.7\"><span class=\"ltx_text\" id=\"S5.T9.17.17.13.7.1\" style=\"font-size:90%;\">K=[10,20,30,40,50,60]</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T9.12.12.8.1\">\n<span class=\"ltx_text\" id=\"S5.T9.12.12.8.1.1\" style=\"font-size:90%;\">0.237</span><span class=\"ltx_text\" id=\"S5.T9.12.12.8.1.2\" style=\"font-size:90%;\">0.041</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T9.13.13.9.2\"><span class=\"ltx_text\" id=\"S5.T9.13.13.9.2.1\" style=\"font-size:90%;color:#FF0000;\">0.0870.014</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T9.14.14.10.3\"><span class=\"ltx_text\" id=\"S5.T9.14.14.10.3.1\" style=\"font-size:90%;color:#FF0000;\">0.0670.013</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T9.15.15.11.4\"><span class=\"ltx_text\" id=\"S5.T9.15.15.11.4.1\" style=\"font-size:90%;color:#FF0000;\">71.414.10</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T9.16.16.12.5\"><span class=\"ltx_text\" id=\"S5.T9.16.16.12.5.1\" style=\"font-size:90%;color:#FF0000;\">72.630.96</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T9.17.17.13.6\"><span class=\"ltx_text\" id=\"S5.T9.17.17.13.6.1\" style=\"font-size:90%;color:#FF0000;\">74.911.66</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.23.23.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T9.23.23.19.7\"><span class=\"ltx_text\" id=\"S5.T9.23.23.19.7.1\" style=\"font-size:90%;\">K=[10,60,60,60,60,60]</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T9.18.18.14.1\"><span class=\"ltx_text\" id=\"S5.T9.18.18.14.1.1\" style=\"font-size:90%;color:#FF0000;\">0.2330.040</span></td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_b\" id=\"S5.T9.19.19.15.2\">\n<span class=\"ltx_text\" id=\"S5.T9.19.19.15.2.1\" style=\"font-size:90%;\">0.133</span><span class=\"ltx_text\" id=\"S5.T9.19.19.15.2.2\" style=\"font-size:90%;\">0.037</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T9.20.20.16.3\">\n<span class=\"ltx_text\" id=\"S5.T9.20.20.16.3.1\" style=\"font-size:90%;\">0.082</span><span class=\"ltx_text\" id=\"S5.T9.20.20.16.3.2\" style=\"font-size:90%;\">0.011</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T9.21.21.17.4\">\n<span class=\"ltx_text\" id=\"S5.T9.21.21.17.4.1\" style=\"font-size:90%;\">70.53</span><span class=\"ltx_text\" id=\"S5.T9.21.21.17.4.2\" style=\"font-size:90%;\">4.09</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T9.22.22.18.5\">\n<span class=\"ltx_text\" id=\"S5.T9.22.22.18.5.1\" style=\"font-size:90%;\">70.46</span><span class=\"ltx_text\" id=\"S5.T9.22.22.18.5.2\" style=\"font-size:90%;\">2.82</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T9.23.23.19.6\">\n<span class=\"ltx_text\" id=\"S5.T9.23.23.19.6.1\" style=\"font-size:90%;\">73.67</span><span class=\"ltx_text\" id=\"S5.T9.23.23.19.6.2\" style=\"font-size:90%;\">1.67</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 9: Ablation study of K on CIFAR-10. The accuracy of prior pseudo-labels and the expected calibration error (ECE) reported are averages over 5 runs." |
| }, |
| "10": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T10\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 10: </span>The impact of different self-supervised learning tasks on prior pseudo-label accuracy on CIFAR-10. Results are averaged over 3 runs. The best results are shown in red and the second best results are shown in blue.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T10.15.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T10.15.15.16.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T10.15.15.16.1.1\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.1.1\" style=\"font-size:90%;\">Self-sup. Task</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T10.15.15.16.1.2\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.2.1\" style=\"font-size:90%;\">Pre-trained Data</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T10.15.15.16.1.3\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.3.1\" style=\"font-size:90%;\">Model</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T10.15.15.16.1.4\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.4.1\" style=\"font-size:90%;\">10 Labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T10.15.15.16.1.5\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.5.1\" style=\"font-size:90%;\">20 Labels</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T10.15.15.16.1.6\"><span class=\"ltx_text\" id=\"S5.T10.15.15.16.1.6.1\" style=\"font-size:90%;\">40 Labels</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T10.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" 
id=\"S5.T10.3.3.3.4\">\n<span class=\"ltx_text\" id=\"S5.T10.3.3.3.4.1\" style=\"font-size:90%;\">Simsiam\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T10.3.3.3.4.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.04560v3#bib.bib26\" title=\"\">26</a><span class=\"ltx_text\" id=\"S5.T10.3.3.3.4.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T10.3.3.3.5\"><span class=\"ltx_text\" id=\"S5.T10.3.3.3.5.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T10.3.3.3.6\"><span class=\"ltx_text\" id=\"S5.T10.3.3.3.6.1\" style=\"font-size:90%;\">WRN-28-2</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T10.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T10.1.1.1.1.1\" style=\"font-size:90%;\">71.41</span><span class=\"ltx_text\" id=\"S5.T10.1.1.1.1.2\" style=\"font-size:90%;\">\u00b14.10</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T10.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S5.T10.2.2.2.2.1\" style=\"font-size:90%;\">72.63</span><span class=\"ltx_text\" id=\"S5.T10.2.2.2.2.2\" style=\"font-size:90%;\">\u00b10.96</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T10.3.3.3.3\">\n<span class=\"ltx_text\" id=\"S5.T10.3.3.3.3.1\" style=\"font-size:90%;\">74.91</span><span class=\"ltx_text\" id=\"S5.T10.3.3.3.3.2\" style=\"font-size:90%;\">\u00b11.66</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.6.6.6.4\">\n<span class=\"ltx_text\" id=\"S5.T10.6.6.6.4.1\" style=\"font-size:90%;\">SimCLR\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T10.6.6.6.4.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.04560v3#bib.bib24\" 
title=\"\">24</a><span class=\"ltx_text\" id=\"S5.T10.6.6.6.4.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.6.6.6.5\"><span class=\"ltx_text\" id=\"S5.T10.6.6.6.5.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.6.6.6.6\"><span class=\"ltx_text\" id=\"S5.T10.6.6.6.6.1\" style=\"font-size:90%;\">WRN-28-2</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.4.4.4.1\"><span class=\"ltx_text\" id=\"S5.T10.4.4.4.1.1\" style=\"font-size:90%;color:#0000FF;\">72.63\u00b13.40</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.5.5.5.2\"><span class=\"ltx_text\" id=\"S5.T10.5.5.5.2.1\" style=\"font-size:90%;color:#0000FF;\">75.95\u00b10.13</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.6.6.6.3\"><span class=\"ltx_text\" id=\"S5.T10.6.6.6.3.1\" style=\"font-size:90%;color:#0000FF;\">78.45\u00b10.28</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.9.9.9.4\">\n<span class=\"ltx_text\" id=\"S5.T10.9.9.9.4.1\" style=\"font-size:90%;\">BYOL\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T10.9.9.9.4.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.04560v3#bib.bib25\" title=\"\">25</a><span class=\"ltx_text\" id=\"S5.T10.9.9.9.4.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.9.9.9.5\"><span class=\"ltx_text\" id=\"S5.T10.9.9.9.5.1\" style=\"font-size:90%;\">ImageNet</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.9.9.9.6\"><span class=\"ltx_text\" id=\"S5.T10.9.9.9.6.1\" style=\"font-size:90%;\">ResNet-50</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.7.7.7.1\">\n<span class=\"ltx_text\" id=\"S5.T10.7.7.7.1.1\" 
style=\"font-size:90%;\">49.27</span><span class=\"ltx_text\" id=\"S5.T10.7.7.7.1.2\" style=\"font-size:90%;\">\u00b13.89</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.8.8.8.2\">\n<span class=\"ltx_text\" id=\"S5.T10.8.8.8.2.1\" style=\"font-size:90%;\">64.13</span><span class=\"ltx_text\" id=\"S5.T10.8.8.8.2.2\" style=\"font-size:90%;\">\u00b10.55</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.9.9.9.3\">\n<span class=\"ltx_text\" id=\"S5.T10.9.9.9.3.1\" style=\"font-size:90%;\">67.80</span><span class=\"ltx_text\" id=\"S5.T10.9.9.9.3.2\" style=\"font-size:90%;\">\u00b11.61</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.12.12.12.4\">\n<span class=\"ltx_text\" id=\"S5.T10.12.12.12.4.1\" style=\"font-size:90%;\">MAE\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T10.12.12.12.4.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.04560v3#bib.bib20\" title=\"\">20</a><span class=\"ltx_text\" id=\"S5.T10.12.12.12.4.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.12.12.12.5\"><span class=\"ltx_text\" id=\"S5.T10.12.12.12.5.1\" style=\"font-size:90%;\">ImageNet</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T10.12.12.12.6\"><span class=\"ltx_text\" id=\"S5.T10.12.12.12.6.1\" style=\"font-size:90%;\">ViT-B/16</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.10.10.10.1\">\n<span class=\"ltx_text\" id=\"S5.T10.10.10.10.1.1\" style=\"font-size:90%;\">29.18</span><span class=\"ltx_text\" id=\"S5.T10.10.10.10.1.2\" style=\"font-size:90%;\">\u00b10.54</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.11.11.11.2\">\n<span class=\"ltx_text\" id=\"S5.T10.11.11.11.2.1\" style=\"font-size:90%;\">32.61</span><span class=\"ltx_text\" id=\"S5.T10.11.11.11.2.2\" 
style=\"font-size:90%;\">\u00b10.24</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T10.12.12.12.3\">\n<span class=\"ltx_text\" id=\"S5.T10.12.12.12.3.1\" style=\"font-size:90%;\">35.98</span><span class=\"ltx_text\" id=\"S5.T10.12.12.12.3.2\" style=\"font-size:90%;\">\u00b10.23</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.15.15.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T10.15.15.15.4\">\n<span class=\"ltx_text\" id=\"S5.T10.15.15.15.4.1\" style=\"font-size:90%;\">CLIP\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T10.15.15.15.4.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.04560v3#bib.bib52\" title=\"\">52</a><span class=\"ltx_text\" id=\"S5.T10.15.15.15.4.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T10.15.15.15.5\"><span class=\"ltx_text\" id=\"S5.T10.15.15.15.5.1\" style=\"font-size:90%;\">WebImageText</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T10.15.15.15.6\"><span class=\"ltx_text\" id=\"S5.T10.15.15.15.6.1\" style=\"font-size:90%;\">ViT-B/32</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T10.13.13.13.1\"><span class=\"ltx_text\" id=\"S5.T10.13.13.13.1.1\" style=\"font-size:90%;color:#FF0000;\">75.94\u00b10.07</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T10.14.14.14.2\"><span class=\"ltx_text\" id=\"S5.T10.14.14.14.2.1\" style=\"font-size:90%;color:#FF0000;\">80.77\u00b12.79</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T10.15.15.15.3\"><span class=\"ltx_text\" id=\"S5.T10.15.15.15.3.1\" style=\"font-size:90%;color:#FF0000;\">84.19\u00b10.32</span></td>\n</tr>\n</tbody>\n</table>\n</figure>", |
| "capture": "Table 10: The impact of different self-supervised learning tasks on prior pseudo-label accuracy on CIFAR-10. Results are averaged over 3 runs. The best results are shown in red and the second best results are shown in blue." |
| } |
| }, |
| "image_paths": { |
| "1(a)": { |
| "figure_path": "2203.04560v3_figure_1(a).png", |
| "caption": "(a) SelfMatch\nFigure 1: Pipeline of AS3L (Ours) and the Existing Self-Semi-Supervised Learning Approach (SelfMatch) [12]. (a) SelfMatch involves self-supervised pre-training followed by semi-supervised fine-tuning, relying on weight initialization to benefit semi-supervised learning from self-supervised pre-training. (b) Beyond weight initialization, AS3L (ours) improves semi-supervised learning by selecting labeled samples and generating prior pseudo-labels based on self-supervised features, providing a better starting point for subsequent semi-supervised training.", |
| "url": "http://arxiv.org/html/2203.04560v3/extracted/5972647/selfm_intro_frame.png" |
| }, |
| "1(b)": { |
| "figure_path": "2203.04560v3_figure_1(b).png", |
| "caption": "(b) AS3L (ours)\nFigure 1: Pipeline of AS3L (Ours) and the Existing Self-Semi-Supervised Learning Approach (SelfMatch) [12]. (a) SelfMatch involves self-supervised pre-training followed by semi-supervised fine-tuning, relying on weight initialization to benefit semi-supervised learning from self-supervised pre-training. (b) Beyond weight initialization, AS3L (ours) improves semi-supervised learning by selecting labeled samples and generating prior pseudo-labels based on self-supervised features, providing a better starting point for subsequent semi-supervised training.", |
| "url": "http://arxiv.org/html/2203.04560v3/extracted/5972647/as3l_intro_frame.png" |
| }, |
| "2(a)": { |
| "figure_path": "2203.04560v3_figure_2(a).png", |
| "caption": "(a) CIFAR-100\nFigure 2: Consistency of sample labels with neighboring sample labels in self-supervised and semi-supervised feature spaces. The model was initialized with the self-supervised pre-training weights and then further trained using FlexMatch.", |
| "url": "http://arxiv.org/html/2203.04560v3/x1.png" |
| }, |
| "2(b)": { |
| "figure_path": "2203.04560v3_figure_2(b).png", |
| "caption": "(b) CIFAR-10\nFigure 2: Consistency of sample labels with neighboring sample labels in self-supervised and semi-supervised feature spaces. The model was initialized with the self-supervised pre-training weights and then further trained using FlexMatch.", |
| "url": "http://arxiv.org/html/2203.04560v3/x2.png" |
| }, |
| "3": { |
| "figure_path": "2203.04560v3_figure_3.png", |
| "caption": "Figure 3: Test accuracy of semi-supervised models initialized with self-supervised pre-training versus random initialization. Semi-supervised training uses 10 labeled samples on CIFAR-10; the semi-supervised method is FlexMatch.", |
| "url": "http://arxiv.org/html/2203.04560v3/x3.png" |
| }, |
| "4": { |
| "figure_path": "2203.04560v3_figure_4.png", |
| "caption": "Figure 4: The framework of our Active Self-Semi-Supervised Learning (AS3L). AS3L consists of four components: (1) Obtaining self-supervised features f_{self}, following [26]; (2) Selecting labeled samples based on f_{self} (Sec. 4.4); (3) Label propagation based on clusters to get PPL y_{prior} (Sec. 4.3); (4) Semi-supervised training guided by y_{prior} (Sec. 4.2).", |
| "url": "http://arxiv.org/html/2203.04560v3/x4.png" |
| }, |
| "5(a)": { |
| "figure_path": "2203.04560v3_figure_5(a).png", |
| "caption": "(a) Semi-Supervised feature f_{semi}\nFigure 5: T-SNE visualization of semi-supervised and self-supervised features, where the semi-supervised features are trained with 40 labels on CIFAR-10 following our method. Self-supervised features seem to be more loosely clustered, and the boundaries between clusters are not as well defined.", |
| "url": "http://arxiv.org/html/2203.04560v3/extracted/5972647/semi_sup_feas.png" |
| }, |
| "5(b)": { |
| "figure_path": "2203.04560v3_figure_5(b).png", |
| "caption": "(b) Self-Supervised feature f_{self}\nFigure 5: T-SNE visualization of semi-supervised and self-supervised features, where the semi-supervised features are trained with 40 labels on CIFAR-10 following our method. Self-supervised features seem to be more loosely clustered, and the boundaries between clusters are not as well defined.", |
| "url": "http://arxiv.org/html/2203.04560v3/extracted/5972647/self_sup_feas.png" |
| }, |
| "6(a)": { |
| "figure_path": "2203.04560v3_figure_6(a).png", |
| "caption": "(a) 10 labels\nFigure 6: Test accuracy of semi-supervised training on CIFAR-10. In all experiments, the same self-supervised pre-training weight initialization is applied. Without PPL indicates the absence of PPL. Change at 5 epoch and change at 60 epoch refer to the utilization of PPL according to Eq. (5), with T set to 5 epochs and 60 epochs, respectively. Direct PPL signifies using PPL in all training iterations.", |
| "url": "http://arxiv.org/html/2203.04560v3/x5.png" |
| }, |
| "6(b)": { |
| "figure_path": "2203.04560v3_figure_6(b).png", |
| "caption": "(b) 40 labels\nFigure 6: Test accuracy of semi-supervised training on CIFAR-10. In all experiments, the same self-supervised pre-training weight initialization is applied. Without PPL indicates the absence of PPL. Change at 5 epoch and change at 60 epoch refer to the utilization of PPL according to Eq. (5), with T set to 5 epochs and 60 epochs, respectively. Direct PPL signifies using PPL in all training iterations.", |
| "url": "http://arxiv.org/html/2203.04560v3/x6.png" |
| }, |
| "7(a)": { |
| "figure_path": "2203.04560v3_figure_7(a).png", |
| "caption": "(a) Accuracy of prior Pseudo-labels\nFigure 7: The impact of the number of clusters C on the active learning strategy. Class coverage is robust to C, while a larger C means more accurate y_{prior}.", |
| "url": "http://arxiv.org/html/2203.04560v3/x7.png" |
| }, |
| "7(b)": { |
| "figure_path": "2203.04560v3_figure_7(b).png", |
| "caption": "(b) Class Coverage\nFigure 7: The impact of the number of clusters C on the active learning strategy. Class coverage is robust to C, while a larger C means more accurate y_{prior}.", |
| "url": "http://arxiv.org/html/2203.04560v3/x8.png" |
| }, |
| "8(a)": { |
| "figure_path": "2203.04560v3_figure_8(a).png", |
| "caption": "(a) CIFAR-10\nFigure 8: The relationship between distance and dominant labels in the self-supervised feature space, where the model is trained by self-supervised learning with Simsiam [26].", |
| "url": "http://arxiv.org/html/2203.04560v3/x9.png" |
| }, |
| "8(b)": { |
| "figure_path": "2203.04560v3_figure_8(b).png", |
| "caption": "(b) CIFAR-100\nFigure 8: The relationship between distance and dominant labels in the self-supervised feature space, where the model is trained by self-supervised learning with Simsiam [26].", |
| "url": "http://arxiv.org/html/2203.04560v3/x10.png" |
| } |
| }, |
| "validation": true, |
| "references": [], |
| "url": "http://arxiv.org/html/2203.04560v3" |
| } |