diff --git "a/title_30K/test_title_long_2404.16348v2.json" "b/title_30K/test_title_long_2404.16348v2.json" new file mode 100644--- /dev/null +++ "b/title_30K/test_title_long_2404.16348v2.json" @@ -0,0 +1,103 @@ +{ + "url": "http://arxiv.org/abs/2404.16348v2", + "title": "Dual Expert Distillation Network for Generalized Zero-Shot Learning", + "abstract": "Zero-shot learning has consistently yielded remarkable progress via modeling\nnuanced one-to-one visual-attribute correlation. Existing studies resort to\nrefining a uniform mapping function to align and correlate the sample regions\nand subattributes, ignoring two crucial issues: 1) the inherent asymmetry of\nattributes; and 2) the unutilized channel information. This paper addresses\nthese issues by introducing a simple yet effective approach, dubbed Dual Expert\nDistillation Network (DEDN), where two experts are dedicated to coarse- and\nfine-grained visual-attribute modeling, respectively. Concretely, one coarse\nexpert, namely cExp, has a complete perceptual scope to coordinate\nvisual-attribute similarity metrics across dimensions, and moreover, another\nfine expert, namely fExp, consists of multiple specialized subnetworks, each\ncorresponds to an exclusive set of attributes. Two experts cooperatively\ndistill from each other to reach a mutual agreement during training. 
Meanwhile,\nwe further equip DEDN with a newly designed backbone network, i.e., Dual\nAttention Network (DAN), which incorporates both region and channel attention\ninformation to fully exploit and leverage visual semantic knowledge.\nExperiments on various benchmark datasets indicate a new state-of-the-art.", + "authors": "Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao", + "published": "2024-04-25", + "updated": "2024-04-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Dual Expert Distillation Network for Generalized Zero-Shot Learning", + "main_content": "Introduction Recognizing unknown categories in the open environment is a critical challenge for automatic recognition systems. ZeroShot Learning (ZSL) [Lampert et al., 2009] that serves as a promising solution has received increasing attention, which is inspired by human text-to-image reasoning capabilities. The objective of ZSL is to transfer the visual knowledge of seen classes to the unseen domain by virtue of shared semantic information, thus empowering the model to recognize the unseen classes. More trickily, Generalized Zero-Shot Learning (GZSL) [Chao et al., 2016] requires recognizing samples \u2217Corresponding author: Jingcai Guo. \u2020: Equal contribution. (a) cExp (b) fExp crown eye bill \u00b7\u00b7\u00b7 belly breast wing \u00b7\u00b7\u00b7 belly wing breast \u00b7\u00b7\u00b7 torso: crown bill eye \u00b7\u00b7\u00b7 head: Figure 1: (a) cExp, also the common practice in existing works, possesses complete attribute-awareness capability yet lacks the ability to process fine-grained semantic information. (b) fExp, which consists of multiple specialized sub-networks, lacks a global perception field. from both seen and unseen classes in the inference phase. 
Mainstream studies broadly follow two routes, generative [Xian et al., 2018][Xie et al., 2022][Li et al., 2023] and embedding techniques [Zhang et al., 2017][Liu et al., 2020][Chen et al., 2021b], where most of the schemes are devoted to mining and constructing class-wise visual-attribute relations. To strengthen the fine-grained perceptual capabilities of the model, recent research has invested considerable effort into modeling local-subattribute correlations [Xie et al., 2019][Huynh and Elhamifar, 2020][Xu et al., 2020]. The motivation is to build a refined pairwise relation map via searching and binding subattributes and the corresponding region visual features (Figure 1 (a)). Despite their contribution to boosting performance, the inherent asymmetry of attributes remains undiscussed, and the channel information is not fully exploited. The asymmetry of attributes stems from the fact that 1) the semantic dimensions between attributes are heterogeneous or even antagonistic. Take the SUN dataset [Patterson and Hays, 2012] as an example, where 38 attributes (studying, playing, etc.) describe the function of one scene, while 27 attributes arXiv:2404.16348v2 [cs.CV] 29 Apr 2024 \f(trees, flowers, etc.) describe the entities in the scene. It can be obviously observed that the former are abstract and global, while the latter are concrete and local; 2) the visual features corresponding to attributes are intertwined. For example, neighboring regions tend to be more semantically similar, a phenomenon that is exacerbated by the local information fusion mechanism of the convolutional kernel, which leads to difficulties in accurately locating fine-grained attributes such as head, crown, and so on. In this paper, we revisit the task of modeling visualattribute relations from the perspective of attribute annotations. 
Given the inherent complexity of attribute descriptions, existing learning paradigms are virtually forcing a single model to undertake a multi-objective hybrid task, which is ideally appealing yet empirically challenging. Naturally, we employ the idea of divide-and-conquer to release the pressure of a single model. We meticulously decompose the hybrid task into multiple subtasks, i.e., dividing the attributes into multiple disjoint clusters and assigning specialized learnable networks to them. Our approach is referred to as, Dual Expert Distillation Network, abbreviated DEDN. As shown in Figure 1, our approach sets up two experts. cExp, in line with common practices, is equipped with complete attribute perception capability to harmonize holistic visual-attribute measure results. fExp, consists of multiple subnetworks, where each subnetwork is only responsible for capturing the characteristics of a specific attribute cluster. During the training phase, we encourage the two to learn cooperatively to compensate for their respective deficiencies in a mutually distilling manner. The decision results of the two experts are combined for final inference. For the issue of underutilized channel information, we design a novel attention network, Dual Attention Network (DAN), as the backbone. DAN employs a dual-attention mechanism that fully exploits the potential semantic knowledge of both regions and channels to facilitate more precise visual-attribute correlation metrics. To further boost performance, we present Margin-Aware Loss (MAL) as the training loss function to address the confidence imbalance between seen and unseen classes. Our contributions are summarized below: \u2022 We rethink the issue of modeling visual-attribute relations from the perspective of attribute annotations and point out that the inherent complexity of attributes is one of the major bottlenecks. 
We propose a simple yet effective strategy of establishing two experts working on distinct attribute perception scopes to learn and infer collaboratively in a complementary manner. \u2022 We present a novel attention network, dubbed DAN, which incorporates both region and channel attention information to better capture correlations between visuals and attributes. Furthermore, a new learning function named MAL is designed to balance the confidence of seen and unseen classes. \u2022 We conduct extensive experiments on mainstream evaluation datasets, and the results show that the proposed method effectively improves the performance. 2 Related Work In ZSL/GZSL, attributes are the only ties that bridge seen and unseen classes, hence exploring and constructing the link between visuals and attributes is a core subject. Existing methods fall into class-wise visual-attribute modeling, which treats both visual features and attribute vectors as a whole, and regional visual-subattribute modeling, which seeks to explore the correlation between local visual information and subattributes. 2.1 Class-wise Visual-Attribute Modeling Mainstream researches broadly follow two technical routes, generative and embedding techniques. Generative techniques utilize the latent distribution fitting ability of generative models such as GAN and VAE to implicitly learn the relationship between attributes and categories to construct hallucinatory samples of unseen classes [Xian et al., 2018][Verma et al., 2018][Felix et al., 2018][Li et al., 2019][Vyas et al., 2020][Keshari et al., 2020][Xie et al., 2022][Li et al., 2023]. The technical bottleneck of this route is the poor realism of the hallucinatory samples, thus many studies incorporate other techniques such as meta-learning [Yu et al., 2020], representation learning [Li et al., 2021][Chen et al., 2021c][Chen et al., 2021a][Han et al., 2021][Kong et al., 2022], etc. for joint training. 
Embedding techniques aim at projecting visual and attribute features to a certain space, from which the most similar semantic information is searched. In general, embedding techniques are categorized into three directions: visual-to-attribute space [Changpinyo et al., 2016][Kodirov et al., 2017][Liu et al., 2020][Chen et al., 2022a], attribute-to-visual space [Zhang et al., 2017][Annadani and Biswas, 2018], and common space [Liu et al., 2018][Jiang et al., 2019]. Researchers in the first two directions invest considerable effort in designing robust mapping functions to cope with domain shift and out-of-distribution generalization problems. The third direction centers on finding a suitable semantic space. Class-level visual-attribute modeling lacks the fine-grained perceptual ability to respond to interactions between local visual features and subattributes. 2.2 Region-wise Visual-Attribute Modeling Region-wise modeling is a promising direction in embedding techniques. Unlike other embedding approaches, region-wise modeling focuses on the correlation between local information and subattributes to build more detailed mapping functions. Models based on attention mechanisms are the dominant means in this direction, motivated by training models to search for corresponding visual features based on semantic vectors. Recent approaches include feature-to-attribute attention networks [Xie et al., 2019][Huynh and Elhamifar, 2020], bidirectional attention networks [Chen et al., 2022b], and multi-attention networks [Zhu et al., 2019]. In addition, some studies resort to prototype learning, where the goal is to explicitly learn the corresponding prototypical visual features of individual subattributes, thus aiding the model\u2019s judgment [Xu et al., 2020][Wang et al., 2021]. 
Further, modeling the topological structure between regional features with the help of graph convolution techniques also yields promising results \fcExp fExp DAN DAN Distillation MAL MAL concat W1 W2 F V CxR DxG Sr DxR Ar softmax DxR \u00a0Product&Sum Or D W3 W4 F V RxC DxG Sc DxC Ac softmax DxC \u00a0Product&Sum Oc D \u00a0Weighted&Sum O D DAN Visual Feature crown bill eye \u00b7\u00b7\u00b7 head: belly wing breast \u00b7\u00b7\u00b7 torso: crown eye bill \u00b7\u00b7\u00b7 belly breast wing \u00b7\u00b7\u00b7 Figure 2: Left: cExp possesses the scope of a holistic attribute set, while fExp consists of multiple sub-networks, each of which is responsible for the prediction of only partial attributes. We concatenate all outputs of subnetworks as the final result of fExp. Then, distillation loss is implemented to facilitate joint learning. Right: The architecture of DAN. [Xie et al., 2020][Guo et al., 2023]. While the main idea of these approaches is to design appropriate attention networks or regularization functions, ignoring the inherent complexity of attribute annotations, we provide a new perspective to think about the visual-attribute modeling problem. In addition, existing region-attribute methods, although achieving good results, neglect the utilization of channel information, and we design a new attention network that utilizes both region and channel information. 3 Methodology 3.1 Preliminary Following previous studies [Chen et al., 2022b][Li et al., 2023], we adopt a fixed feature extractor, ResNet-101 [He et al., 2016], to extract visual features. Suppose Ds = {(F s i , Y s i )} denotes the seen classes, where F s i is the visual feature and Y s i denotes its label. Note that F \u2208RC\u00d7H\u00d7W , where C, H, W are the channel number, height, and width, respectively. Similarly have Du = {(F u i , Y u i )} to denote the unseen classes. Normally, the visual features of the unseen classes are not accessible during the training phase. 
Alternatively, we have the shared attribute A \u2208RK\u00d7D, where K denotes the total number of categories, and D denotes the number of attributes. Also, we use the semantic vectors of each attribute learned by GloVe, denoted by V \u2208RD\u00d7G, where G denotes the dimension of the vector. 3.2 Overview Our approach is shown in Figure 2 (Left). First, we disassemble the attribute set into multiple clusters based on their characteristics. Then the attribute vectors and the visual feature are fed into cExp and fExp simultaneously. cExp directly computes the scores of all attributes on that visual feature, while the scores of fExp are obtained by combining the computation results of each subnetwork. We constrain the two to learn from each other using distillation loss. Meanwhile, we introduce DAN as the backbone and MAL as the optimization objective. 3.3 Dual Attention Network Firstly we introduce the proposed novel backbone network, Dual Attention Network (DAN). Mining and constructing relations between visual features and attributes is crucial for zero-shot learning. Recently many works have been devoted to modeling the association between regions and attributes, such as attention-based approaches [Xie et al., 2019][Huynh and Elhamifar, 2020][Chen et al., 2022b] and prototypebased techniques [Xu et al., 2020][Wang et al., 2021]. However, these methods only focus on the semantic information of regions and ignore the role of channels. Therefore, DAN incorporates both the attention information of regions and channels to promote the efficacy of the model in utilizing visual features. As shown in Figure 2 (Right), DAN contains two parallel components that model region-attribute and channel-attribute relations, respectively. We first introduce the region-attribute component. We have visual features F \u2208RC\u00d7H\u00d7W , which is flattened to F \u2208RC\u00d7R, where R = H \u00d7 W denotes the number of regions. 
Let W_1, W_2 \u2208 R^{G\u00d7C} denote two learnable matrices. W_1 maps the attribute vectors to the visual space and computes their similarity, expressed as: S_r = V W_1 F, (1) where S_r \u2208 R^{D\u00d7R} represents the score obtained by each attribute on each region. W_2 computes the attention weights, encouraging the model to focus on the region-attribute pairs with the highest similarity: A_r = V W_2 F / \u2211_{r\u2208R} (V W_2 F)_r, (2) where A_r \u2208 R^{D\u00d7R} denotes the weights normalized by softmax. We then obtain the weighted score matrix: O_r = \u2211_R S_r \u00d7 A_r, (3) where O_r \u2208 R^D represents the similarity score obtained by each attribute on a visual feature. Next, we introduce the channel-attribute component, which follows a similar principle. Given the transposed visual feature F \u2208 R^{R\u00d7C} and W_3, W_4 \u2208 R^{G\u00d7R}, W_3 calculates the similarity score obtained by each attribute on each channel: S_c = V W_3 F, (4) where S_c \u2208 R^{D\u00d7C}, and W_4 computes its attention weights: A_c = V W_4 F / \u2211_{c\u2208C} (V W_4 F)_c, (5) where A_c \u2208 R^{D\u00d7C}. Finally, we obtain the weighted score map: O_c = \u2211_C S_c \u00d7 A_c, (6) where O_c \u2208 R^D. We expect the final attribute scores obtained from the two branches to be semantically consistent. We therefore employ L_align, which combines a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, to align the two outputs: L_align = (1/2)(L_KL(O_r||O_c) + L_KL(O_c||O_r)) + ||O_r \u2212 O_c||_2^2, (7) where L_KL denotes the Kullback-Leibler Divergence. In the inference phase, we use the weighted sum of O_r and O_c as the final output: O = \u03bb_rc \u00d7 O_r + (1 \u2212 \u03bb_rc) \u00d7 O_c, (8) where \u03bb_rc is a hyperparameter. 
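As a rough NumPy sketch (shapes and names are our own assumptions, not the authors' implementation), the dual-attention computation of Eqs. (1)-(8) can be written as:

```python
import numpy as np

def dan_scores(F, V, W1, W2, W3, W4, lambda_rc=0.8):
    """Rough sketch of the DAN forward pass (Eqs. 1-8).

    F  : (C, R) flattened visual feature, R = H * W regions
    V  : (D, G) attribute word vectors (e.g., GloVe)
    W1, W2 : (G, C) region-branch projections
    W3, W4 : (G, R) channel-branch projections
    """
    # Region-attribute branch: similarity scores plus softmax attention over regions.
    Sr = V @ W1 @ F                               # (D, R), Eq. (1)
    Ar = np.exp(V @ W2 @ F)
    Ar = Ar / Ar.sum(axis=1, keepdims=True)       # softmax over regions, Eq. (2)
    Or = (Sr * Ar).sum(axis=1)                    # (D,), Eq. (3)

    # Channel-attribute branch: the same computation on the transposed feature.
    Ft = F.T                                      # (R, C)
    Sc = V @ W3 @ Ft                              # (D, C), Eq. (4)
    Ac = np.exp(V @ W4 @ Ft)
    Ac = Ac / Ac.sum(axis=1, keepdims=True)       # softmax over channels, Eq. (5)
    Oc = (Sc * Ac).sum(axis=1)                    # (D,), Eq. (6)

    # Weighted fusion used at inference, Eq. (8).
    return lambda_rc * Or + (1.0 - lambda_rc) * Oc
```

The two branches are symmetric: the channel branch simply reuses the region-branch computation on the transposed feature map.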
3.4 Dual Expert Distillation Network Although DAN enhances the modeling capability of the network, it is extremely challenging for a single model to simultaneously handle attributes with different semantic dimensions as well as visual features of different granularities. To this end, we propose the Dual Expert Distillation Network (DEDN) to alleviate the pressure on a single network (Figure 2 (Left)). cExp is set up with a complete attribute-aware scope, as in conventional practice. Specifically, the input of cExp is the semantic vectors of all attributes, and the output is the similarity scores of all attributes. Denoting cExp by \u03d5_ec = {W_1^ec, W_2^ec, W_3^ec, W_4^ec}, the output is defined as: O_ec = \u03d5_ec(V, F), (9) where O_ec \u2208 R^D and V \u2208 R^{D\u00d7G}. fExp consists of multiple subnetworks, each focusing on a specific attribute cluster. First, we elaborate on how the attribute clusters are divided. Since attribute annotations are manually labeled based on semantics, they are inherently clustered in nature. For example, in the SUN dataset [Patterson and Hays, 2012], the first 38 attributes describe scene functions. The division is therefore easy to perform manually, with ChatGPT [Radford et al., 2018], or with a clustering algorithm; it requires only trivial effort but is worthwhile. Assume that the attribute set is divided into Q disjoint clusters, i.e., V = {V_1 \u2208 R^{D_1\u00d7G}, V_2 \u2208 R^{D_2\u00d7G}, ..., V_Q \u2208 R^{D_Q\u00d7G}}, where D_1 + D_2 + ... + D_Q = D. Accordingly, fExp has Q subnetworks that handle these attribute clusters one-to-one. Let \u03d5_ef = {\u03d5_ef^1, \u03d5_ef^2, ..., \u03d5_ef^Q} denote fExp; the output is then defined as: O_ef = \u03d5_ef^1(V_1, F) \u2295 \u03d5_ef^2(V_2, F) \u2295 ... \u2295 \u03d5_ef^Q(V_Q, F), (10) where \u2295 denotes the concatenation operation. After that, we calculate the score of each category for training and inference. 
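A minimal sketch of the fExp output in Eq. (10), assuming each sub-network is a callable phi_q(V_q, F) that returns per-cluster attribute scores (an illustrative interface, not the authors' code):

```python
import numpy as np

def fexp_scores(clusters, F, subnets):
    """Sketch of the fExp output in Eq. (10).

    clusters : list of per-cluster attribute matrices V_q, each of shape (D_q, G)
    F        : a visual feature
    subnets  : list of callables phi_q(V_q, F) -> (D_q,) attribute scores
    The per-cluster scores are concatenated back into one D-dimensional vector.
    """
    outputs = [phi(V_q, F) for V_q, phi in zip(clusters, subnets)]
    return np.concatenate(outputs)                # (D_1 + ... + D_Q,) = (D,)
```

Because the clusters are disjoint and their sizes sum to D, the concatenated vector lines up index-for-index with the full attribute set used by cExp.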
Specifically, we compute the similarity between the output of each expert and the attributes of each category, defined as: P_ec = O_ec A^T, P_ef = O_ef A^T, (11) where P_ec, P_ef \u2208 R^K. To facilitate cooperative learning between the two expert networks, we introduce a distillation loss to constrain their semantic consistency. Concretely, the distillation loss combines a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, defined as: L_distill = (1/2)(L_KL(P_ec||P_ef) + L_KL(P_ef||P_ec)) + ||P_ec \u2212 P_ef||_2^2. (12) 3.5 Margin-Aware Loss Once the category scores are obtained, the network is optimized using the cross-entropy loss, formulated as: L_ce = \u2212log( exp(P_ec^y) / \u2211_{y_i}^{K} exp(P_ec^{y_i}) ), (13) where y is the ground truth. The loss for P_ef is computed analogously; in the following we describe P_ec only. Due to the lack of access to samples from the unseen classes during the training phase, the scores of the unseen classes are relatively low and thus cannot compete with those of the seen classes in GZSL. To address this problem, the common practice [Huynh and Elhamifar, 2020][Chen et al., 2022b] is to add a margin to the scores: PM_ec = [P_ec^1 \u2212 \u03f5, ..., P_ec^N \u2212 \u03f5, P_ec^{N+1} + \u03f5, ..., P_ec^K + \u03f5], (14) where \u03f5 is a constant, P_ec^1 \u223c P_ec^N are the seen-class scores, and P_ec^{N+1} \u223c P_ec^K are the unseen-class scores. However, this method leads to misclassification of seen classes that would otherwise be correctly predicted. To maintain the correctness of the predicted class while enhancing the competitiveness of the unseen classes, we propose the Margin-Aware Loss (MAL), which takes the form: L_mal = \u2212log( exp(P_ec^y \u2212 2\u03f5) / ( exp(P_ec^y \u2212 2\u03f5) + \u2211_{y_i\u2260y}^{S} exp(P_ec^{y_i} + \u03f5) + \u2211^{U} exp(P_ec^{y_i}) ) ), (15) where S, U denote the seen and unseen classes, respectively. 
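Assuming P is a score vector over K classes with the seen classes marked by a boolean mask (our own interface, for illustration only), Eq. (15) can be sketched as:

```python
import numpy as np

def margin_aware_loss(P, y, seen_mask, eps=0.1):
    """Sketch of the Margin-Aware Loss of Eq. (15).

    P         : (K,) class-score vector for one sample
    y         : ground-truth class index (a seen class during training)
    seen_mask : boolean (K,) array marking the seen classes
    eps       : the margin constant epsilon
    """
    logits = P.copy()
    logits[seen_mask] += eps           # competing seen classes get +eps ...
    logits[y] = P[y] - 2.0 * eps       # ... while the true class is shifted by -2*eps
    # unseen-class scores are left untouched, keeping them competitive
    return -(logits[y] - np.log(np.exp(logits).sum()))
```

Compared with plain cross-entropy over margin-shifted scores, only the non-target seen classes are suppressed, so the true class keeps its lead while unseen classes gain relative confidence.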
In contrast to the cross-entropy loss, MAL reactivates the confidence of the predicted class to ensure that it stays ahead in the margin-processed scores, while suppressing the confidence of the other seen classes to ensure the competitiveness of the unseen classes. 3.6 Summarize In the training phase, the basic training loss of cExp stems from the classification and the alignment loss, which is expressed as: Lec = Lec mal + \u03b2Lec align, (16) where \u03b2 is a hyperparameter. Similarly, we have the basic training loss of fExp: Lef = Lef mal + \u03b2Lef align. (17) Then the final loss is obtained from the combination of basic losses and distillation loss, denoted as: LDEDN = Lec + Lef + \u03b3Ldistill, (18) where \u03b3 is a hyperparameter. In the inference phase, the recommendations of the two experts are combined and used for final judgment. The predicted result is expressed as: arg max \u03bbe \u00d7 Pec + (1 \u2212\u03bbe) \u00d7 Pef, (19) where \u03bbe is a hyperparameter. 4 Experiments Datasets. We conduct extensive experiments on three benchmark datasets to verify the effectiveness of the method, including CUB (Caltech UCSD Birds 200) [Wah et al., 2011], SUN (SUN Attribute) [Patterson and Hays, 2012], and AWA2 (Animals with Attributes 2) [Xian et al., 2017]. We split all datasets following [Xian et al., 2017]. CUB comprises 200 bird species totaling 11,788 image samples, of which 50 categories are planned as unseen classes. We use class attributes for fair comparison, which contain 312 subattributes. SUN has a sample of 717 different scenes totaling 14,340 images, where 72 categories are unseen classes. Attribute annotations are 102-dimensional. AWA2 includes 50 classes of assorted animals totaling 37,322 samples, of which 10 categories are considered unseen classes. Its number of attributes is 85. Evaluation Protocols. We perform experiments in both the Zero-Shot learning (ZSL) and Generalized Zero-Shot learning (GZSL) settings. 
For ZSL, we employ top-1 accuracy, denoted as T, to evaluate the performance of the model. For GZSL, we record the accuracy on the seen and unseen classes, denoted as S and U, respectively, as well as the harmonic mean H, computed as H = (2 \u00d7 S \u00d7 U)/(S + U). Implementation Details. For a fair comparison, we use a fixed ResNet-101 [He et al., 2016] without finetuning as the feature extractor. We set the batch size to 50 and the learning rate to 0.0001. The RMSProp optimizer with momentum 0.9 and weight decay 1e-4 is employed. For the hyperparameters, [\u03b2, \u03b3] are fixed to [0.001, 0.1]. We empirically set [\u03bbrc, \u03bbe] to [0.8, 0.9] for CUB, [0.95, 0.3] for SUN, and [0.8, 0.5] for AWA2. Subsequent experimental analyses show that the performance of our method has low sensitivity to these hyperparameters. For the attribute clusters, we classify the attribute sets according to their characteristics; the results are shown in Table 1. Table 1: Manual division of attribute clusters (Des. indicates the classification criterion; Num. is the size of the cluster). CUB: head (112), torso (87), wing (24), tail (40), leg (15), whole (34). SUN: function (38), instance (27), environment (17), light (20). AWA2: texture (18), organ (14), environment (13), abstract (40). 4.1 Compared with State-of-the-arts To evaluate the performance of the proposed method, we compare it with various state-of-the-art methods. 
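The harmonic mean metric is simple to reproduce with a one-line helper; for example, DEDN's CUB scores S = 70.0 and U = 70.9 give H of about 70.4:

```python
def harmonic_mean(S, U):
    """GZSL harmonic mean H = (2 * S * U) / (S + U)."""
    return 2.0 * S * U / (S + U)
```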
Generative methods: f-CLSWGAN (CVPR \u203218) [Xian et al., 2018], f-VAEGAN-D2 (CVPR \u203219) [Xian et al., 2019], TF-VAEGAN (ECCV \u203220) [Narayan et al., 2020], E-PGN (CVPR \u203220) [Yu et al., 2020], CADA-VAE (CVPR \u203219) [Schonfeld et al., 2019], FREE (ICCV \u203221) [Chen et al., 2021a], SDGZSL (ICCV \u203221) [Chen et al., 2021c], CE-GZSL (CVPR \u203221) [Han et al., 2021], VS-Boost (IJCAI \u203223) [Li et al., 2023]; Embedding methos: LFGAA (ICCV \u203219) [Liu et al., 2019], APN (NeurIPS \u203220) [Xu et al., 2020], DCN (NeurIPS \u203218) [Liu et al., 2018], HSVA (NeurIPS \u203221) [Chen et al., 2021b]; Region-Attribute modeling: SGMA (NeurIPS \u203219) [Zhu et al., 2019], AREN (CVPR \u203219) [Xie et al., 2019], DAZLE (CVPR \u203220) [Huynh and Elhamifar, 2020], MSDN (CVPR \u203222) [Chen et al., 2022b]. The experimental results are shown in Table 1. Our method achieves the best performance in seven metrics and second place in one metric. For Generalized Zero-Shot Learning (GZSL), we beat VS-Boost by 2% in the H-score of CUB, a fine-grained bird dataset whose attribute annotations possess explicit correspondences to visual features. It demonstrates the superiority of the proposed method for fine-grained modeling. On the SUN and AWA2 datasets, we obtain the best and second-best results in H-score, respectively. These two datasets have fewer attributes and contain complex semantic dimensions, including abstract, concrete, etc. The experimental results demonstrate the effectiveness of the proposed method in deconstructing complex tasks to alleviate the modeling pressure of a single network. In addition, the U-scores of our method on all three datasets are well ahead of the others, demonstrating that the proposed method effectively captures the relationship between attributes and visuals to generalize to unseen classes. For Zero-Shot Learning (ZSL), we achieve the highest top\fCUB SUN AWA2 METHOD ROUTE T U S H T U S H T U S H f-CLSWGAN Gen. 
57.3 43.7 57.7 49.7 60.8 42.6 36.6 39.4 68.2 57.9 61.4 59.6 f-VAEGAN-D2 Gen. 61.0 48.4 60.1 53.6 64.7 45.1 38.0 41.3 71.1 57.6 70.6 63.5 TF-VAEGAN Gen. 64.9 52.8 64.7 58.1 66.0 45.6 40.7 43.0 72.2 59.8 75.1 66.6 E-PGN Gen. 72.4 52.0 61.1 56.2 73.4 52.6 83.5 64.6 CADA-VAE Gen. 59.8 51.6 53.5 52.4 61.7 47.2 35.7 40.6 63.0 55.8 75.0 63.9 FREE Gen. 55.7 59.9 57.7 47.4 37.2 41.7 60.4 75.4 67.1 SDGZSL Gen. 75.5 59.9 66.4 63.0 62.4 48.2 36.1 41.3 72.1 64.6 73.6 68.8 CE-GZSL Gen. 77.5 63.9 66.8 65.3 63.3 48.8 38.6 43.1 70.4 63.1 78.6 70.0 VS-Boost Gen. 79.8 68.0 68.7 68.4 62.4 49.2 37.4 42.5 67.9 81.6 74.1 SGMA Emb.\u2020 71.0 36.7 71.3 48.5 68.8 37.6 87.1 52.5 AREN Emb.\u2020 71.8 38.9 78.7 52.1 60.6 19.0 38.8 25.5 67.9 15.6 92.9 26.7 LFGAA Emb. 67.6 36.2 80.9 50.0 61.5 18.5 40.0 25.3 68.1 27.0 93.4 41.9 DAZLE Emb.\u2020 66.0 56.7 59.6 58.1 59.4 52.3 24.3 33.2 67.9 60.3 75.7 67.1 APN Emb. 72.0 65.3 69.3 67.2 61.6 41.9 34.0 37.6 68.4 57.1 72.4 63.9 DCN Emb. 56.2 28.4 60.7 38.7 61.8 25.5 37.0 30.2 65.2 25.5 84.2 39.1 HSVA Emb. 62.8 52.7 58.3 55.3 63.8 48.6 39.0 43.3 59.3 76.6 66.8 MSDN Emb.\u2020 76.1 68.7 67.5 68.1 65.8 52.2 34.2 41.3 70.1 62.0 74.5 67.7 DEDN(Ours) Emb. 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 2: Comparison with state-of-the-art methods (%). Gen. denotes generative method and Emb. denotes embedding method. \u2020 denotes the region-attribute modeling method. The best and second-best results are highlighted in blue and underlined, respectively. 
CUB SUN AWA2 SETTING T U S H T U S H T U S H cExp w/o Ldistill 74.6 62.4 71.4 66.6 64.0 41.6 35.7 38.4 71.1 62.8 78.8 69.9 fExp w/o Ldistill 75.5 68.1 67.9 68.0 64.0 42.8 35.5 38.7 71.1 62.9 79.1 70.1 DEDN w/o Ldistill 75.7 66.7 70.7 68.6 65.2 47.3 35.0 40.3 72.1 63.8 79.3 70.7 DAN w/o CA\u2217 77.0 58.7 73.6 65.3 65.8 48.5 34.6 40.4 74.6 61.7 79.8 69.6 DEDN w/o Lmal 75.8 73.2 62.5 67.4 66.0 56.5 34.3 42.7 73.1 66.5 72.4 69.3 DAN w/o Lalign 77.6 63.3 72.8 67.7 65.5 47.5 35.3 40.5 74.6 64.8 76.8 70.3 DEDN(full) 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 3: Ablation Study (%). w/o denotes remove the module. CA\u2217denotes channel attention. The best result is highlighted in bold. 1 accuracy on the SUN and AWA2 datasets, as well as competitive performance on CUB. Specifically, our method outperforms TF-VAEGAN by 1.4% on the SUN dataset. On AWA2, we have a 2.4% lead relative to the second-place EPGN. The experimental results validate the superiority of the proposed method. Notably, our method achieves far better results than existing region-attribute modeling methods in both ZSL and GZSL settings, which implies the potential of attribute intrinsic asymmetry and channel information is not fully exploited. 4.2 Ablation Study To evaluate the role of each module, we perform a series of ablation experiments. The results of the experiments are shown in Table 3. Comprehensively, removing any of the modules leads to different degrees of performance degradation, verifying the rationality and necessity of the design of each module. Concretely, it is observed that the performance of cExp is slightly lower than that of fExp without the distillation loss constraint, which indicates the potential research value of the inherent asymmetry of the attributes. Meanwhile, without distillation, the performance of DEDN is higher than both cExp and fExp, demonstrating the complementary properties of the dual experts. 
In addition, it is worth noting that removing the channel attention from DAN results in a substantial performance degradation, demonstrating the importance of channel information. Moreover, the role of Lmal in balancing the confidence of unseen and seen classes can be observed from the metrics U and S: when Lmal is removed, U increases dramatically while S decreases dramatically. Finally, the results also demonstrate the importance of Lalign for constraining semantic consistency. 4.3 Empirical Analysis The influence of parameters \u03bbe and \u03bbrc. We launch a series of empirical analyses, including evaluating the impact of the parameters \u03bbe and \u03bbrc on the final performance. Figure 4 (a) illustrates the sensitivity of the harmonic mean on each dataset with respect to \u03bbe. It can be observed that the influence of \u03bbe is extremely small. Of particular note, when \u03bbe is set to 1 or 0, only cExp or fExp after distillation learning is used in the inference phase. This implies that through mutual distillation learning, each of the two experts learns the strengths of the other, thereby reaching an agreement. Figure 4 (b) illustrates the impact of \u03bbrc. It can be seen that setting \u03bbrc above 0.7 stabilizes the performance; the optimum is reached when it is set between 0.7 and 0.9. The influence of different clustering algorithms. We further evaluate the impact of the clustering algorithm on performance. 
When introducing Table 1, we explained that the attribute clusters are obtained by manually classifying the attribute sets based on their characteristics. In this subsection, we use the K-Means algorithm for attribute clustering as a comparison. The experimental results are shown in Figure 4 (c), where the harmonic mean (H) and top-1 accuracy (T) are reported. From the figure, it can be seen that K-Means performs slightly worse than manual classification but still achieves good results. This again shows that the idea of dividing the attribute set into different clusters holds great promise. The influence of the number of attribute clusters. We evaluate the impact of the number of attribute clusters on performance. The attributes of CUB, SUN, and AWA2 are classified into 6, 4, and 4 clusters, respectively (Table 1). In this subsection, we halve these numbers, i.e., the numbers of attribute clusters for CUB, SUN, and AWA2 become 3, 2, and 2. The experimental results are shown in Figure 4 (d), where half denotes the halved cluster number. We can see that halving leads to a reduction of H by 0.6%, 1.0%, and 6.8%, respectively, and a reduction of T by 0.7%, 0.2%, and 11%, respectively. The results show that a detailed attribute classification helps the model capture more fine-grained information and thus improves performance. Visual analysis of attention. We perform a visual analysis of the attention of the two experts; the results are shown in Figure 3. It can be observed that cExp localizes some global attributes better, such as HeadPatternMaler, BellyColorGrey, and ShapePerchingLike, while fExp provides more detailed and precise localization for some local attributes, such as UpperTailColorGrey, ThroatColorGrey, and LegColorWhite. The two experts collaborate in a complementary way and improve together, which leads to better performance. 
5 Conclusion In this paper, we analyze the impact of attribute annotations and channel information on the regional visual-attribute modeling task. We argue that the intrinsic asymmetry of attributes is one of the important bottlenecks constraining existing approaches and propose a simple yet effective framework named DEDN to address this problem. DEDN consists of two expert networks, one with complete attribute-domain perception to harmonize the global correlation confidence and the other consisting of multiple subnetworks, each focusing on a specific attribute domain to capture fine-grained association information. Both of them complement each other and learn cooperatively. Meanwhile, we introduce DAN as a strong backbone, a novel attention network that incorporates both region and channel knowledge. Moreover, we present a new loss named MAL to train the network. Numerous experiments demonstrate the significant superiority of the proposed approach.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2403.11907v1", + "title": "Distill2Explain: Differentiable decision trees for explainable reinforcement learning in energy application controllers", + "abstract": "Demand-side flexibility is gaining importance as a crucial element in the\nenergy transition process. Accounting for about 25% of final energy consumption\nglobally, the residential sector is an important (potential) source of energy\nflexibility. However, unlocking this flexibility requires developing a control\nframework that (1) easily scales across different houses, (2) is easy to\nmaintain, and (3) is simple to understand for end-users. A potential control\nframework for such a task is data-driven control, specifically model-free\nreinforcement learning (RL). Such RL-based controllers learn a good control\npolicy by interacting with their environment, learning purely based on data and\nwith minimal human intervention. Yet, they lack explainability, which hampers\nuser acceptance. 
Moreover, limited hardware capabilities of residential assets\nforms a hurdle (e.g., using deep neural networks). To overcome both those\nchallenges, we propose a novel method to obtain explainable RL policies by\nusing differentiable decision trees. Using a policy distillation approach, we\ntrain these differentiable decision trees to mimic standard RL-based\ncontrollers, leading to a decision tree-based control policy that is\ndata-driven and easy to explain. As a proof-of-concept, we examine the\nperformance and explainability of our proposed approach in a battery-based home\nenergy management system to reduce energy costs. For this use case, we show\nthat our proposed approach can outperform baseline rule-based policies by about\n20-25%, while providing simple, explainable control policies. We further\ncompare these explainable policies with standard RL policies and examine the\nperformance trade-offs associated with this increased explainability.", + "authors": "Gargya Gokhale, Seyed Soroush Karimi Madahi, Bert Claessens, Chris Develder", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Distill2Explain: Differentiable decision trees for explainable reinforcement learning in energy application controllers", + "main_content": "Introduction The ongoing shift towards sustainable energy is leading to a significant restructuring of the energy sector: largescale integration of distributed renewable energy sources, increased electrification, phasing out of fossil fuel-based generation, etc. [18]. As a result of these changes, there is a growing need for grid balancing services and demand-side flexibility to ensure reliable and secure functioning of the grid. Conventionally, large industries and big consumers were the primary source of such demand-side flexibility. 
However, another important and as-of-yet untapped source of flexibility is the residential sector [17]. Households account for about 25% of the final energy consumption and with growing adoption of rooftop solar PVs, home batteries, heat pumps, etc., represent an appealing source of flexibility [9]. Usually, exploiting this flexibility entails optimizing the use of a battery or other flexible assets to shift the real-time consumption of households while \u2217Under review arXiv:2403.11907v1 [eess.SY] 18 Mar 2024 \fDistill2Explain AI4Energy, UGent \u2013 imec PREPRINT ensuring user comfort [20]. In most cases, the primary objective is to minimize the energy bill of the household, however, prior research has also investigated other objectives such as maximizing self-consumption or participation in other explicit demand response services [22, 28]. An important component for extracting this household flexibility is a home energy management system (HEMS), responsible for solving the underlying, non-linear sequential decision-making problem and calculating the necessary control actions to be taken in real-time. Developing HEMS has been a major research area, with works such as [15, 42] providing an overview of techniques used in literature. A prominent research direction in this context is the use of model-predictive control (MPC) algorithms. MPC forms an advanced control framework that relies on a model of the system to predict the system\u2019s behavior and uses the model to analytically obtain optimal actions [8]. Works such as [10, 14, 31] have investigated the application of MPC in both simulation and real-world scenarios, showing significant performance improvements in such systems. However, as highlighted in [2, 41], accurate models\u2014which an MPC requires\u2014of the system are often difficult to obtain in the residential sector, significantly limiting widespread adoption of MPC-based solutions in this sector. 
The residential sector necessitates control frameworks that can easily scale to many, potentially diverse households. This has led to an increased interest in data-driven control frameworks, especially based on reinforcement learning (RL). RL-based controllers work by continuously interacting with the environment (i.e., the household), collecting experience (data) from these interactions, and using them to learn a control policy that maximizes a predefined reward [36]. Thus, with little human intervention and relying completely on data, such RL-based controllers can learn good control policies. Previous works on RL-based HEMS controllers such as [1, 7, 25] have shown significant improvements over baseline scenarios. However, most RL-based research is limited to simulation environments or specialized buildings. As discussed in [32], this is due to two main factors: (i) the data inefficiency of RL training, and (ii) the opaque nature of obtained control policies. To address (i), i.e., the high amount of data required for training RL-based controllers, previous works such as [3, 39, 44] propose different solutions. However, (ii) raises another important concern related to RL, i.e., the noninterpretable/non-explainable nature of their policies, especially when based on deep neural networks. With limited prior works in this area, we identify this as a significant gap in existing literature and thus introduce our innovative approach to specifically address the (lack of) explainability of RL-based HEMS. More specifically, we propose a policy distillation framework using differentiable decision trees [11, 19]. The key idea is to distill information from pre-trained RL-based controllers into an explainable decision tree, leading to control policies that are explainable and perform nearly as good as the original RL-based policies. 
To the best of our knowledge, this is one of the first works in the energy field to adopt policy distillation using differentiable decision trees for explainable RL. Our main contributions can be summarized as: 1. We propose a novel framework for explainable RL that uses differentiable decision trees and policy distillation for converting black-box RL policies into explainable decision trees (\u00a74). 2. Using different case studies, we detail the explainability of our proposed method, contrasting it with conventional RL-based policies (\u00a76.2). 3. We compare the performance of our method with conventional RL-based policies and other baselines to show the performance trade-off that results from the increased explainability (\u00a76.1). The primary emphasis of this paper is to introduce a novel method for obtaining explainable RL-based control policies. As a proof-of-concept, we validate our proposed approach on a battery-based home energy management scenario using real-world data and present our preliminary findings. Section \u00a77.1 outlines the future work in terms of additional investigation of this method and its application to other, more complex scenarios. 2 Related Work Designing control algorithms for unlocking flexibility in households has been a major field of research, with works such as [15, 42] providing an exhaustive review of prior works including heuristics-based controllers, MPCs, and datadriven algorithms. As discussed in \u00a71, our work focuses on improving the explainability of reinforcement learningbased controllers and hence this section focuses on developments in the fields of reinforcement learning-based control, policy distillation, and explainable AI. We refer interested readers to [15, 28] for more comprehensive reviews of other relevant methods in the context of HEMS and demand-side flexibility. 
2 \fDistill2Explain AI4Energy, UGent \u2013 imec PREPRINT 2.1 Data-driven Home Energy Management Systems A recent research direction in HEMS has been the use of data-driven and mainly reinforcement learning-based controllers [5]. RL-based controllers rely primarily on past data and have minimal modeling requirements as compared to prominent control techniques such as MPCs. For example, works such as [1, 7, 25], demonstrate the applications of RL-based controllers in the context of HEMS. In most of these cases, the RL-based controllers rely on state-of-the-art RL algorithms such as deep Q-networks (DQN) [30], deep deterministic policy gradient (DDPG) [23] and use control policies based on deep neural networks to achieve significant performance improvements (\u223c5-16% as reported in these works). While these deep neural networks are beneficial for achieving good performance, a common drawback associated with their use is their opaque (or black-box) control policy [38]. We aim to address this challenge associated with the explainability of RL-based controllers, providing a framework for distilling a standard RL control policy into an explainable policy. 2.2 Explainable AI Providing explainability for AI-based technology is an important and necessary issue to address for large-scale deployment of machine learning-based solutions, especially in the context of the energy sector. We refer to more exhaustive reviews [24, 29] of available techniques, metrics, and methodologies across different fields such as image recognition, natural language processing, etc. However, as discussed in [27], in the context of energy, research on explainable AI has been largely restricted to applications such as forecasting, modeling, or fault diagnosis. While few works such as [21, 40, 43], present explainable RL-based controllers, they largely rely on decomposition methods or utilize post-hoc explanation frameworks such as SHAPley values, feature importances, LIME, etc. 
Although useful, such post-hoc explanations are typically designed for experts and are not easily accessible to the average end-user, such as a homeowner. Our proposed method differs from such approaches by distilling the deep RL-based control policy into an explainable architecture in the form of differentiable decision trees. Thus, the resulting control policies are structurally explainable, i.e., in the form of rather simple if-then-else rules, that can be easily (a) explained to non-expert end users, and (b) deployed on simple hardware. 2.3 Policy Distillation and Differentiable Decision Trees As discussed in \u00a71, our approach employs policy distillation to trained RL-based controllers and distills their knowledge into a differentiable decision tree structure. This closely follows prior works that have used knowledge distillation strategies to (1) compress large neural networks, or (2) combine knowledge from model ensembles into a single model [4, 16]. Differing from these, works such as [6, 12, 34] adopt knowledge distillation in RL to transform the architecture of the final policy, e.g., into a fuzzy inference system. We follow a similar approach and distill an RLbased control policy into a differentiable decision tree. This enables us to extract knowledge from standard RL-based controllers into simple decision trees which are structurally easy-to-explain and simple to understand. This choice is closely related to the objective of obtaining control policies that are easy to explain (to both energy experts and end-users). Differentiable decision trees (DDTs) or soft decision trees are variants of binary decision trees that can be trained using gradient descent [11, 19]. Prior works such as [16, 26] have applied DDTs to computer vision and regression tasks. For our energy use case, we follow the approach of [6], using DDTs to distill RL-based control policies. 
However, our proposed approach differs from [6] in two ways: (i) we learn deterministic decision trees instead of soft decision trees, and (ii) we learn using observed, explainable features (as opposed to the rather indirect pixel-based learning in [6]). This enables us to learn DDTs using gradient descent and then convert them into simple decision trees for inference (as detailed further in \u00a74). 3 Preliminaries The proposed differentiable decision tree-based policy distillation framework is examined in the context of a home energy management scenario. This section describes the problem formulation for this proof-of-concept and introduces basic concepts related to reinforcement learning (RL). 3.1 Problem Formulation In the context of home energy management, we consider an average Belgian household with a rooftop solar PV installation (with generated power P^{pv}_t), non-flexible electrical load (P^{con}_t), and a home battery. We assume that this household is exposed to varying BELPEX day-ahead prices (\lambda^{con}_t) and a capacity tariff based on peak power. This leads to a joint optimization problem, where the HEMS must minimize the daily cost of both the energy consumption (c^{eng}_t) and the peak power (c^{p}_t). This optimization problem is modeled as: \min_{u_1,\ldots,u_T} \sum_{t=1}^{T} \left( c^{eng}_t + c^{p}_t \right) (1a) s.t.: c^{eng}_t = \begin{cases} \lambda^{con}_t P^{agg}_t \Delta t & : P^{agg}_t \ge 0 \\ \lambda^{inj}_t P^{agg}_t \Delta t & : P^{agg}_t < 0 \end{cases} \quad \forall t (1b) c^{p}_t = \lambda^{cap} \max(P^{agg}_t, P^{agg}_{min}) (1c) P^{agg}_t = P^{con}_t + P^{pv}_t + u_t \quad \forall t (1d) E_{t+1} = \begin{cases} E_t + \eta u_t \Delta t & : u_t \ge 0 \\ E_t + \frac{1}{\eta} u_t \Delta t & : u_t < 0 \end{cases} \quad \forall t (1e) 0 \le E_t \le E_{max}; \quad u_{min} \le u_t \le u_{max} \quad \forall t. (1f) The battery is modeled using a linear model (Eq. (1e)) with charging/discharging actions u_t and current energy level (E_t).
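The per-step costs and battery dynamics of Eq. (1) can be sketched in a few lines. This is a minimal illustration of the model, not the authors' simulator; the efficiency, capacity, and price values in the example call are hypothetical, not taken from the paper.

```python
def step_cost_and_soc(E_t, u_t, P_con, P_pv, lam_con, lam_inj, lam_cap,
                      P_agg_min, eta=0.9, dt=1.0, E_max=10.0):
    """One time step of the HEMS model in Eq. (1); parameter values are illustrative.

    Returns (c_eng, c_p, E_next) for battery action u_t (>0 charge, <0 discharge).
    """
    P_agg = P_con + P_pv + u_t                  # Eq. (1d): aggregate power
    price = lam_con if P_agg >= 0 else lam_inj  # Eq. (1b): consumption vs injection price
    c_eng = price * P_agg * dt
    c_p = lam_cap * max(P_agg, P_agg_min)       # Eq. (1c): capacity cost
    # Eq. (1e): linear battery dynamics with charge/discharge efficiency eta
    E_next = E_t + (eta * u_t if u_t >= 0 else u_t / eta) * dt
    E_next = min(max(E_next, 0.0), E_max)       # Eq. (1f): energy bounds
    return c_eng, c_p, E_next

# Example: charging at 1 kW with 2 kW load, 1 kW PV generation (signed negative here).
c_eng, c_p, E = step_cost_and_soc(5.0, 1.0, 2.0, -1.0,
                                  lam_con=0.1, lam_inj=0.025,
                                  lam_cap=0.2, P_agg_min=2.5)
```

Note the asymmetric efficiency in Eq. (1e): charging stores eta*u_t while discharging draws u_t/eta from the stored energy.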
The cost of energy consumed (ceng t ) depends on the power consumed (P agg t ) and the current injection and consumption prices (\u03bbinj t and \u03bbcon t respectively). Similarly, the capacity cost (cp t ) depends on the actual power consumed and the minimum power capacity contracted [37]. Furthermore, we assume T = 24 hours and a time resolution \u2206t = 1 hour. The above-mentioned problem illustrates a real-world scenario that is pertinent in the present day where a household\u2019s HEMS needs to efficiently leverage the home battery to reduce the energy bill, taking charging/discharging actions dependent on the real-time price, solar PV production, and daily load consumption patterns. Accordingly, we further assume that the HEMS can only take discrete actions (a total of 5 related to 2 charging modes, 2 discharging modes, and 1 \u2018do nothing\u2019 mode). Nonetheless, our method can be easily extended to other action spaces as well. 3.2 Markov Decision Process We model the sequential decision-making problem presented in \u00a73.1 as a Markov Decision Process (MDP) [36]. The states (xt \u2208X) consist of the current price (\u03bbcon t ), battery state-of-charge, non-flexible demand (P con t ), and solar PV generation (P pv t ). The actions (ut \u2208U) are the charging/discharging signals given to the battery. As stated above, we assume a discrete action space of 5 elements (i.e., U = {\u22121, \u22120.5, 0, 0.5, 1}), with the possibility of extending it reserved for future work. The reward function (\u03c1 : X \u00d7 U \u2192R) is defined as the cost incurred for each time step t and is modeled based on Eq. (1b), Eq. (1c). The transition function (f) models the dynamics of the household, taking into account the (controllable) behavior of the battery along with (uncontrollable) real-time solar PV generation, and non-flexible power consumption. 
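The 5-element discrete action space of \u00a73.2 maps directly to battery setpoints; a tiny sketch, where the power limit u_max is an illustrative value not given in the paper:

```python
# Discrete action set U = {-1, -0.5, 0, 0.5, 1} from Section 3.2, interpreted as
# fractions of the battery's power limit. u_max is hypothetical.
U = [-1.0, -0.5, 0.0, 0.5, 1.0]
u_max = 4.0  # kW (hypothetical)

def action_to_power(idx: int) -> float:
    """Map a discrete agent action index to a battery setpoint in kW.

    Negative values discharge, positive values charge, and index 2 is the
    'do nothing' mode.
    """
    return U[idx] * u_max
```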
3.3 Reinforcement Learning In RL, the goal of an agent is to find a policy \pi : X \to U that minimizes the expected T-step cost (J_\pi) starting from an initial state x_0 \in X (Eq. (2)). J_\pi = \sum_{t=0}^{T} \rho(x_t, \pi(x_t), \omega) (2) This expected cost J_\pi can be expressed as a recursive function using a state-action value function, called the Q-function: Q_\pi(x_t, u_t) = \mathbb{E}_\omega[\rho(x_t, u_t, \omega) + \gamma Q_\pi(x_{t+1}, \pi(x_{t+1}))]. (3) Here, \omega represents the stochasticity in the transition function (f) and can be attributed to exogenous factors. The discount factor is represented as \gamma. For our work, we focus on the deep Q-network (DQN) algorithm [30], where the Q-function is iteratively estimated using a deep neural network as a function approximator. The neural network-based Q-function (parameterized as \hat{Q}_\theta) is trained on a batch of data (F) with the following loss term: L = \mathbb{E}\left[\left(\hat{Q}_\theta(x_t, u_t) - \left(c_t + \min_{u \in U} \hat{Q}_{\theta^-}(x_{t+1}, u)\right)\right)^2\right], (4) where c_t = \rho(x_t, u_t, \omega) is the observed cost value during the state transition from x_t to x_{t+1} and the expectation is over all elements of F. For more details related to the DQN algorithm, we refer the readers to [30]. Note that our proposed method is agnostic to the choice of the RL algorithm and can be easily extended to other RL algorithms as well. Figure 1: Illustration of a DDT of depth 2. The rounded boxes depict the decision nodes and the rectangles depict leaf nodes. All p_i represent the path probabilities and p^L_{jk} denotes the leaf probability distributions. 4 Methodology This section details our proposed approach. We first mathematically formulate the differentiable decision tree architecture, followed by the policy distillation process.
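The DQN loss of Eq. (4) in \u00a73.3 can be written out in plain NumPy. This is a sketch under the paper's cost-minimization framing (hence the min over next actions, and no discount factor, matching Eq. (4) as written), not the authors' implementation:

```python
import numpy as np

def dqn_loss(q_online, q_target_next, actions, costs):
    """Squared TD error of Eq. (4) over a batch.

    q_online:      (B, |U|) Q-values Q_theta(x_t, .) from the online network.
    q_target_next: (B, |U|) Q-values Q_theta-(x_{t+1}, .) from the target network.
    actions:       (B,) indices of the actions u_t actually taken.
    costs:         (B,) observed step costs c_t (rewards framed as costs).
    """
    B = len(costs)
    q_taken = q_online[np.arange(B), actions]
    # Costs are minimized, so the bootstrap target takes the min over actions.
    td_target = costs + q_target_next.min(axis=1)
    return np.mean((q_taken - td_target) ** 2)

loss = dqn_loss(np.array([[1.0, 2.0], [3.0, 4.0]]),
                np.array([[0.5, 0.2], [0.1, 0.3]]),
                np.array([0, 1]), np.array([1.0, 1.0]))
```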
4.1 Differentiable Decision Trees (DDTs) Differentiable decision trees or soft decision trees are a variant of ordinary decision trees, introduced in prior works such as [19, 11]. We follow the work presented in [35], where a DDT is formulated as a directed, acyclic graph consisting of nodes and edges. There are two types of nodes in a DDT: (1) decision nodes, characterized by a feature selection weights (\u03b2) and a threshold (\u03d5); and (2) leaf nodes containing a weight vector (wL) to express the probability distribution. While ordinary decision trees have decision nodes represented using a boolean function, DDTs implement a \u2018soft\u2019 decision using the sigmoid function (represented as \u03c3). Consequently, each path (or edge) going out of a decision node carries a probability value that is based on the condition evaluated at that decision. 4.1.1 Decision Node A decision node (represented as rounded boxes in Fig. 1) is modeled as: p = \u03c3 (\u03b2x \u2212\u03d5) (5a) pleft = p (5b) pright = 1 \u2212p (5c) Here, \u03b2 and \u03d5 are trainable parameters representing the feature selection weight and the cut thresholds respectively. Each decision node evaluates a condition based on the selected feature and cut threshold and gives path probabilities for going left (the condition is likely to be True) and going right (the condition is likely to be False). 4.1.2 Leaf Nodes A leaf node l contains a weight vector (wL l ) that leads to an output probability distribution modeled using a SoftMax function. In our case, each leaf output is the probability distribution over all actions in the action space (U), however, this can be extended to estimate exact values (for continuous actions) as well. For this leaf node, the probability for each action um \u2208U is calculated using Eq. (6) pL lm = e\u2212wm P|U| \u03ba=1 e\u2212w\u03ba \u2200m \u2208{1, 2, . . . 
, |U|} (6) Algorithm 1 Depth-2 DDT Formulation 1: Initialize: \beta_i, \phi_i, w^L_k, where i \in \{1, 2, 3\} (decision nodes) and k \in \{1, 2, 3, 4\} (leaf nodes) 2: Input: state x 3: for all i do 4: Feature selection: x_j = \beta_i \cdot x 5: Evaluate condition: p_i = \sigma(x_j - \phi_i) 6: end for 7: Calculate path probabilities: p = \begin{bmatrix} p_1 & 0 \\ 0 & 1-p_1 \end{bmatrix} \cdot \begin{bmatrix} p_2 & 1-p_2 \\ p_3 & 1-p_3 \end{bmatrix} 8: for all k do 9: Calculate leaf probabilities: p^L_k = \{p^L_{k1}, p^L_{k2}, \ldots, p^L_{kn}\} based on Eq. (6) 10: end for 11: Output: o = p[1,1] p^L_1 + p[1,2] p^L_2 + p[2,1] p^L_3 + p[2,2] p^L_4 4.1.3 Creating a DDT Eq. (5) and Eq. (6) are combined to implement a DDT of the required depth. To illustrate this, we now present the formulation of a DDT of depth 2 (as shown in Fig. 1). Such a DDT contains 3 decision nodes and 4 leaf nodes. For each decision node, we have feature selection vectors (\beta_1, \beta_2, \beta_3) and cut-thresholds (\phi_1, \phi_2, \phi_3); each leaf node contains a weight vector (w^L_k). The tree is built based on Algorithm 1. This formulation is used to perform a forward pass of the DDT and train the parameters using gradient descent. At inference, each node is converted from the \u2018soft\u2019 version into a crisp node, resembling an ordinary decision tree. This includes reducing all feature selection parameters (\beta) into one-hot representations (using argmax) and converting all probabilities into \u2018crisp\u2019, boolean values. Note that this method of creating a DDT decomposes all computations into differentiable operations and allows them to be parallelized. Additionally, while a DDT of any depth can be implemented based on Eq. (5) and Eq. (6), for this work we restrict the scope to trees of depth 2 and 3. This choice is primarily driven by the ease of explainability for such (shallow) trees.
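The depth-2 forward pass of Algorithm 1 can be sketched in NumPy. This is an illustrative reimplementation of the equations above, not the authors' code; the state dimension and the one-hot initializations in the example are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax_neg(w):
    # Leaf distribution of Eq. (6): a softmax over -w, written stably.
    e = np.exp(-w - np.max(-w))
    return e / e.sum()

def ddt_depth2_forward(x, betas, phis, leaf_w):
    """Forward pass of the depth-2 DDT in Algorithm 1.

    betas:  (3, d) feature-selection weights for the three decision nodes.
    phis:   (3,)   cut thresholds.
    leaf_w: (4, |U|) leaf weight vectors.
    Returns a probability distribution over the |U| actions.
    """
    p = sigmoid(betas @ x - phis)  # soft decisions p1, p2, p3 (Eq. (5))
    # Path probabilities via the 2x2 matrix product of Algorithm 1, step 7;
    # flattening yields the four root-to-leaf path probabilities.
    top = np.array([[p[0], 0.0], [0.0, 1.0 - p[0]]])
    bottom = np.array([[p[1], 1.0 - p[1]], [p[2], 1.0 - p[2]]])
    paths = (top @ bottom).ravel()
    leaves = np.array([softmax_neg(w) for w in leaf_w])
    return paths @ leaves  # mixture of leaf distributions (step 11)

x = np.array([0.5, -1.0, 0.2, 0.8])  # hypothetical 4-d state
betas = np.eye(3, 4)                 # each node initially selects one feature
phis = np.zeros(3)
leaf_w = np.zeros((4, 5))            # zero weights -> uniform leaf distributions
dist = ddt_depth2_forward(x, betas, phis, leaf_w)
```

Since the four path probabilities always sum to one, the output is a valid distribution; at inference the paper replaces sigmoid and argmax-selected features with crisp boolean tests.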
4.2 Policy Distillation Distillation is a method for transferring knowledge from a teacher model T to a student model S [34]. In the context of reinforcement learning, this refers to transferring knowledge related to a control policy from a trained teacher agent (\pi_T) to a student agent (\pi_S). Typically, this leads to a classification problem where targets are obtained using the outputs of the trained agent. We follow the approach presented in [34], where a DQN-based teacher agent is trained first and then, using a batch of observations (F), a student policy is distilled from the teacher agent. First, the trained teacher agent is used to create a new batch of training data of the form D = \{x_i, q_i\}_{i=1}^{|F|}. Here, q_i is the vector of Q-values for all actions for a state x_i \in F, obtained using the teacher agent (i.e., q_i = \{Q_T(x_i, u_i) \mid \forall u_i \in U\}). Following this, the student agent is trained to mimic this distribution using the Kullback-Leibler (KL) divergence with temperature (\tau), as presented in Eq. (7): L_{\theta_s} = \mathrm{softmax}\left(\frac{q_i}{\tau}\right) \cdot \ln\left(\frac{\mathrm{softmax}(q_i/\tau)}{\mathrm{softmax}(q^S_i/\tau)}\right) (7) Note that q^S_i is the output of the student model parameterized by \theta_s. The temperature \tau is used to adjust the \u2018smoothness\u2019 of the Q-function distribution. 4.3 Our Approach For our work, we assume a teacher agent (policy \pi_T and Q-function Q_T) as a standard DQN agent, and the student agent (\pi_S) consists of the DDT architecture. First, the teacher agent is trained independently using DQN to obtain a control policy. Following this, the trained teacher is used to create target distributions using data collected from previous interactions with the environment. This data is then used to train the student DDT-based agent. Algorithm 2 outlines the training procedure for our proposed approach.
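The temperature-scaled KL objective of Eq. (7) can be sketched as follows. A minimal NumPy illustration, not the authors' implementation; the small epsilon terms are numerical guards added here, and the default tau of 0.03 is the value quoted later in \u00a75.2.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(q_teacher, q_student, tau=0.03):
    """Temperature-scaled KL divergence of Eq. (7), averaged over a batch.

    q_teacher, q_student: (B, |U|) Q-value vectors from teacher and student.
    A small tau (0.03 in the paper) sharpens the teacher distribution toward
    its greedy action before the student is asked to match it.
    """
    p_t = softmax(q_teacher / tau)
    p_s = softmax(q_student / tau)
    # KL(p_t || p_s), with epsilons guarding log(0) and division by zero.
    return np.mean(np.sum(p_t * np.log(p_t / (p_s + 1e-12) + 1e-12), axis=-1))

q = np.array([[1.0, 2.0, 3.0]])
```

The loss vanishes when the student reproduces the teacher's Q-values and grows as their temperature-sharpened distributions diverge.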
Algorithm 2 Training algorithm for our proposed method 1: Initialize: teacher agent T, DDT student S, buffer F. 2: Train teacher: use F and Eq. (4) to train the teacher, i.e., obtain \pi_T and Q_T. 3: Create distillation batch: D = \{x_i, q_i\}_{i=1}^{|F|}, where q_i = \{Q_T(x_i, u_i) \mid \forall u_i \in U\}. 4: Train student DDT: use D and Eq. (7) to train the student (\pi_S) using gradient descent. 5 Experiment Setup We validate our proposed approach on a home energy management scenario using a battery as the source of flexibility. This section presents the simulation environment and details the training and experimental scenarios used. 5.1 Simulator Setup We use a Python-based simulation environment to validate and compare our proposed approach with standard RL-based controllers. This simulator is derived from a real-world Belgian household with rooftop solar PV and is modeled based on Eq. (1). Real demand and solar PV profiles are used along with the battery model presented in Eq. (1e). Additionally, we use hourly, real-world BELPEX prices as consumption prices (\lambda^{con}_t) and a capacity tariff structure based on [37]. The battery parameters are detailed in Appendix A.1. Further, we assume the injection price (\lambda^{inj}_t) is 25% of the consumption price, i.e., \lambda^{inj}_t = 0.25 \lambda^{con}_t. 5.2 Training Setup The training is divided into two parts: (i) training the teacher agent; and (ii) policy distillation to train the student agent. For the teacher agent, we follow the standard DQN implementation and use an \u03f5-greedy training strategy to train the DQN-based teacher agent. Following this, the buffer generated by the DQN-based teacher is used to create the distillation dataset (D). The student agent is then trained using this dataset. To improve the stability of the training process, we set the temperature (\tau) from Eq. (7) to 0.03 to obtain a sharp Q-function distribution.
We list all the hyperparameters used in Appendix A.2. For each agent (teacher and student) we perform 5 seeded runs and compare the mean values over the 5 runs. 5.3 Experiment Scenarios The primary goal of this work is to present a novel approach for obtaining explainable, RL-based policies. We investigate the HEMS scenario described in \u00a73.1, where one intentionally simplified scenario is used to effectively assess the explainability of our proposed approach. We specifically investigate two key scenarios: 5.3.1 Scenario 1: Performance Comparison In this scenario, we investigate the performance of our proposed approach and evaluate whether our method can achieve satisfactory performance compared to standard DQN agents and baseline rule-based controller. For this, we consider a realistic HEMS scenario and use real-world data for load profiles, solar PV, and prices as described in \u00a75.1. The performance is quantified as the total cost for a day, comprising both energy and capacity costs. As baselines, we use the teacher agents as the upper bound of performance and a rule-based control (RBC) policy as the lower bound. The RBC policy is designed similar to the typical built-in control policy of home batteries and aims to maximize selfconsumption. We consider two key variants of price profiles: (i) an artificial, square wave price profile (resembling day-night tariff); and (ii) an actual real-world day ahead price profile. The artificial price profile is a simplified scenario with clear peaks and valleys in the price to provide unambiguous opportunities for energy arbitrage. 5.3.2 Scenario 2: Explainability Assessment To further assess the explainability of our method, we consider a simplified scenario where we exclude solar PV from the system and reduce the state features to 3 components i.e., battery state-of-charge, price, and demand. 
This simplification enables clear visualization of the learned policies, contrasting them with standard DQN policies to qualitatively investigate the explainability of our proposed method.2 Figure 2: Performance of DDT-based students as a HEMS on different price scenarios: (a) artificial, square wave price profile; (b) real-world BELPEX price profile. The dots represent the actual performance of individual models and the box plots show the aggregate performance. The student agents are benchmarked against the teacher agent \u201cDQN\u201d and an RBC. Figure 3: Visual representation of learned decision trees of depth 2 for both price scenarios: (a) learned DDT for the square wave price scenario; (b) learned DDT for the real-world price scenario. The decision nodes are depicted with unshaded boxes and contain the learned features and threshold values. The leaf nodes are depicted by grey boxes and contain the learned distribution. The annotations highlight the actions related to each leaf node. 6 Results This section presents the results obtained for the different scenarios discussed in \u00a75.3. 6.1 Performance Evaluation The performance of our proposed approach using DDTs of depth 2 and 3 is presented in Fig. 2. We note two key observations: (i) both DDT agents clearly outperform the baseline RBC controller; (ii) while the DQN-based teacher performs better than the DDTs, the performance difference (mean) is quite small (\u223c5%). This indicates that our proposed approach can learn satisfactory control policies that outperform the RBC included with standard batteries. Additionally, the DDTs mimic the teacher agents well and sustain minimal deterioration in performance. While the overall performance is satisfactory, Fig. 2 indicates some (training) stability issues with the DDT-based controllers. This is particularly apparent in Fig.
2a, where both DDTs demonstrate a strong performance for 3 of the runs, while the other two instances do not fare as well. This problem can be attributed to the training process, where changes in \u2018upstream\u2019 or hierarchically higher features could have a disproportionate impact on the output distributions. This needs to be investigated further and will be part of future work as discussed in \u00a77.1. Furthermore, examples of learned DDTs of depth 2 are presented in Fig. 3. Note that these DDTs are randomly initialized and over the course of training learn the feature selection (e.g., choosing \u2018demand\u2019 or \u2018solar PV\u2019 as the feature for the first decision node) and the respective cut thresholds via gradient descent. We observed that both DDTs 2Quantitatively assessing the explainability of AI methods remains an open question with most prior works relying on either qualitative methods or user studies for assessment [33]. 8 \fDistill2Explain AI4Energy, UGent \u2013 imec PREPRINT (a) DDT of Depth 2 (b) DDT of Depth 3 Figure 4: Visualizing the trained policy of DDT and DQN-based agent on a simplified HEMS scenario. The heatmaps show the actions chosen by the agents for different values of state-of-charge and price across different demand regions. The bottom row depicts the DQN policy and the top rows show the policy of our proposed DDT-based controllers are straightforward to understand, easily \u2018explaining\u2019 how the controller takes an action. Additionally, the actions taken are intuitive and follow human intuition \u2013 e.g., in Fig. 3b, the controller decides to take a charging action only when solar PV generation is high (greater than 0.47) while demand is low (less than 0.37). Likewise, in Fig. 
3a, the controller discharges with maximum power when both price and demand are high while only discharging by half the power when price is high but demand is low, showing that the learned policy adjusts its decision based on the current as well as expected future demand. We conclude that the results presented in Fig. 2 and Fig. 3 validate the performance and explainability of our proposed DDT-based approach and show that the DDTs learn a simple, easy-to-explain policy and achieve satisfactory control performance. 6.2 Explainability Comparison While \u00a76.1 investigated the control performance of our method, we now examine the explainability of the obtained policies. As described in \u00a75.3, we consider a reduced problem where a house without solar PV is exposed to an artificial square wave price profile. Despite being hypothetical, this scenario reduces the dimensionality of the state space (now reduced to battery state-of-charge, price and non-flexible demand) and allows us to examine and compare the explainability of the learned DDT policy with that of the DQN policy. While the visual representation of the policy as shown in Fig. 3 is useful, we cannot directly compare it with the teacher policy (which is a neural network). Consequently, we make use of policy heatmaps to visualize different control policies [33]. Figure 4 illustrates such heatmaps comparing the teacher (DQN) policy with depth 2 and depth 3 DDT policies. These heatmaps are generated by evaluating the controller\u2019s policy on all possible states (in a fixed subset of the state space) and provide an overview of how an agent would react for different states. Based on Fig. 4, we observe that the DDT heatmaps (top rows of the figure) are consistent, straightforward, and can be easily decomposed into a few rules based on demand, price or state-of-charge. Contrary to this, the DQN-based policy is complex and often non-intuitive in terms of actions taken in specific regions. 
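The heatmap generation described above (evaluating a controller's policy on a fixed grid of states) can be sketched as follows; the `act` interface, the toy stand-in policy, the action encoding, and the grid resolution are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def policy_heatmap(act, socs, prices, demand):
    """Evaluate a controller on a grid of (state-of-charge, price) states
    for a fixed demand level and return the chosen action per grid cell."""
    grid = np.empty((len(socs), len(prices)), dtype=int)
    for i, soc in enumerate(socs):
        for j, price in enumerate(prices):
            grid[i, j] = act((soc, price, demand))
    return grid

# Illustrative rule-based stand-in for a trained agent: discharge (2) when
# the price is high, charge (0) when the price is low and the battery is
# not full, otherwise stay idle (1).
def toy_policy(state):
    soc, price, demand = state
    if price > 0.6:
        return 2
    if price < 0.3 and soc < 0.9:
        return 0
    return 1

socs = np.linspace(0.0, 1.0, 5)
prices = np.linspace(0.0, 1.0, 5)
heatmap = policy_heatmap(toy_policy, socs, prices, demand=0.5)
print(heatmap.shape)  # one action per (soc, price) cell
```

Rendering `heatmap` with any plotting library then yields the kind of per-region policy overview compared in Fig. 4.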
E.g., the DQN policy in the low-demand region prefers to discharge the battery even in the regions where the price is quite low (e.g., regions where price is less than 0.25 and the state of charge is greater than 0.75). Such behavior is counter-intuitive and difficult to understand even for experts, not to mention everyday homeowners (who will actually use such a system). This further highlights the increased explainability achieved using our proposed approach. 6.3 Compute performance Besides explainability, the proposed DDT-based method is computationally light and easy to deploy on any edge device, given that it reduces the control policy into a limited set of if-then-else rules. As a comparison, Table 1 lists the number of parameters used and the storage footprint of the teacher agents and the distilled DDTs used in \u00a76.1. For DDTs, the number of parameters used during training is shown in parentheses along with the parameters used during inference. Unlike DQN, which uses the same set of parameters during training and inference, DDTs require fewer parameters for inference \u2013 e.g., at any decision node, the feature selection parameters can be reduced to a single parameter representing the selected feature. From this table, it can be clearly observed that the proposed DDTs have a significantly smaller compute footprint due to the reduced number of parameters, leading to trained models which are about 200 times smaller than the teacher DQN agents.
Table 1: Comparison of DQN and DDTs based on computational metrics
Algorithm | Number of Parameters | Storage Size
DQN (teacher agent) | 4.8k | 22KB
DDT \u2013 depth 2 | 10 (38) | 4KB
DDT \u2013 depth 3 | 22 (82) | 7KB
To conclude, the comparison in Table 1 further underscores the potential for deploying such controllers in real-world scenarios.
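The parameter accounting behind Table 1 can be reproduced with a short calculation; a sketch, assuming 5 state features and 5 discrete actions (values inferred so that the formula matches the reported counts, not stated in the text):

```python
def ddt_param_counts(depth, n_features, n_actions):
    """Parameter accounting for a differentiable decision tree.

    Training: every internal node learns n_features selection weights
    plus one threshold; every leaf learns an n_actions distribution.
    Inference: each internal node keeps only the selected feature index
    and its threshold; each leaf keeps only its argmax action.
    """
    internal, leaves = 2**depth - 1, 2**depth
    train = internal * (n_features + 1) + leaves * n_actions
    infer = internal * 2 + leaves * 1
    return infer, train

# With the assumed 5 state features and 5 discrete actions, this
# reproduces the "inference (training)" counts of Table 1:
print(ddt_param_counts(2, 5, 5))  # (10, 38)
print(ddt_param_counts(3, 5, 5))  # (22, 82)
```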
7 Conclusion Through this work, we introduced a novel method for obtaining explainable RL-based control policies using differentiable decision trees and policy distillation. The key idea of our work is to distill knowledge from a standard RL-based controller into a simple, easy-to-explain decision tree architecture by purely relying on data. For this, we use differentiable decision trees in a policy distillation setup, training the decision trees using a standard (pre-trained) RL-based controller and gradient descent. The policy distillation step allows extracting knowledge from an RL-based controller, while the differentiable decision tree architecture constrains the policy to be simple and explainable at all times. We validated our method on a battery-based home energy management problem and investigated the control performance and explainability of our proposed approach. As presented in \u00a76, our proposed approach learns a control policy that performs comparably to the teacher DQN agent, while being simple (i.e., \u223c200 times reduction in the number of parameters) and easy to explain. Furthermore, the performance of our DDT-based controllers surpasses that of the commonly used RBC, performing \u223c20\u201325% better than the RBC. 7.1 Limitations and Future Work As discussed in \u00a71, the goal of this work was to introduce this novel method and highlight its potential for future applications in the energy domain. In support of this objective, we identify some limitations within the current work and outline areas for future investigations. The initial consideration pertains to the problem formulation discussed in \u00a73.1, which we will further expand to include thermal models and joint optimization with comfort constraints.
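As a minimal illustration of the distillation idea (a sketch, not the authors' implementation), the following trains the threshold of a single soft decision node by gradient descent to imitate a stand-in teacher policy; a full DDT stacks such nodes and additionally learns feature-selection weights:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Fixed leaf distributions over two actions; beta sharpens the split.
L, R, beta = np.array([0.9, 0.1]), np.array([0.1, 0.9]), 8.0
teacher = lambda x: int(x > 0.5)   # stand-in for a pre-trained DQN policy

t, lr = 0.0, 0.02                  # learnable threshold, step size
for _ in range(3000):
    x = rng.uniform(0.0, 1.0)      # sampled state
    a = teacher(x)                 # teacher's action label
    p = sigmoid(beta * (x - t))    # soft routing probability to right leaf
    pi_a = p * R[a] + (1 - p) * L[a]   # tree's probability of teacher action
    # Gradient of -log(pi_a) w.r.t. t via chain rule: dp/dt = -beta*p*(1-p)
    grad_t = -(R[a] - L[a]) / pi_a * (-beta * p * (1 - p))
    t -= lr * grad_t
print(round(t, 2))  # settles near the teacher's cut at 0.5
```

The learned threshold is directly readable off the node, which is exactly what makes the distilled tree explainable.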
While the current problem mimics a real-world house in the present times, future application scenarios will require more elaborate HEMS that can optimize cost by leveraging flexibility from different sources including building thermal mass, batteries, and electric vehicles. To efficiently deal with such complex scenarios, our future work will explore two main aspects: (i) extending the policy distillation set-up to multi-agent RL settings, where simple, shallow DDTs can be trained per flexibility asset; and (ii) domain-knowledge-induced feature engineering (using previous works such as [13]) to compress information and allow the use of shallow DDTs. While large DDTs can be trained for such complex scenarios, we intend to focus on \u201cshallow\u201d DDTs that are intuitively easier to explain (or more explainable) as compared to \u201cdeep\u201d trees. Besides this, another limitation of our current approach is the occasional instability in the training process related to the DDTs. As noted in \u00a76, this training instability could be attributed to the tree structure of the DDT, with features hierarchically higher up in the tree significantly affecting the outputs. This needs to be investigated further to identify possible solutions to stabilize the learning process. This includes effective regularization strategies, warm starting, or constraining the decisions being learned. The latter seems particularly useful for DDTs of higher depth, where some decisions are conflicting or redundant (as shown in Appendix B). The third area that needs to be addressed further is the deployment of such an algorithm in real-world scenarios and performing a user trial to further validate the explainability of our method. While non-trivial, such a pilot study is needed to investigate the acceptance of such a HEMS as well as the challenges associated with maintaining such a system.
This will further allow us to investigate more advanced approaches such as human-in-the-loop training and intervention strategies to maximize the decision tree architecture and develop a robust, data-driven controller that can be widely deployed across houses." + }, + { + "url": "http://arxiv.org/abs/2403.02757v1", + "title": "In-Memory Learning: A Declarative Learning Framework for Large Language Models", + "abstract": "The exploration of whether agents can align with their environment without\nrelying on human-labeled data presents an intriguing research topic. Drawing\ninspiration from the alignment process observed in intelligent organisms, where\ndeclarative memory plays a pivotal role in summarizing past experiences, we\npropose a novel learning framework. The agents adeptly distill insights from\npast experiences, refining and updating existing notes to enhance their\nperformance in the environment. This entire process transpires within the\nmemory components and is implemented through natural language, so we characterize\nthis framework as In-memory Learning. We also delve into the key features of\nbenchmarks designed to evaluate the self-improvement process. Through\nsystematic experiments, we demonstrate the effectiveness of our framework and\nprovide insights into this problem.", + "authors": "Bo Wang, Tianxiang Sun, Hang Yan, Siyin Wang, Qingyuan Cheng, Xipeng Qiu", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "In-Memory Learning: A Declarative Learning Framework for Large Language Models", + "main_content": "Introduction The essential means by which intelligent organisms align themselves with changing environments is through learning and memory, which can be categorized into two distinct types in Neuroscience: declarative and non-declarative (Squire and Zola, 1996).
The memory acquired through non-declarative means is difficult to express in language, as depicted in Figure 1. Conversely, declarative memory empowers individuals to convey past experiences with language, thus preparing them to navigate a wider array of scenarios with greater flexibility. When approaching new tasks or environments, humans summarize rules from initial experiences, subsequently refining and applying these rules to similar problems. This iterative refinement enhances understanding and effectiveness, gradually increasing familiarity with the task or environment. When it comes to Deep Neural Networks, if we liken learning through gradient back-propagation to a form of non-declarative learning, it can be observed that large language models (Brown et al., 2020) benefit from an explicit formulation of their context window. Whether it involves generating the thought process using a Chain of Thought (Wei et al., 2023) approach or providing input-output pairs as examples via In-context learning (Dong et al., 2023), large language models gain improvements similar to those obtained through gradient-based methods, reducing the loss value and enhancing their performance in downstream tasks. As shown in Figure 1, this method mirrors declarative learning, where understanding context enhances the network\u2019s performance. By leveraging this unique characteristic, agents built upon large language models can comprehend their environment, plan, and make decisions based on organizational context (Shridhar et al., 2020; Xi et al., 2023). This approach enables them to tackle a broad spectrum of problems effectively, which attracts the interest of many researchers. (* Equal contribution. \u2020 Work done during internship at Shanghai Artificial Intelligence Laboratory. \u2021 Corresponding author.)
Given that LLM-based agents exhibit capabilities similar to intelligent organisms, and recognizing that these abilities empower them to align with the natural world and enhance cognition, a natural question arises: Can agents develop similar self-improvement capabilities? Research on autonomous agents (Qin et al., 2023; Schick et al., 2023) usually incorporates the use of tools to formulate their context window autonomously, including strategies for teaching agents to utilize these tools or the design of processes that involve tools (Wang et al., 2023), such as retrievers. The enhancement in agent performance is significantly influenced by the performance of these tools, which cannot improve themselves concurrently.
arXiv:2403.02757v1 [cs.CL] 5 Mar 2024
Figure 1: Learning Pattern. Non-declarative learning, as illustrated by the left figure, involves skills such as distinguishing relative pitches in music through practice. It\u2019s a challenge to express verbally. In contrast, declarative learning, exemplified by the right figure, refers to the acquisition of knowledge that can be explicitly stated, such as the introduction of the law of universal gravitation. For neural networks, models can develop the capability to answer questions through a gradient-based approach, as well as complete specific tasks using carefully designed prompts.
This process closely resembles the learning process shown in the left part. The central question we are concerned about is whether agents can self-enhance in the absence of human-labeled data, i.e., relying only on the inherent capability of the model itself. In this research, we propose a novel perspective on the learning process of agents, drawing inspiration from declarative learning methods employed by humans. We introduce a comprehensive learning framework, termed In-Memory Learning (IML), which encompasses three pivotal components: induction, revision, and inference. The learning process is completed in the memory component, which is what the name refers to. In analogy to the gradient calculation process in gradient-based learning, agents perform note induction from their current experience to identify an update direction, subsequently updating their previous notes. Through iterative updates, the rules summarized by the agents progressively align with the correct direction. Our experiments illustrate that, by applying this framework, the model can self-enhance without the requirement for human-annotated labels. The successful implementation of this method necessitates three distinct capabilities: \u2022 Induction: the distillation of general principles from current experiences. \u2022 Revision: the refinement of pre-existing guidelines. \u2022 Inference: the application of these updated rules for logical reasoning. It\u2019s worth noting that we do not directly compare our framework with those that incorporate tools within agent systems, as our objective is to demonstrate the inherent potential for agents to self-improve. Instead, we further delve into an analysis of the model\u2019s capabilities and the impact of various IML parameters. Our main contribution is: \u2022 We discuss the essential properties that a benchmark requires to evaluate self-improvement abilities and have implemented a preliminary version of such a benchmark.
\u2022 We introduced a novel framework named In-memory Learning and carried out a comprehensive series of systematic experiments to investigate its effectiveness and capabilities. 2 Related Work 2.1 LLM-Agent Discussions about agents have erupted, given the capacity of large language models to tackle a variety of language tasks, as previously mentioned. A particularly intriguing question arises regarding the self-improvement of these agents. In numerous studies, agents have demonstrated the ability to leverage tools to enhance their performance (Yao et al., 2022b; Schick et al., 2023; Qin et al., 2023; Shen et al., 2023; Karpas et al., 2022; Li et al., 2023). In the Reflexion (Shinn et al., 2023) framework, the model takes multiple trials on the same question, necessitating specific conditions to determine the appropriate moment to stop attempts. Similar to Voyager (Wang et al., 2023), we believe that the agent should operate within a stable environment over a long period. In practical scenarios, where labels are hard to obtain, the agent must develop an understanding of its surroundings and enhance its capabilities, diverging from the traditional notion of an autonomous agent. The concept of the \u2018lifelong agent\u2019 was later developed in Voyager, to which our methods are specifically tailored. It\u2019s worth noting that the common practice for retriever-based agents is to directly acquire related experiences and integrate them into the context (Wang et al., 2023), which essentially is in-context learning. Consequently, we have selected in-context learning as our foundational baseline. ExpeL (Zhao et al., 2023) also explores a similar process. The primary distinction from our work is that we focus on iterative improvement and conduct systematic experiments on it, while ExpeL primarily emphasizes the benefits of cross-task experience.
2.2 Agent Benchmark Existing benchmarks for agents assess model capabilities across multiple dimensions, such as the ability to function as an agent (Liu et al., 2023), the planning skills necessary to address real-world issues (Shridhar et al., 2020; Yao et al., 2022a; Fan et al., 2022; Ahn et al., 2022) and their ability to complete tasks iteratively (Mohanty et al., 2023). The methods used to assess agents\u2019 performance vary widely, encompassing human evaluation through interviews (Park et al., 2023; Lin et al., 2023) and subjective assessments (Choi et al., 2023). However, there is a lack of benchmarks specifically designed to directly evaluate the self-improvement ability of agents (Xi et al., 2023). We will discuss the characteristics of such a benchmark in the next section, which form the basis of our proposal for a new benchmark to measure agents\u2019 progression.
Figure 2: Backward Process. There is a similar structure between the gradient-based learning process and In-memory Learning (ours): in the non-declarative case (fine-tuning), a gradient \u2207 = \u2202L/\u2202W is calculated from samples in a batch and the parameters are updated as W\u2032 = W + \u0394W; in the declarative case (In-memory Learning), notes \u03d5 are induced from samples in the context window (induction phase) and revised into \u03d5\u2032 (revision phase).
3 Meta Implementation The entire operation of an LLM-based agent can be formulated as a Partially Observed Markov Decision Process (Carta et al., 2023) (S, V, A, T, R, G, O, \u03b3), which we briefly introduce here. In this context, S is the state space while V represents the vocabulary of the language model. A \u2282 V^N is the action space and G \u2282 V^N is the goal space. The transition function is represented by T : S \u00d7 A \u2192 S, the reward function by R : S \u00d7 A \u00d7 G \u2192 \u211d, and the observation function by O : S \u2192 V^N.
Utilizing this definition, we can consequently define the problem of the Life-long Agent in section 3.1, discuss the characteristics of a benchmark assessing self-improvement capabilities in section 3.2, and define the In-memory Learning framework in section 3.3. 3.1 Self-improved Agent Agents in real-world scenarios are often tasked with consistently performing some specific types of tasks G_spec \u2282 G \u2282 V^N over an extended period. The question of the self-improved agent centers on whether agents can enhance their performance without relying on human-labeled data, since it\u2019s difficult to obtain such golden labels. Consequently, the reward function is categorized into two scenarios: one that utilizes fabricated labels such as AI feedback, and the other in which only the correctness of outcomes can be known, since it\u2019s often clear whether one solution has completed the task or not. In the implementation discussed below, we
[Figure 3 shows a truth table mapping feature bits (x) to labels (y), per-dimension adjective lists (e.g., Size: 0 \u2192 huge [\u201chuge\u201d, \u201cmassive\u201d, \u201ccolossal\u201d, \u2026], 1 \u2192 tiny [\u201ctiny\u201d, \u201cminuscule\u201d, \u201cpetite\u201d, \u2026]; Color: 0 \u2192 bright [\u201cvibrant\u201d, \u201cradiant\u201d, \u201cdazzling\u201d, \u2026], 1 \u2192 dim [\u201cdim\u201d, \u201cdusky\u201d, \u201cmurky\u201d, \u2026]), and a template prompt \u201cThis creature is ___ in size, with a ___ coloration. \u2026 The being could possibly be which kind of being? Choose one from Creature A, Creature B, Creature C and Creature D.\u201d, e.g., a test question with \u201cmassive\u201d and \u201cdusky\u201d filled in, answered with \u201cCreature B\u201d.]
Figure 3: The construction process of our benchmark.
We pre-define a correspondence from the truth table to the labels (y) and wrap it with natural language. Each column of the truth table represents a dimension of creatures (x_i), corresponding to two lists of adjectives. For instance, the first column stands for the size of the creature, associating the value 0 with huge and 1 with tiny. A combination of words is randomly selected from the sets of adjectives and then interconnected with predefined prompts to formulate the final questions. focus on the latter scenario. R \u2192 \u211d, if fake labels exist; {0, 1}, else. (1) where \u211d on the right-hand side stands for the real set. The \u2018else\u2019 condition pertains to the correctness of the answer, 1 for correct and 0 for wrong. 3.2 Benchmark The benchmark for assessing an agent\u2019s self-improvement ability should have certain essential characteristics. It should have a stable and clear testing goal to ensure that any progress by the model is noticeable. Additionally, the relationships within the data need to be learnable. Specifically, the least effective approach for self-improvement involves exhaustively searching through all possible solutions, which is meaningless here. Therefore, a relationship between the data is necessary. This also aligns with real-world scenarios, where common rules often exist across different experiences, such as Newton\u2019s law of universal gravitation. Moreover, there must be enough data to make the problem statistically significant and solvable. Since existing benchmarks are not designed to assess the ability for self-improvement, most of them do not fully align with the required features. For example, HotpotQA (Yang et al., 2018), used in Reflexion, is primarily intended to evaluate multi-hop QA questions. However, upon analyzing errors made by agents that were tested by Exact Match (see Appendix A), we find that many of them are due to formatting issues, which are not expected and can\u2019t be generalized.
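Such formatting failures are easy to reproduce with Exact Match; a small sketch, assuming the `Finish[...]` reply convention used later in the implementation (the extractor and examples are illustrative):

```python
import re

def extract_answer(reply):
    """Pull the answer out of an agent reply of the form 'Finish[...]';
    fall back to the raw reply if the pattern is absent."""
    m = re.search(r"Finish\[(.+?)\]", reply)
    return (m.group(1) if m else reply).strip()

def exact_match(reply, gold):
    return int(extract_answer(reply) == gold)

# A formatting slip fails Exact Match even though the underlying answer
# is right -- the failure mode observed when scoring agents this way.
print(exact_match("Finish[Creature B]", "Creature B"))         # 1
print(exact_match("The answer is creature B.", "Creature B"))  # 0
```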
As a result, we developed a straightforward classification dataset. We established a clear relationship between features and labels, making them learnable. The classification problem is suitably chosen because each correct feature-label match enhances the classifier\u2019s accuracy. The detailed information about the benchmark is introduced in Section 4.1.1. 3.3 In-memory Learning Within a Partially Observable Markov Decision Process (POMDP) trajectory (s_0, o_0, a_0, s_1, r_1, \u2026, s_n, r_n), an agent selects an action based on P(a|s, o, \u03b8), where \u03b8 represents all the variables, including prompts and parameters. Uniquely in our framework, we use the symbol \u03d5 to differentiate context notes from the parameters of LLMs. The parameters of LLMs are frozen here and will therefore be omitted for simplicity. We will further explore the phases of the In-Memory Learning process in a formulaic manner below and introduce the details of implementation in section 4.1. 3.3.1 Inference Phase In the inference phase, agents get the observation o about the current state s, and select an action a \u223c P(a|s, o, \u03d5). The reward r that the model receives aligns with the concept of the self-improved agent mentioned before. The trajectory \u03c4 = (s_0, o_0, a_0, s_1, r_1) is recorded for later phases. This phase will continue until a specified threshold is reached.
[Figure 4 plots accuracy (%) over learning steps for llama2-70b-chat, GPT-3.5, llama2-7b-chat, llama2-13b-chat, and ICL (4-shot).]
Figure 4: Accuracy curve over learning step. The solid lines represent the smoothed curves. Both llama2-70b-chat and GPT-3.5-turbo show an upward trend. Llama2-13b-chat also shows continuous improvement, but its performance is limited by its inference capabilities. Llama2-7b-chat initially improved but experienced a decline in later steps.
3.3.2 Induction Phase After collecting a set of trajectories, the agent aims to derive general notes \u03d5_batch from them.
This process is completed using natural language descriptions, similar to calculating the gradient of batch data in gradient-based learning approaches, as in Figure 2. The size of the batch for this inductive process is limited by the length of the context window, making the topic of long context windows particularly significant here. 3.3.3 Revision Phase Like updating the parameters in gradient-based learning, the previous context notes \u03d5 will be updated based on the insights \u03d5_batch gained during the induction phase. The updated notes \u03d5\u2032 will then be utilized in the subsequent inference phase. The correctness of the updating direction is ensured statistically, since common rules are consistent across different experiences. 4 Experiments In this section, we first outline how we implemented the entire system in section 4.1 and then carry out systematic experiments to evaluate its performance. 4.1 Implementation Details 4.1.1 Benchmark To assess the self-improvement capabilities of agents, we developed a four-class classification problem. This problem involves a question describing one creature in 10 dimensions, as in Figure 3, where every dimension is described by two opposing sets of adjectives. For instance, within the size dimension, one set of adjectives represents \"huge\" while the other represents \"tiny\". Each description uniquely matches a specific entry in a truth table that spans ten dimensions, thereby directly correlating to a single label. In the real scenario, when hearing the name of a new species, some features can be inferred because the naming process often includes hints about its characteristics. So we use abstract labels, like \"Creature A\", to avoid bringing in this kind of prior information. For each entry of the truth table, four unique combinations of adjectives are randomly selected and 896 entries are held out for extension in the future. In the end, we get 3200 shuffled samples.
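The construction above can be sketched as follows; the adjective lists are abbreviated stand-ins for the paper's ten dimensions, and the exact bit-to-label mapping is an assumption for illustration:

```python
import random

random.seed(0)

# Two opposing adjective sets per dimension (abbreviated, illustrative
# stand-ins for the paper's ten dimensions).
DIMENSIONS = {
    "size":  (["huge", "massive", "colossal"], ["tiny", "minuscule", "petite"]),
    "color": (["vibrant", "radiant", "dazzling"], ["dim", "dusky", "murky"]),
    "speed": (["swift", "rapid", "brisk"], ["sluggish", "slow", "languid"]),
}
LABELS = ["Creature A", "Creature B", "Creature C", "Creature D"]

def make_sample(bits):
    """Wrap one truth-table row in natural language. The label is assumed
    to depend only on the first two (distinguishing) bits; the remaining
    bits act as distractors."""
    words = [random.choice(DIMENSIONS[d][b]) for d, b in zip(DIMENSIONS, bits)]
    question = (f"This creature is {words[0]} in size, with a {words[1]} "
                f"coloration. It moves in a {words[2]} manner. "
                "Which kind of being could it be?")
    label = LABELS[bits[0] * 2 + bits[1]]
    return question, label

q, y = make_sample((0, 1, 1))
print(y)  # Creature B
```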
The first two features are designed to be the distinguishing features while the others are distractors. The accuracy achieved on this task can significantly demonstrate the extent to which the agents have grasped these rules. 4.1.2 Inference Phase Implementation During the inference phase, the agent needs to identify which creature the description refers to. Initially, the notes \u03d5 are set to \"no idea\". A task-unrelated example is provided to guide the answering format of the agent, and we use Exact Match to assess the accuracy of the agents\u2019 answers. By default, the agent processes 320 samples in a single step and saves the trajectories for use in the induction and revision phases. Following the implementation of Reflexion (Shinn et al., 2023), we instruct the agent to respond with \"Finish[Correct Answer]\".
Table 1: Ability Test. The inference test applies five distinct formats of oracle notes to assess accuracy on the same test split. In the induction test, agents summarize 80 groups of notes from the same 320 data samples; five randomly sampled groups are then used to make inferences on the original 320 data samples with the same model. The revision test involves merging 5 pairs of notes into single notes. The accuracy differences are calculated between the minimum accuracy of the pairs and their merged version.
Model | Inference test (acc) | Induction test (acc) | Revise test (\u2206acc)
llama2-7b-chat | 37.11 (\u00b1 9.46) | 43.31 (\u00b1 5.02) | -3.81 (\u00b1 12.36)
llama2-13b-chat | 42.91 (\u00b1 6.59) | 38.19 (\u00b1 18.67) | 17.63 (\u00b1 8.48)
llama2-70b-chat | 58.67 (\u00b1 9.51) | 48.44 (\u00b1 6.3) | 1.063 (\u00b1 5.09)
GPT-3.5-turbo | 92.94 (\u00b1 7.38) | 45.06 (\u00b1 3.84) | 2.75 (\u00b1 7.05)
4.1.3 Induction Phase Implementation After gathering trajectories in the previous phase, the agent identifies common features between them and summarizes their findings into batch notes \u03d5_batch.
Due to the constraint of the context window, the induction phase is executed in mini-batches while the results \u03d5_minibatch are accumulated iteratively, summarizing into \u03d5_batch. We will delve into this process in the next section, demonstrating how such accumulation enhances stability, mirroring the effect of momentum observed in gradient-based learning. The notes are summarized for each creature individually and are later combined in the revision phase. 4.1.4 Revision Phase Implementation Ultimately, the context notes for each creature are individually adjusted based on the batch notes and are then merged. We illustrate how the degree to which the instructions prompt the agent to make changes can impact the stability of the optimization process, similar to the momentum in gradient-based learning. Both the induction and revision phases occur within the agents\u2019 memory, leading us to name this approach In-memory Learning. 4.2 Compared with In-Context Learning We choose In-context Learning as our baseline, and the final result is presented in Figure 4. The result of in-context learning conducted on llama2-70b-chat is slightly better than random guessing. We use 4-shot since our benchmark consists of 4 labels, and the examples were manually chosen at random, ensuring the correctness of the answers.
Figure 5: Momentum test (accuracy over learning steps for the No, Partially, and Full Momentum settings).
Figure 6: Accumulation test (accuracy over the number of data for accumulation steps 128, 200, and 320).
To validate the effectiveness of our approach, we conduct experiments using various models and analyze the outcomes. 4.3 Test on Various Models As depicted in Figure 4, the performance of GPT-3.5 and llama2-70b-chat shows a continuous improvement trend.
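Putting the phases of \u00a74.1 together, the overall loop can be sketched as a runnable skeleton; the `llm` stub below merely echoes text so the sketch executes, standing in for calls to a frozen model such as llama2-70b-chat, and the prompt wording is illustrative:

```python
def llm(prompt):
    """Placeholder for a frozen-LLM call; a real system would query a model
    such as llama2-70b-chat here. This stub echoes to keep the sketch runnable."""
    return "In summary, " + prompt[-60:]

def inference(notes, samples):
    # Answer each question with the current context notes in the prompt.
    return [(x, llm(f"Notes: {notes}\nQuestion: {x}\nFinish[...]")) for x in samples]

def induction(trajectories, minibatch=2):
    """Summarize common rules minibatch-by-minibatch, accumulating the
    partial notes (the momentum-like accumulation described above)."""
    batch_notes = ""
    for i in range(0, len(trajectories), minibatch):
        chunk = trajectories[i:i + minibatch]
        batch_notes = llm(f"Previous: {batch_notes}\nExperiences: {chunk}")
    return batch_notes

def revision(notes, batch_notes):
    # Update the previous notes based on the freshly induced batch notes.
    return llm(f"Previous notes: {notes}\nBatch notes: {batch_notes}\nModify if needed.")

notes = "no idea"                      # initial context notes
for step in range(3):                  # one step = inference + induction + revision
    traj = inference(notes, ["sample-1", "sample-2", "sample-3", "sample-4"])
    notes = revision(notes, induction(traj))
print(notes.startswith("In summary"))  # True
```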
However, llama2-13b-chat and llama2-7b-chat only improved a little, and there is even a downward trend in the later steps for llama2-7b-chat. We analyze this outcome in three dimensions: the abilities of inference, induction, and revision.
[Figure 7 shows example prompts and replies for the three settings (No Momentum, Partially Momentum, Full Momentum), with the freedom of change decreasing from high to low across them.]
Figure 7: Momentum example. In the No Momentum setting, agents have the freedom to create new notes without any constraints. In the Partially Momentum setting, agents are required to start with the initial words of the previous notes, which limits their freedom to make changes. The Full Momentum setting requires agents to make changes if necessary while appending the previous notes at the end of the prompts. The red underlined part in the reply represents the modified content compared to the previous notes.
4.3.1 Inference Ability We assess the inference ability of agents with oracle notes, which indicate the upper bounds the agents can achieve in the inference phase. Given the sensitivity to the format of the prompt, we evaluate the accuracy of 5 different styles and compute the statistical result.
The results shown in Table 1 reveal that both the llama2-7b-chat and llama2-13b-chat models attain around 40 percent accuracy, explaining why the trend of improvement is not markedly evident: the maximum accuracy achievable with oracle notes is not high enough. 4.3.2 Induction Ability The induction ability refers to the agent\u2019s capacity to summarize the common rules across different samples. In our study, four base models are tasked with performing induction on the same set of 320 samples, generating 80 groups of notes. We randomly select 5 of these 80 groups and use the llama2-70b-chat model to make inferences on the 320 samples. The results are presented in Table 1, indicating that llama2-70b-chat is the best while, unexpectedly, llama2-13b-chat is the worst. The performance of GPT-3.5-turbo falls short of that achieved by llama2-70b-chat, providing insight into why GPT-3.5 did not exhibit superior overall performance. 4.3.3 Revision Ability During the revision phase, the agent is required to summarize two notes into one iteratively. To evaluate this capability, we devised a targeted experiment. Utilizing the notes collected by the llama2-70b-chat model, we randomly select 5 pairs of notes, and the agents need to merge each pair. We assess the agents\u2019 inference accuracy before and after the revision process. The difference between the accuracy of the merged notes and the lower accuracy of the original pair serves as a measure of the agents\u2019 revision proficiency. The result is presented in Table 1. The llama2-7b-chat model exhibits a decrease in accuracy, which accounts for the model\u2019s declining performance in Figure 4. Conversely, the llama2-13b-chat model is the strongest in this ability test. 4.4 Effect of Parameters In our framework, certain key parameters influence the learning process.
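The revision-proficiency measure from Sec. 4.3.3 (merged-notes accuracy minus the lower accuracy of the original pair) can be written as a small helper (an illustrative sketch; the function name is ours, not the paper's):

```python
def revision_gain(acc_merged, acc_pair):
    """Revision proficiency: accuracy of the merged notes minus the lower
    accuracy of the original note pair. Positive values mean the merge helped."""
    return acc_merged - min(acc_pair)
```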
To explore these effects further, we conduct experiments focusing on the momentum and the accumulation step, which are crucial for the stability of the learning process. We conduct the experiments on the llama2-70b-chat model. 4.4.1 Effect of Momentum Although natural language is discrete, our framework incorporates a momentum mechanism. As illustrated in Figure 7, instructing the model to initiate responses using the initial words of the previous notes acts as a form of momentum, constraining the generative freedom. Additionally, we incorporate basic statistical information regarding the quantity of samples processed by the agents. We conduct comparative analyses across different momentum settings, with the results shown in Figure 5. In our experiments, the full momentum setting yields the most stable performance, whereas no momentum leads to the opposite. This suggests that integrating a momentum-like mechanism can significantly enhance the model\u2019s consistency.
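The three momentum settings can be sketched as prompt templates (an illustrative paraphrase; the wording is not the paper's exact prompts):

```python
def build_revision_prompt(batch_notes, previous_notes, momentum="full"):
    """Prompt templates for the No / Partially / Full Momentum settings."""
    base = "Below are two notes of the batch notes:\n" + batch_notes + "\nSummarize them."
    if momentum == "none":
        return base  # free to create entirely new notes
    if momentum == "partial":
        # constrain the reply to start with the previous notes' opening words
        opening = " ".join(previous_notes.split()[:2])
        return base + "\nBegin with: " + opening
    # full momentum: ask for modifications, appending the previous notes
    return base + "\nModify the previous notes:\nPrevious notes: " + previous_notes
```

Moving from "none" to "full" progressively lowers the freedom of change, which is the knob the momentum experiment varies.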
Figure 8: Case study. 4.4.2 Effect of Accumulation Step Another critical parameter in our framework is the accumulation step count, which can influence the learning process in two distinct ways. As described in the implementation section, the direction of the optimization process is determined by statistical properties, and the accumulation step matters because of the fixed minibatch size imposed by the context window. Additionally, our assessments of accuracy during the subsequent inference phase are also influenced by the volume of data. In our experiment, we examine three accumulation step values: 128, 200, and 320, with the results presented in Figure 6. As observed, a smaller accumulation step leads to greater instability in the learning process. 4.5 Trapped in Local Minimum An interesting observation about the learning process is the presence of optimization challenges analogous to saddle points in gradient-based learning. When tasked with modifying existing notes based on new experiences, the model may encounter difficulties in updating, even when the new experience contradicts the existing notes. This issue tends to occur more frequently in the intermediate and advanced stages of the iterative update step. Since we have observed this phenomenon across various models, including GPT-3.5-turbo, we believe that it is not solely attributable to the diversity of training data. Rather, it appears as if the copy mechanism of transformers is triggered, with the end-of-sequence token remaining the most likely outcome after repeating the previous notes, even in the presence of changed experiences. We have not identified the minimum support set to delve deeper into this question and leave it for future exploration. Figure 8 shows a simplified example. 5 Conclusion In conclusion, we formally define the problem of self-improving agents.
We discuss the key properties of a benchmark designed to evaluate agents\u2019 self-improvement capabilities and introduce a novel framework called In-memory Learning. Our systematic experiments demonstrate the effectiveness of this method and provide valuable insights into this domain. Limitations Multimodality has the potential to incorporate richer information, which can make agents more adaptable to complex situations. In our current work, we primarily focus on text and do not incorporate multi-modality situations. This aspect is left for future research. Due to the constraint of budget, we did not conduct experiments with GPT-4, leaving unanswered questions about its potential effectiveness as a learner and the extent of improvements it can achieve." + }, + { + "url": "http://arxiv.org/abs/2404.11958v1", + "title": "Not All Voxels Are Equal: Hardness-Aware Semantic Scene Completion with Self-Distillation", + "abstract": "Semantic scene completion, also known as semantic occupancy prediction, can\nprovide dense geometric and semantic information for autonomous vehicles, which\nattracts the increasing attention of both academia and industry. Unfortunately,\nexisting methods usually formulate this task as a voxel-wise classification\nproblem and treat each voxel equally in 3D space during training. As the hard\nvoxels have not been paid enough attention, the performance in some challenging\nregions is limited. The 3D dense space typically contains a large number of\nempty voxels, which are easy to learn but require amounts of computation due to\nhandling all the voxels uniformly for the existing models. Furthermore, the\nvoxels in the boundary region are more challenging to differentiate than those\nin the interior. In this paper, we propose HASSC approach to train the semantic\nscene completion model with hardness-aware design. 
The global hardness from the\nnetwork optimization process is defined for dynamical hard voxel selection.\nThen, the local hardness with geometric anisotropy is adopted for voxel-wise\nrefinement. Besides, self-distillation strategy is introduced to make training\nprocess stable and consistent. Extensive experiments show that our HASSC scheme\ncan effectively promote the accuracy of the baseline model without incurring\nthe extra inference cost. Source code is available at:\nhttps://github.com/songw-zju/HASSC.", + "authors": "Song Wang, Jiawei Yu, Wentong Li, Wenyu Liu, Xiaolu Liu, Junbo Chen, Jianke Zhu", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Not All Voxels Are Equal: Hardness-Aware Semantic Scene Completion with Self-Distillation", + "main_content": "Introduction The accurate 3D perception of the surrounding environment is critical for both autonomous vehicles and robots [41, 43, 47, 64]. Early semantic scene completion works mainly focus on indoor scenes [3, 27, 34, 45]. For outdoor driving scenarios, SemanticKITTI [1] provides the first large benchmark, in which LiDAR-based methods [7, 40, 51, 53] occupy a dominant position with promising performance. Recently, vision-centric methods [17, 29, 31] have made encouraging progress in bird\u2019s-eye-view (BEV) perception. *Corresponding authors. Figure 1. Comparing our proposed hardness-aware semantic scene completion (HASSC) approach against previous semantic scene completion methods. We present an effective hard voxel mining (HVM) head with self-distillation during training. 
Researchers have further started to complete the entire 3D semantic scene with only the camera as input and obtain impressive results [4, 18, 30]. Generally, it is not trivial to infer semantic occupancy information in 3D dense space from the current camera observation alone. For vision-centric semantic scene completion models, the 2D backbone needs high-resolution images as input and consumes most of the GPU memory, as shown in Fig. 1(a). After the forward or backward 2D-to-3D transformation [22, 30, 32], the 3D dense voxel volume features are captured, while it is impossible to perform feature extraction at high resolution in 3D space due to the GPU memory limitation. The 3D backbone adopts convolution [4, 22] or self-attention [30] operators and extracts fine-grained features on reduced-resolution 3D feature maps. As voxel-wise outputs are required at full resolution, the completion head typically derives the final result through trilinear interpolation or transposed convolution directly, without considering the hardness of different voxels. Meanwhile, existing methods [4, 30] mainly formulate semantic scene completion as a voxel-wise classification problem and compute the loss for each voxel equally. Such a scheme ignores that the hardness in classifying various voxels in 3D space is quite different. Since more than 90% of the voxel space is empty, these empty voxels are easy to predict but require a large amount of computation during training. Moreover, voxels inside an object exhibit greater predictability than those located on the boundary. Building upon the success of hard sample mining in 2D dense prediction such as object detection [33, 42] and semantic segmentation [20, 28, 52], we are motivated to design a hard voxel mining strategy in 3D dense space. 
The difference between 2D pixel space and 3D voxel space lies not only in the extra computational cost due to the added dimension, but also in the large number of empty voxels that consume most of the memory and computation in 3D voxel space. To alleviate this problem, we propose the hard voxel mining (HVM) head with self-distillation, which selects hard voxels via the global hardness and refines them with the local hardness, as illustrated in Fig. 1(b). Specifically, the global hardness is based on the uncertainty in predicting each voxel so that we can update the selected voxels dynamically. As most of the voxels selected in such a way are empty at the early stages of training, local hardness based on geometric anisotropy is introduced to weight the losses of different voxels. The local geometric anisotropy (LGA) of a voxel is defined as the semantic difference from its neighbors. We adopt a linear mapping of LGA as the local hardness to weight the voxel-wise losses from the hard area and refine their predictions. Furthermore, self-distillation training is introduced to make the model outputs more stable and consistent. The teacher model is optimized by the exponential moving average (EMA) of the student model without an extra training process. Our presented self-distillation approach works well with the hard voxel mining head to jointly improve the completion performance. The main contributions of this work are summarized as follows: \u2022 We propose a hardness-aware semantic scene completion (HASSC) scheme that can be easily integrated into existing models without incurring extra cost for inference. \u2022 We take advantage of both the global and local hardness to find the hard voxels so that their predictions can be refined by weighted voxel-wise losses during training. \u2022 A self-distillation training strategy is introduced to improve semantic scene completion in an end-to-end training manner. 
\u2022 Extensive experiments are conducted to demonstrate the effectiveness of our presented method. 2. Related Work Semantic Scene Completion. The methods for outdoor semantic scene completion (SSC) can be divided into two categories according to their input: 1) LiDAR-based methods. The grid-based approaches [40, 50] employ the occupancy grid voxelized from the sparse LiDAR point cloud as input and achieve fast inference with a lightweight backbone. The point-based methods [7, 53] integrate the point-wise features within the voxel space and improve the model accuracy. Xia et al. [51] redesign the completion network architecture and obtain the best accuracy with 3D input. 2) Camera-based methods. MonoScene [4] is the pioneering work, which first explores SSC with a monocular camera image. The subsequent works construct a tri-perspective view plane [18] or design a dual-path transformer decoder [60] to improve the performance with a single image. VoxFormer [30] first estimates the coarse geometry with stereo images and obtains the non-empty proposals to perform deformable cross-attention [63] on single or multiple monocular images. Another category of methods utilizes implicit representations rather than voxel-based modeling, indicating the capability for extending SSC with both LiDAR [26, 39] and camera [12]. Since the camera is much cheaper and has greater application potential than LiDAR, we mainly focus on vision-centric methods in this paper. Hard Sample Mining for Dense Prediction. Hard sample mining was first explored in 2D object detection [33, 42], as the difficulty in detecting distinct objects from an image is quite different. In 2D image segmentation, Li et al. [28] propose a layer cascade method to segment regions with different hardness. Yin et al. [57] extract hard regions according to the loss values and re-train these areas for better performance. Kirillov et al. 
[20] start from the common ground of image rendering and segmentation, and refine the edge regions of objects in the image at the feature level. Deng et al. [9] use an auxiliary detection network to find hard areas at nighttime and conduct segmentation refinement in both training and inference. Xiao et al. [52] propose a pixel hardness learning method by making use of global and historical loss values. The above methods are all designed for images in 2D pixel space. Li et al. [23] propose local geometric anisotropy to weight voxel-wise cross-entropy losses, which does not perform well in outdoor scenes. In this paper, we propose to conduct effective hard sample mining in 3D dense voxel space for driving scenes. Self-Distillation. Knowledge distillation was first proposed to learn dark knowledge from well-trained large models for model compression [2, 16]. A series of subsequent works have been presented to improve the learning efficiency and capability of student models [10, 15, 55, 61]. In autonomous driving, knowledge distillation, especially cross-modality distillation, has shown great potential in improving model accuracy [6, 48, 62] and compressing models [8, 54, 58]. However, these methods usually need to first train a stronger teacher with more parameters or another modality, which incurs additional training costs. Inspired by successful applications of self-distillation in other fields, including 2D and 3D semantic segmentation [19, 21, 25, 59], we introduce a self-distillation training strategy for semantic scene completion without extra designed models. 
Figure 2. Overview of the Hardness-Aware Semantic Scene Completion (HASSC) pipeline. We take the camera images as input and construct a 3D feature volume by the Camera Encoder and 2D-to-3D Transform. With the fine-grained features provided by the 3D Backbone, we propose the hard voxel mining (HVM) head to make the model concentrate on hard voxels. The teacher-model has the same architecture as the student-model, and is updated by the exponential moving average (EMA) of the student. Stable predictions can be achieved by taking advantage of both self-distillation and the HVM head. 3. Method The semantic scene completion task predicts a dense semantic voxel volume V \u2208RX\u00d7Y\u00d7Z in front of the vehicle with only the observation from onboard sensors including camera and LiDAR. (X, Y, Z) represent the length, width and height of the 3D volume, respectively. Each voxel in V is either empty or occupied with one semantic class. We mainly consider the more challenging 2D input for its low cost and great application potential. 3.1. Overview In this work, we aim to provide a hard sample mining solution in voxel space for 3D dense prediction. The overall pipeline of Hardness-Aware Semantic Scene Completion (HASSC) is illustrated in Fig. 2. Our proposed hard voxel mining (HVM) head and self-distillation training strategy are independent of the specific network, and can be easily integrated into off-the-shelf methods. 
The typical semantic scene completion network, e.g., MonoScene [4] and VoxFormer [30], consists of a camera encoder, a 2D-to-3D transformation, a 3D backbone and a completion head. Camera Encoder. The camera encoder is made of an image backbone and a neck, which extracts a semantic and geometric feature F2D \u2208RH\u2032\u00d7W\u2032\u00d7D\u2032 from images under the perspective view. H\u2032 \u00d7 W\u2032 is the 2D feature resolution, and D\u2032 is the feature dimension. The extracted feature is the basis for constructing the 3D voxel volume in the following. 2D-to-3D Transformation. The 2D-to-3D transformation in semantic scene completion is similar to view transformation in BEV perception, and can be divided into two paradigms: forward projection [17, 22, 37] and backward projection [30, 31, 49]. With the extracted image feature F2D, we construct the 3D volume feature F3D \u2208RX\u2032\u00d7Y\u2032\u00d7Z\u2032\u00d7D by explicit geometry estimation and query-based 3D-to-2D back projection [30]. To reduce memory consumption and computational cost, the 3D feature resolution X\u2032 \u00d7 Y\u2032 \u00d7 Z\u2032 is smaller than X \u00d7 Y \u00d7 Z. D is the 3D feature dimension. 3D Backbone. The 3D backbone performs self-attention [30] or convolution [4, 22] on the voxel volume feature F3D from the 2D-to-3D transformation and obtains the fine-grained feature F3D fine \u2208RX\u2032\u00d7Y\u2032\u00d7Z\u2032\u00d7D. Moreover, the completion head uses F3D fine to obtain the final prediction Pfinal \u2208RX\u00d7Y\u00d7Z\u00d7C, where C is the number of total classes including empty and semantic categories. 3.2. Hard Voxel Mining Head The completion head in SSC usually up-samples the fine-grained feature F3D fine from the 3D backbone through trilinear interpolation or transposed convolution to obtain the completion result Pfinal at full resolution. Since this process does not take the hardness of each voxel into account, the performance is poor in some difficult regions. 
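Under the notation above, the tensor shapes flowing through the pipeline can be traced with toy sizes (illustrative values only; the actual resolutions are model-dependent):

```python
import numpy as np

# Toy shape walkthrough of the SSC pipeline described above.
Hp, Wp, Dp = 24, 80, 128            # 2D feature map: H' x W' x D'
Xp, Yp, Zp, D = 128, 128, 16, 64    # reduced 3D volume: X' x Y' x Z' x D
X, Y, Z, C = 256, 256, 32, 20       # full-resolution output, 19 classes + empty

F2D = np.zeros((Hp, Wp, Dp))          # camera encoder output
F3D = np.zeros((Xp, Yp, Zp, D))       # after 2D-to-3D transformation
F3D_fine = np.zeros((Xp, Yp, Zp, D))  # after the 3D backbone (same shape)
P_final = np.zeros((X, Y, Z, C))      # completion head output
upsample = (X // Xp, Y // Yp, Z // Zp)  # factor recovered by the head
```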
Our proposed hard voxel mining (HVM) head is based on the vanilla completion head and selects hard voxels during training to refine their predictions. In the following, we first introduce the definitions of the global hardness and the local hardness, and then explain the working flow of the proposed HVM head, as illustrated in Fig. 3. Figure 3. Illustration of the Hard Voxel Mining (HVM) Head. At the training stage, N hard voxels are selected with respect to their global hardness and random sampling. Then, we re-sample the corresponding fine-grained features and employ an MLP layer to refine their predictions, which are supervised by the ground truth and the local hardness. For inference, we directly utilize trilinear interpolation to obtain the final prediction. Global Hardness. With the fine-grained feature F3D fine, we first obtain the coarse prediction Pcoarse \u2208RX\u2032\u00d7Y\u2032\u00d7Z\u2032\u00d7C by a single-layer Multi-Layer Perceptron (MLP) and the softmax function. Let (i, j, k) denote the voxel index. For the prediction of each voxel p(i,j,k) \u2208R1\u00d7C in Pcoarse, we rank the probabilities of each class {p1, p2, ..., pC} in decreasing order. The largest probability among the C classes is represented as pa, and the second largest one is denoted as pb. Then, the global hardness Hglobal i,j,k of this voxel is defined as follows: \\mathcal {H}_{i,j,k}^{\\text {global}}=\\frac {1}{p^a-p^b}. (1) We obtain Hglobal \u2208RX\u2032\u00d7Y\u2032\u00d7Z\u2032 by computing all the predictions of voxels in Pcoarse. Hglobal measures the uncertainty of the semantic scene completion prediction between the classes a and b, which varies with the optimization of the network. 
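Eq. (1) can be computed in a vectorized way over the coarse prediction (a NumPy sketch; the epsilon guard against exact ties is our addition, not part of the paper):

```python
import numpy as np

def global_hardness(p_coarse, eps=1e-12):
    """Eq. (1): H_global = 1 / (p_a - p_b), where p_a and p_b are the two
    largest class probabilities of each voxel (p_coarse: [..., C])."""
    top2 = np.sort(p_coarse, axis=-1)[..., -2:]  # two largest probabilities
    p_b, p_a = top2[..., 0], top2[..., 1]
    return 1.0 / (p_a - p_b + eps)  # eps avoids division by zero on ties
```

A confident voxel (top-2 probabilities 0.9 vs. 0.05) gets low hardness, while an ambiguous one (0.4 vs. 0.35) gets high hardness, so ranking by this value surfaces uncertain voxels.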
The value of Hglobal indicates the global hardness of predicting a certain voxel during the overall training process. In this paper, we mainly employ Hglobal to select hard voxels and refine their predictions. Local Hardness. In 3D dense space, each voxel involves different geometric information, depending on its location. Since the voxels in the boundary region pose a greater challenge than those in the interior, the local information is crucial for instructing the model to find genuinely hard voxels. We adopt the local geometric anisotropy [23] (LGA) A \u2208RX\u00d7Y\u00d7Z as the basis of the local hardness Hlocal \u2208RX\u00d7Y\u00d7Z on the selected voxels. For each voxel v(i,j,k) in V, the LGA Ai,j,k is computed with its neighbors {v1, v2, ..., vM} at M different directions: \\mathcal {A}_{i,j,k}=\\sum _{m=1}^{M}\\left (v_{\\text {gt}} \\oplus v_{\\text {gt}}^{m}\\right ), \\label {eq:lga} (2) where vgt and vm gt are the semantic labels of v and vm (m = 1, ..., M), respectively. \u2295 denotes the exclusive disjunction operation, which returns 0 or 1 depending on whether v and vm have the same semantic label or not. Note that we calculate the LGA of all the voxels, including empty ones. In our implementation, we set M to 6 and compute the voxels in the up/down, front/back, and left/right directions. Figure 4. Illustration of Local Geometric Anisotropy (LGA). The upper figure gives examples of different LGA values (from LGA = 0 to LGA = 6), which are all from real scenarios. The lower figure shows the distribution of LGA values in SemanticKITTI [1]. We use dark and light colors to represent the proportion of non-empty and empty voxels in each LGA value category, respectively. The examples and distribution information on SemanticKITTI [1] for different LGA values are provided in Fig. 4. 
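Eq. (2) with M = 6 axis-aligned neighbours can be sketched in NumPy (illustrative; border voxels are compared against an edge-padded copy, an assumption of ours since the paper does not state its boundary handling):

```python
import numpy as np

def local_geometric_anisotropy(labels):
    """Eq. (2): count, for every voxel, how many of its 6 axis-aligned
    neighbours carry a different semantic label (labels: int array [X, Y, Z])."""
    padded = np.pad(labels, 1, mode="edge")
    lga = np.zeros_like(labels)
    for axis in range(3):
        for shift in (-1, 1):
            # shifted view whose entry (i, j, k) is the neighbour of voxel (i, j, k)
            neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            lga += (labels != neighbour).astype(lga.dtype)
    return lga
```

An isolated voxel surrounded by a different class scores 6 (a hard boundary case), while a voxel deep inside an object scores 0, matching the examples in Fig. 4.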
Then, the corresponding local hardness Hlocal i,j,k is defined as below: \\mathcal {H}_{i,j,k}^{\\text {local}}=\\alpha +\\beta \\mathcal {A}_{i,j,k}, \\label {eq:loc_hard} (3) where \u03b1 and \u03b2 are the coefficients linearly mapping A onto Hlocal. Hlocal measures the semantic difference between the selected voxel and its neighbors. The value of Hlocal reflects the geometric position of the voxel on the object. We adopt the local hardness Hlocal to weight the selected voxels and make the model focus on more challenging voxel positions. Hard Voxel Selection. During training, we select N hard voxels from the coarse prediction Pcoarse with the global hardness Hglobal. Since N \u226aX\u2032 \u00d7 Y\u2032 \u00d7 Z\u2032, directly selecting the N voxels with the largest Hglobal in Pcoarse may cause the SSC network to over-fit to local areas at the beginning. Motivated by PointRend [20], we firstly over-generate proposal voxels by randomly sampling tN voxels (t > 1) with a homogeneous distribution in 3D dense space. Secondly, \u03c9N hard voxels (\u03c9 \u2208[0, 1]) are selected from the tN proposals by sorting the global hardness, which is calculated from the corresponding coarse prediction region in Pcoarse. Thirdly, the remaining (1 \u2212\u03c9)N voxels are randomly sampled from the 3D space to prevent over-fitting during training. Finally, we obtain N hard voxels with the coordinates Vhard \u2208RN\u00d73. Voxel-wise Refinement Module. Given the N selected hard voxels, we re-sample their corresponding features from the fine-grained 3D volume features F3D fine by the coordinates Vhard and obtain Fhard fine \u2208RN\u00d7D. 
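The three-step hard voxel selection can be sketched as follows (a NumPy sketch; the default t and omega are placeholders of ours, not the paper's settings):

```python
import numpy as np

def select_hard_voxels(h_global, n, t=3, omega=0.75, rng=None):
    """Select n hard voxels: over-generate t*n random proposals, keep the
    omega*n proposals with the largest global hardness, and fill the rest
    with random voxels to avoid over-fitting to local areas."""
    rng = np.random.default_rng() if rng is None else rng
    flat = h_global.ravel()
    proposals = rng.choice(flat.size, size=t * n, replace=False)
    ranked = proposals[np.argsort(flat[proposals])[::-1]]
    k = int(omega * n)
    hard = ranked[:k]                                    # top omega*n by hardness
    pool = np.setdiff1d(np.arange(flat.size), hard)
    rest = rng.choice(pool, size=n - k, replace=False)   # random remainder
    idx = np.concatenate([hard, rest])
    return np.stack(np.unravel_index(idx, h_global.shape), axis=1)  # V_hard: [n, 3]
```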
Then, the hard voxels are refined with a lightweight network consisting of MLPs like PointNet [38], since the voxel-wise prediction can be considered as a point-wise segmentation problem: \\mathcal {P}_{\\text {refine}}^{\\text {hard}}=\\text {MLP}(\\mathcal {F}_{\\text {fine}}^{\\text {hard}}), (4) where Phard refine is the refined prediction of the selected hard voxels. In the training process, we need to re-sample the corresponding semantic labels of the hard voxels from the ground truth. As the majority of the 3D dense space is empty, we observe that the selected hard voxels are also predominantly empty ones at the beginning. In fact, part of these empty voxels are not inherently challenging to distinguish. If we treat the selected hard voxels equally, the SSC network may focus on samples that are not actually difficult in the early stages of training. Therefore, we adopt the predefined local hardness Hlocal to weight the selected hard voxels and make the SSC network, with the auxiliary MLP-based refinement head, concentrate on the harder samples at local positions. Specifically, the local hardness of the selected hard voxels is computed by Eq. 2 and Eq. 3. Then, the hard voxel mining loss is calculated as follows: \\mathcal {L}_{\\text {s-hvm}}=\\frac {1}{N} \\sum _{n=1}^{N} \\mathcal {H}_{n}^{\\text {local}} \\cdot \\mathrm {CE}(v_{\\text {refine}}^{n}, v_{\\text {gt}}^{n}), \\label {eq:hvm} (5) where CE(\u00b7, \u00b7) denotes the cross entropy loss function. vn refine and vn gt are the refined prediction and the semantic label of the selected n-th hard voxel, respectively. At the inference stage, we directly use trilinear interpolation to obtain the final completion result Pfinal without introducing extra computational burden, as illustrated in Fig. 3. 3.3. Self-Distillation To further train a robust model with higher performance without an extra well-trained teacher model, we propose to perform self-distillation with the hard voxel mining head. 
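The mean-teacher update behind this self-distillation (Eq. (6)) can be sketched over a parameter dictionary (an illustrative sketch; a real model would apply the same rule per tensor):

```python
def ema_update(theta_teacher, theta_student, t):
    """Eq. (6): teacher parameters as an exponential moving average of the
    student, with gamma = min(1 - 1/(t+1), 0.99) warming up from 0."""
    gamma = min(1.0 - 1.0 / (t + 1), 0.99)
    return {name: gamma * theta_teacher[name] + (1.0 - gamma) * theta_student[name]
            for name in theta_teacher}
```

At step 0 gamma is 0, so the teacher starts as a copy of the student; as training proceeds gamma saturates at 0.99 and the teacher becomes a slowly moving average, which is what makes its soft labels stable.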
The teacher model in our HASSC scheme has the same network architecture as the student model, as shown in Fig. 2. Following [14, 46], we construct a mean teacher by the exponential moving average (EMA) to achieve better stability and consistency between iterations. During the training process, the parameters of the teacher model share the same initialization as the student at step 0 and are updated at the following steps as below: \\theta _{t+1}^{\\text {Teacher}}=\\gamma \\theta _{t}^{\\text {Teacher}}+(1-\\gamma ) \\theta _{t+1}^{\\text {Student}}, \\label {eq:ema} (6) where \u03b3 = min(1 \u2212 1/(t+1), 0.99). \u03b8Teacher t and \u03b8Student t are the learnable parameters of the teacher and student model at step t. We only optimize the parameters of the student model \u03b8Student and update the teacher network by Eq. 6. Since the teacher network has the same hard voxel mining head as the student, N hard voxels Vhard-T \u2208RN\u00d73 can be obtained during training. We use Vhard-T to sample the final result Pfinal from the student branch. Then, the teacher-guided hard voxel mining loss Lt-hvm is computed by Eq. 5 with the corresponding local hardness, final prediction and ground truth. Given the large number of voxels, we employ Lt-hvm to make the hard voxel selection by the HVM head in the student model more stable and consistent. Moreover, we adopt the Kullback\u2013Leibler divergence (DKL) to instruct the student model to learn from the online soft labels PTeacher final provided by the teacher branch as follows: \\mathcal {L}_{\\text {distill}}=\\lambda e^{\\mu } \\cdot \\boldsymbol {D}_{\\mathrm {KL}}\\left (\\mathcal {P}_{\\text {final}}^{\\text {Teacher}} \\| \\mathcal {P}_{\\text {final}}\\right ), \\label {eq:kld} (7) where \u03bb is the weight coefficient for distillation. \u00b5 \u2208[0, 1] is the mean intersection over union (mIoU) value between the prediction of the current frame by the teacher model PTeacher final and the corresponding ground truth Vgt. 3.4. 
Training and Inference Overall Loss Function for Training. In this work, we follow the common settings [4, 30] and treat semantic scene completion (SSC) as a voxel-wise classification problem. Overall, the total training loss of our proposed HASSC is composed of three terms as below: \\mathcal {L}_{\\text {total}}=\\mathcal {L}_{\\text {ssc}}+\\mathcal {L}_{\\text {hvm}}+\\mathcal {L}_{\\text {distill}}, (8) where Lssc is the commonly used loss for SSC. Lssc consists of the weighted cross entropy loss Lwce and the scene-class affinity losses as follows: \\mathcal {L}_{\\text {ssc}}=\\mathcal {L}_{\\text {wce}}+\\mathcal {L}_{\\text {sem}}+\\mathcal {L}_{\\text {geo}}, (9) where Lsem and Lgeo are the scene-class affinity losses optimized for semantics and geometry, respectively. Additionally, the hard voxel mining loss Lhvm is made of Ls-hvm and Lt-hvm: \\mathcal {L}_{\\text {hvm}}=\\mathcal {L}_{\\text {s-hvm}}+\\delta \\cdot \\mathcal {L}_{\\text {t-hvm}}, (10) where \u03b4 is the trade-off coefficient between the student and the teacher. Inference. The student branch is well optimized during training; it not only digs out hard samples and refines them with fine-grained features but also makes use of the soft labels provided by the teacher branch. During the inference process, we only need to preserve the student branch without incurring extra computational cost. 4. Experiments 4.1. Setup Dataset. 
The SemanticKITTI dataset [1] is the first large semantic scene completion benchmark for outdoor scenes, which contains LiDAR scans and front camera images from 5 \fMethods VoxFormer-S [30] HASSC VoxFormer-S VoxFormer-T [30] HASSC VoxFormer-T StereoScene\u2020 [22] HASSC StereoScene Modality Camera Camera Camera Camera Camera Camera Range S M L S M L S M L S M L S M L S M L IoU (%)\u2191 65.35 57.54 44.02 65.54 57.99 44.82 65.38 57.69 44.15 66.05 58.01 44.58 65.70 56.84 43.66 65.52 57.01 44.55 mIoU (%)\u2191 17.66 16.48 12.35 18.98 17.95 13.48 21.55 18.42 13.35 24.10 20.27 14.74 23.27 21.15 15.24 24.43 22.17 15.88 car (3.92%) 39.78 35.24 25.79 42.37 36.78 27.23 44.90 37.46 26.54 45.79 37.70 27.33 47.05 43.52 31.15 46.47 43.02 30.64 bicycle (0.03%) 3.04 1.48 0.59 2.72 2.26 0.92 5.22 2.87 1.28 4.23 2.11 1.07 2.38 2.15 1.05 4.20 2.63 1.20 motorcycle (0.03%) 2.84 1.10 0.51 4.49 1.63 0.86 2.98 1.24 0.56 5.64 2.03 1.14 4.78 2.84 1.55 5.26 3.34 0.91 truck (0.16%) 7.50 7.47 5.63 6.25 11.00 9.91 9.80 10.38 7.26 22.89 21.90 17.06 18.72 22.48 17.55 24.94 34.73 23.72 other-veh. 
(0.20%) 8.71 4.98 3.77 14.77 8.85 5.61 17.21 10.61 7.81 22.71 13.52 8.83 17.33 13.79 9.26 20.61 14.24 7.77 person (0.07%) 4.10 3.31 1.78 5.11 4.89 2.80 4.44 3.50 1.93 5.12 4.18 2.25 6.31 4.37 2.17 6.06 3.58 1.79 bicyclist (0.07%) 6.82 7.14 3.32 6.87 8.57 4.71 2.65 3.92 1.97 4.09 6.58 4.09 7.70 4.75 2.30 8.22 5.65 2.47 motorcyclist (0.05%) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 road (15.30%) 72.40 65.74 54.76 74.49 68.04 57.05 75.45 66.15 53.57 78.51 70.02 57.23 79.24 74.16 61.86 80.61 75.53 62.75 parking (1.12%) 10.79 18.49 15.50 15.49 21.23 15.90 21.01 23.96 19.69 29.43 26.69 19.89 21.33 21.19 17.02 25.21 25.95 20.20 sidewalk (11.13%) 39.35 33.20 26.35 42.69 36.32 28.25 45.39 34.53 26.52 51.69 38.83 29.08 50.71 41.86 30.58 52.68 43.61 32.40 other-grnd(0.56%) 0.00 1.54 0.70 0.02 2.38 1.04 0.00 0.76 0.42 0.00 1.55 1.26 0.00 1.12 0.85 0.00 0.18 0.51 building (14.10%) 17.91 24.09 17.65 22.78 27.30 19.05 25.13 29.45 19.54 27.99 30.81 20.19 26.98 32.52 22.71 29.09 31.68 22.90 fence (3.90%) 12.98 10.63 7.64 9.81 8.70 6.58 16.17 11.15 7.31 17.09 11.65 7.95 22.50 14.26 8.73 20.88 13.32 8.67 vegetation (39.3%) 40.50 34.68 24.39 40.49 35.53 25.48 43.55 38.07 26.10 44.68 38.93 27.01 40.20 36.10 24.81 40.29 36.44 26.27 trunk (0.51%) 15.81 10.64 5.08 14.93 11.25 6.15 21.39 12.75 6.10 22.22 14.11 7.71 21.45 15.28 7.17 21.65 14.92 7.14 terrain (9.17%) 32.25 35.08 29.96 36.66 38.28 32.94 42.82 39.61 33.06 47.04 41.37 33.95 45.75 43.67 34.87 48.50 46.95 38.10 pole (0.29%) 14.47 11.95 7.11 15.25 12.48 7.68 20.66 15.56 9.15 18.95 14.76 9.20 20.43 18.95 10.66 18.67 16.34 9.00 traf.-sign (0.08%) 6.19 6.29 4.18 5.52 5.61 4.05 10.63 8.09 4.94 9.89 8.44 4.81 9.21 8.91 5.19 10.88 9.08 5.23 Table 1. Quantitative comparisons against the selected baseline methods on the validation set of SemanticKITTI [1]. \u2020 denotes the results are reproduced from the original implementation. 
\u201cS\u201d, \u201cM\u201d and \u201cL\u201d represent the short range (12.8 \u00d7 12.8 \u00d7 6.4m3), middle range (25.6 \u00d7 25.6 \u00d7 6.4m3) and long/full range (51.2 \u00d7 51.2 \u00d7 6.4m3), respectively. The improved results compared to the corresponding baselines are marked in blue. KITTI Odometry Benchmark [11]. The ground truth is generated from the accumulated LiDAR semantic segmentation labels and represented as 256 \u00d7 256 \u00d7 32 voxel grids with a resolution of 0.2m. Each voxel grid is annotated as one of 19 semantic classes or 1 empty class. We adopt the same setting as in [1, 11] and split the total 22 sequences into (00-07, 09-10) / (08) / (11-21) for the training/validation/test sets. Evaluation Metrics. The mean intersection over union (mIoU) on 19 semantic classes is reported to evaluate the quality of semantic scene completion (SSC). Moreover, we adopt intersection over union (IoU) to measure the performance of class-agnostic scene completion (SC), which reflects the 3D geometric quality with 2D camera images as input. Besides, we calculate the IoU and mIoU at different ranges from the ego car on the validation set, covering volumes of 12.8 \u00d7 12.8 \u00d7 6.4m3 (short range, S), 25.6 \u00d7 25.6 \u00d7 6.4m3 (middle range, M), and 51.2 \u00d7 51.2 \u00d7 6.4m3 (long/full range, L). In practice, the perception results at closer range are more critical to vehicle safety. Implementation Details. Our proposed HASSC method is designed as a generic training scheme to improve the performance of existing methods in hard regions. To demonstrate the efficacy of our approach, we choose the state-of-the-art methods VoxFormer-S [30], VoxFormer-T [30] (code: https://github.com/NVlabs/VoxFormer) and StereoScene [22] as our baseline models. VoxFormer-S only adopts the current frame from the left camera as input, while VoxFormer-T combines the previous 4 images.
StereoScene (code: https://github.com/Arlo0o/StereoScene) uses both the left and right camera images to train the model. The input image size is set to 1220 \u00d7 370 and 1280 \u00d7 384 for VoxFormer and StereoScene, respectively. Other training settings are kept the same as the corresponding baselines. The number of selected hard voxels during training (N) is set to 4096. The coefficients (\u03b1, \u03b2) of the linear transformation from A to Hlocal are set to 0.2 and 1.0, respectively. The distillation weight \u03bb is set to 48. The trade-off coefficient \u03b4 is set to 0.1. All the models in experiments are trained on four GeForce RTX 4090 GPUs with 24G memory, and the inference speed is reported on a single GeForce RTX 4090 GPU. More implementation details with different baselines are given in our supplementary material. 4.2. Performance Quantitative Comparisons. We first present the quantitative comparison with our baseline models on the validation set of SemanticKITTI. As shown in Tab. 1, HASSC effectively improves the accuracy over the baseline methods, including VoxFormer-S (+1.13%mIoU, +0.80%IoU), VoxFormer-T (+1.39%mIoU, +0.43%IoU) and StereoScene (+0.64%mIoU, +0.89%IoU) at full range. [Figure 5. Visual results of our method (HASSC-VoxFormer-T) and the state-of-the-art camera-based methods on the validation set of SemanticKITTI. The left shows the perspective view image from the left camera, which is the input for model training and inference. The right is the ground truth and the corresponding predicted semantic scenes from these methods.] Our HASSC-VoxFormer-T obtains a more obvious improvement at the closer ranges, including short (+2.55%mIoU) and middle (+1.85%mIoU), to ensure the safety of autonomous vehicles.
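For reference, the IoU and mIoU numbers discussed in these comparisons reduce to simple ratios of voxel-wise true positives, false positives, and false negatives; the sketch below uses a tiny hypothetical two-class example, not SemanticKITTI data.

```python
# Per-class intersection-over-union and its mean over semantic classes,
# computed from voxel-wise confusion counts (tp, fp, fn).

def iou(tp, fp, fn):
    denom = tp + fp + fn
    return tp / denom if denom > 0 else 0.0

def miou(per_class_counts):
    # per_class_counts: one (tp, fp, fn) tuple per semantic class.
    scores = [iou(tp, fp, fn) for tp, fp, fn in per_class_counts]
    return sum(scores) / len(scores)

counts = [(10, 5, 5), (5, 10, 5)]  # two hypothetical classes
print(miou(counts))  # 0.375
```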
Besides, it is worth noting that HASSC-VoxFormer-S with single image input even outperforms VoxFormer-T with 5 images (13.48% vs. 13.35%). Then, we submit our prediction results to the website of SemanticKITTI for the online evaluation on the hidden test set. In Tab. 2, we compare our approach against the state-of-the-art camera-based methods. HASSC shows consistent improvements with both VoxFormer-S (+1.14%mIoU) and VoxFormer-T (+0.97%mIoU). Note that HASSC-VoxFormer-S outperforms all the camera-based methods with single image input. Comparisons on detailed semantic categories and further discussions about our method are provided in the supplementary material.
Table 2. Quantitative comparisons with the state-of-the-art camera-based methods on the hidden test set of SemanticKITTI. \u2217denotes that the method is converted to a camera-based model by MonoScene [4]. Columns: Methods | SSC Input | Pub. | IoU (%)\u2191 | mIoU (%)\u2191
LMSCNet\u2217[40] | \u02c6xocc3D | 3DV 2020 | 31.38 | 7.07
3DSketch\u2217[5] | xrgb, \u02c6xTSDF | CVPR 2020 | 26.85 | 6.23
AICNet\u2217[24] | xrgb, \u02c6xdepth | CVPR 2020 | 23.93 | 7.09
JS3C-Net\u2217[53] | \u02c6xpts | AAAI 2021 | 34.00 | 8.97
MonoScene [4] | xrgb | CVPR 2022 | 34.16 | 11.08
TPVFormer [18] | xrgb | CVPR 2023 | 34.25 | 11.26
OccFormer [60] | xrgb | ICCV 2023 | 34.53 | 12.32
NDC-Scene [56] | xrgb | ICCV 2023 | 36.19 | 12.58
VoxFormer-S [30] | xrgb | CVPR 2023 | 42.95 | 12.20
VoxFormer-T [30] | xrgb \u00d7 5 | CVPR 2023 | 43.21 | 13.41
HASSC-VoxFormer-S | xrgb | \u2013 | 43.40 | 13.34
HASSC-VoxFormer-T | xrgb \u00d7 5 | \u2013 | 42.87 | 14.38
Qualitative Comparisons. To further investigate the effectiveness of our proposed HASSC, we visualize the predictions of different models on the validation set of SemanticKITTI. As shown in Fig. 5, our method (HASSC-VoxFormer-T) performs better at junctions of complex classes (e.g., road, sidewalk and truck) compared to other camera-based approaches. This also corresponds to the improvements in the quantitative evaluation (road +3.66%mIoU, sidewalk +2.56%mIoU, truck +9.80%mIoU) of our method, which demonstrates the effectiveness of the proposed hard voxel mining scheme. 4.3. Ablation Studies In this section, we perform exhaustive ablation experiments on the validation set of SemanticKITTI with VoxFormer-T [30] as the baseline model for fair comparison. Ablation on HASSC Scheme. Firstly, we provide the ablation of the proposed HASSC scheme. As illustrated in Tab. 3, the first row is the result reproduced with the original implementation of VoxFormer-T. It can be observed that using the global hardness Hglobal and the local hardness Hlocal individually obtains only limited performance improvements; only the combination of Hglobal and Hlocal can effectively improve model performance. With the teacher-guided hard voxel mining (T-HVM), HASSC achieves stable improvements in both semantics and geometry. The self-distillation (T-Distill) from the teacher-branch can provide consistent supervision with reliable soft labels and further improve the model accuracy when coupled with the HVM head. Furthermore, we visualize the sum of the local hardness of N selected voxels during training. [Figure 6. Visualization of the sum of the local hardness change during training on both student and teacher branches.] Table 3. Ablation study on our proposed HASSC scheme (columns: Global, Local, T-HVM, T-Distill, IoU (%)\u2191, mIoU (%)\u2191; rows): 44.16 13.33 | \u2713 43.89 13.30 | \u2713 44.00 13.40 | \u2713 \u2713 43.98 13.91 | \u2713 \u2713 \u2713 44.12 14.03 | \u2713 44.38 13.65 | \u2713 \u2713 \u2713 \u2713 44.58 14.74. Table 4. Comparison with the baseline model on training and inference efficiency: Params (M) 57.91 (VoxFormer-T) vs. 58.43 (HASSC-VoxFormer-T); Inference Speed (ms) 724.05 vs. 720.84; IoU (%)\u2191 44.16 vs. 44.58; mIoU (%)\u2191 13.33 vs. 14.74. As shown in Fig. 6, the sum of local hardness continues to increase on both the student and teacher branches, which means the process of selecting voxels based on the learned global hardness Hglobal is consistent with the local one Hlocal.
Comparison of Training and Inference Efficiency. The model complexity analysis regarding training and inference is provided in Tab. 4. Compared with the vanilla VoxFormer-T, our proposed HASSC introduces only minimal overheads (+0.90% parameters) during training but improves relative performance by 10.58%, without incurring extra cost for inference (724.05ms vs. 720.84ms). Ablation on Hard Voxel Selection. As there are in total 262,144 (128 \u00d7 128 \u00d7 16) voxels in the coarse prediction Pcoarse, we need to find an appropriate number (N) of hard voxels for refinement. The ablation experiments on the number (N) of selected hard voxels are shown in Tab. 5. These experiments are conducted without the distillation loss Ldistill from the teacher-branch. A small N yields a minimal improvement, while a large one may include relatively easy voxels, resulting in erroneous optimization. When N is set to 4096, the HVM head achieves the best performance. Ablation on Self-Distillation. We present an ablation on the weight (\u03bb) of the self-distillation loss Ldistill. As illustrated in Tab. 6, an inappropriate distillation loss weight may lead to inferior model performance (e.g., when \u03bb = 12). We set \u03bb to 48 in order to better integrate it into the hard voxel mining head. Table 5. Ablation study on the number of selected hard voxels: Voxel Numbers (N) 0 1024 2048 4096 8192; IoU (%)\u2191 44.16 44.01 43.92 44.12 44.09; mIoU (%)\u2191 13.33 13.52 13.64 14.03 13.74. Table 6. Ablation study on the weight of self-distillation from the teacher model: Distill Weight (\u03bb) 0 12 24 48 96; IoU (%)\u2191 44.12 44.06 44.25 44.58 44.51; mIoU (%)\u2191 14.03 13.80 14.23 14.74 14.39. Table 7. Comparison with other hard sample mining schemes (re-implemented with VoxFormer-T for fair comparison): Methods | Hardness | IoU (%)\u2191 | mIoU (%)\u2191; PALNet [23] Local 44.28 13.28; PointRend [20] Global 44.29 13.57; Xiao et al. [52] Global 44.10 13.33; Ours Global & Local 44.58 14.74.
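The top-N hard voxel selection ablated above can be pictured as follows; how HASSC actually fuses global and local hardness is defined earlier in the paper, so the simple additive combination here is only an assumption for illustration.

```python
import heapq

# Pick the indices of the N voxels with the largest combined hardness.
# The additive fusion of global and local hardness is illustrative only.

def select_hard_voxels(global_hardness, local_hardness, n):
    combined = [g + l for g, l in zip(global_hardness, local_hardness)]
    return heapq.nlargest(n, range(len(combined)), key=lambda i: combined[i])

g = [0.1, 0.9, 0.4, 0.7]
l = [0.2, 0.1, 0.8, 0.1]
print(select_hard_voxels(g, l, n=2))  # [2, 1]
```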
Comparison with Other Schemes. Finally, we provide a comprehensive comparison with existing hard sample mining schemes. We re-implement PALNet [23], PointRend [20] and Xiao et al. [52] with VoxFormer-T [30] to facilitate a fair comparison. PALNet [23] was originally designed for indoor semantic scene completion and only considers local geometry. PointRend [20] and Xiao et al. [52] are designed for 2D image segmentation and use only the globally updated information from the network optimization process. As shown in Tab. 7, they obtain marginal performance enhancements in the 3D space of large-scale scenes. Our proposed HASSC outperforms all the reference methods. 5. Conclusion In this paper, we adhere to the principle that not all voxels are equal and propose hardness-aware semantic scene completion (HASSC). The hard voxel mining head consists of a hard voxel selection and a voxel-wise refinement module, which combines global and local hardness to optimize the network on difficult regions. Additionally, a self-distillation training strategy is introduced to improve the stability and consistency of completion. We have conducted extensive experiments to demonstrate that HASSC can effectively improve existing semantic scene completion models without incurring overheads during inference. Acknowledgments This work is supported by the National Natural Science Foundation of China under Grant 62376244. It is also supported by the Information Technology Center and State Key Lab of CAD&CG, Zhejiang University." + }, + { + "url": "http://arxiv.org/abs/2403.08040v1", + "title": "MicroT: Low-Energy and Adaptive Models for MCUs", + "abstract": "We propose MicroT, a low-energy, multi-task adaptive model framework for\nresource-constrained MCUs. We divide the original model into a feature\nextractor and a classifier. The feature extractor is obtained through\nself-supervised knowledge distillation and further optimized into part and full\nmodels through model splitting and joint training. 
These models are then\ndeployed on MCUs, with classifiers added and trained on local tasks, ultimately\nperforming stage-decision for joint inference. In this process, the part model\ninitially processes the sample, and if the confidence score falls below the set\nthreshold, the full model will resume and continue the inference. We evaluate\nMicroT on two models, three datasets, and two MCU boards. Our experimental\nevaluation shows that MicroT effectively improves model performance and reduces\nenergy consumption when dealing with multiple local tasks. Compared to the\nunoptimized feature extractor, MicroT can improve accuracy by up to 9.87%. On\nMCUs, compared to the standard full model inference, MicroT can save up to\nabout 29.13% in energy consumption. MicroT also allows users to adaptively\nadjust the stage-decision ratio as needed, better balancing model performance\nand energy consumption. Under the standard stage-decision ratio configuration,\nMicroT can increase accuracy by 5.91% and save about 14.47% of energy\nconsumption.", + "authors": "Yushan Huang, Ranya Aloufi, Xavier Cadet, Yuchen Zhao, Payam Barnaghi, Hamed Haddadi", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "MicroT: Low-Energy and Adaptive Models for MCUs", + "main_content": "Introduction The deployment of Deep Neural Networks (DNNs) on resourceconstrained devices, known as Tiny Machine Learning (TinyML), has garnered widespread attention across both academia and industry [1\u20133]. Unlike traditional Machine Learning (ML), TinyML necessitates the reduction in size and computational demand of DNNs to align with the stringent resource limitations of the device. These devices are typically deployed in specific environments to perform localized tasks. 
However, due to constrained communication and privacy concerns [4, 5], the cloud often lacks data specific to these localized tasks [6, 7]. Therefore, conventional cloud-trained, locally deployed, task-specific models may not be sufficient for the diverse requirements of multiple local tasks [8, 9]. This disparity presents a significant challenge to the application of TinyML. Inspired by these challenges, several methods have been developed in the community. Popular approaches can be categorized into three types: (i) Federated Learning (FL) [10], (ii) Split Learning (SL) [11], and (iii) Multitask Learning (MTL) [12]. FL designs a collaborative approach between the cloud and local resources, training a global model in the cloud and fine-tuning the complete or partial model locally to adapt to specific local tasks. SL also involves cooperative training, with the model divided between the cloud and local devices. However, both FL and SL require regular and frequent communication between the cloud and local devices [13, 14], which can be challenging in some real-world situations. For example, in deep-sea environments [15], limited communication may impede joint training and the transmission of extensive training parameters [16]. In addition, transmitting model parameters incurs excessive communication and energy costs [17, 18]. MTL learns multiple specific local tasks in the cloud and shares portions of the model\u2019s structure, but still needs a small portion of local data [19]. Despite these advancements, none of the existing methods can fully address the multi-task challenges in Microcontroller Units (MCUs), thus limiting their broader application. Recently, novel model compression methods have been proposed to further reduce the energy consumption of running DNNs on MCUs, such as model pruning [20], model quantization [21], and knowledge distillation (KD) [22]. Model pruning involves cutting off unimportant parameters in the model. 
However, in the context of TinyML, where models are inherently small, pruning might significantly decrease accuracy [23]. Model quantization converts the data type of model parameters from FLOAT32 to more efficient formats like INT8, offering a balance between energy savings and acceptable accuracy loss. KD involves a smaller model learning from a larger one, thereby achieving both model compression and energy reduction. However, these methods are usually executed in the cloud before deployment on MCUs [24\u201326]. This approach hinders their adaptability in locally and dynamically balancing energy consumption with model performance when processing multiple local tasks, limiting their application and effectiveness on MCUs. Contribution. In this study, we introduce MicroT, a low-energy and multi-task adaptive model framework designed for MCUs. MicroT incorporates a powerful and tiny feature extractor and a classifier locally trained for multiple local tasks. The feature extractor, developed via self-supervised knowledge distillation (SSKD), inherits and learns general features from a teacher model, thereby improving model performance and reducing size. The classifier, trained locally on the MCU, leverages these general features, enabling MicroT to utilize simple classifiers to achieve low-energy training and sufficient model performance. To further reduce energy consumption, MicroT employs strategies including model segmentation, joint training, and joint inference (stage-decision). Additionally, MicroT offers user-configurable stage-decision ratios and thresholds, providing a way to adaptively adjust the balance between model performance and energy cost. To demonstrate its feasibility, we implement and evaluate MicroT on two models, three datasets, and two MCU boards. 
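To make the FLOAT32-to-INT8 quantization mentioned above concrete, here is a minimal sketch of symmetric per-tensor quantization; real MCU deployments would rely on framework tooling (e.g., post-training quantization), and the weight values below are hypothetical.

```python
# Symmetric affine quantization: map floats into [-128, 127] with a
# single scale derived from the largest absolute weight.

def quantize_int8(weights):
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03]
q, scale = quantize_int8(w)
print(q)  # [50, -127, 3]
```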
These experiments involve evaluating the model performance for multiple local tasks, the effectiveness of model segmentation and stage-decision, and the system cost. The results show that MicroT effectively improves model performance for multiple local tasks on MCUs and can achieve low-energy classifier training and model inference. MicroT is practical as it can be easily extended to various models and MCUs. Compared to unoptimized feature extractors, MicroT can improve model performance by up to 9.87%. On MCUs, compared to standard full-model inference, MicroT can save up to about 29.13% of energy cost. With the standard stage-decision ratio of 0.5, MicroT can improve model performance by 5.91% and save about 14.47% of energy cost. (We will release our code and design as open source upon acceptance.) 2 Background and Related Work In this section, we provide the necessary background to comprehend MicroT. Self-Supervised Learning. Self-supervised learning (SSL) automatically generates training signals by designing proxy tasks, such as predicting a part of an image or the next word in a text, allowing algorithms to learn useful data representations from unlabeled data [27]. A major advantage of SSL is its independence from labeled data, which also enhances the model\u2019s generalization capabilities. Recently, self-supervised models (e.g., DinoV2 [28]) have reached, and sometimes surpassed, the performance of their supervised learning counterparts, with some studies currently utilizing the more generalized embedding features extracted from these models for specific tasks [29, 30]. We believe that such general features also have application potential for MCU multi-task problems. TinyML and Model Compression. TinyML focuses on resource-constrained hardware. Model compression is essential for efficient TinyML, yielding tiny models compatible with devices that have constrained computational capacity and minimal storage. 
Model compression utilizes several techniques such as pruning [31], quantization [21], and KD [22]. Typically, KD aims at learning the teacher model\u2019s logit output [32], class distribution [33], or embedding features [34]. When the teacher model\u2019s extracted features are both advanced and general, learning from these embedding features equips the student model to excel with new and unseen data [35]. This ability is particularly advantageous for addressing the multi-task challenge on MCUs. Joint Training and Inference. Joint training involves the simultaneous training of multiple models or tasks within an integrated framework. This approach facilitates the sharing of information and learning of features across different models or tasks, enhancing overall model performance [36, 37], and has been widely used for multi-task problems [38, 39]. During the inference stage, joint inference enables collaboration among various models, integrating their outputs to obtain more comprehensive and accurate outcomes. This approach is particularly important in applications that require rapid and accurate responses. Leveraging the advantages of joint training and inference, we optimize them for the multi-task challenge on MCUs. Rather than employing two separate, independent models, our strategy involves using a large and a small model with shared parameters, thereby minimizing memory demands for MCU deployment. Special emphasis is placed on the performance of the smaller model to decrease reliance on the larger one, which contributes to reduced energy consumption during joint inference. Transfer Learning and Multi-Tasking. Transfer learning involves pre-training models on data-rich, large-scale tasks to acquire universal features, followed by transferring the knowledge to specific tasks [40]. This approach is particularly effective for scenarios demanding complex task processing with limited resources [41, 42]. In the context of MCUs, Wu et al. 
[43] propose EMO, an approach for emotion recognition across various task objects in practical applications. EMO trains a feature extractor in the cloud and fine-tunes a K-means-based classifier on the MCU. However, our evaluation reveals that EMO underperforms in processing complex images and tasks, achieving an average accuracy of only 24.5% (details in Section 5.4). These results may stem from the feature extractor\u2019s lack of universality. Nonetheless, EMO still inspires the application of transfer learning to multi-task scenarios, as training only the classifier is efficient for resource-constrained MCUs (details in Section 5.6). 3 MicroT Design Assumptions. Our study focuses on image data. We consider a realistic scenario: (i) On the cloud, there are public datasets available, but no local target task datasets, while the local MCU can access labeled data for various local target tasks. For example, on the cloud, the ImageNet and Sea Animals Image datasets [44] are available, but there are no datasets for local target tasks such as pet recognition [45]. (ii) The communication between the MCU and the cloud is unstable, as might be encountered in challenging environments like the deep sea [46]. This unstable communication makes learning approaches like FL challenging to apply in these scenarios. (iii) The local MCUs are energy-cost-sensitive, such as systems associated with energy harvesting devices [47]. 3.1 Overview We propose a low-energy and multi-task adaptive model framework designed to enable MCUs to efficiently process multiple local tasks with low-energy training and inference. Fig. 1 shows the overview of this framework. [Figure 1: System Overview of MicroT] MicroT\u2019s design provides the following functionalities: Powerful and Tiny Feature Extractor. MicroT splits the original model into a feature extractor and a classifier. 
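The feature-extractor/classifier split can be sketched as below: embeddings from a frozen extractor are computed once, and only a lightweight classifier is fit locally. The nearest-centroid classifier and the feature vectors are illustrative assumptions; the paper does not prescribe this particular classifier.

```python
# A tiny nearest-centroid classifier standing in for a "simple
# classifier" trained on-device over frozen extractor features.

class NearestCentroid:
    def fit(self, features, labels):
        sums, counts = {}, {}
        for f, y in zip(features, labels):
            sums.setdefault(y, [0.0] * len(f))
            counts[y] = counts.get(y, 0) + 1
            sums[y] = [a + b for a, b in zip(sums[y], f)]
        self.centroids = {y: [v / counts[y] for v in s]
                          for y, s in sums.items()}
        return self

    def predict(self, f):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(f, c))
        return min(self.centroids, key=lambda y: sq_dist(self.centroids[y]))

# Hypothetical embeddings produced by a frozen feature extractor:
clf = NearestCentroid().fit([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0]], [0, 0, 1])
print(clf.predict([0.9, 0.1]))  # 1
```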
The feature extractor is adept at capturing general features, making it ideal for a range of local tasks. For its training, we employ Self-Supervised Knowledge Distillation (SSKD), a blend of SSL [28] and KD. Note that the feature extractor is trained in the cloud, utilizing public datasets instead of specific local task data. In SSKD, we learn the embedding features instead of the logit knowledge extracted by the teacher model (details in Section 3.2). Low-Energy MCU Local Training. MicroT facilitates low-energy classifier training on MCUs for multiple local tasks. This low energy consumption is achieved by leveraging the general and universal features extracted by the powerful and tiny feature extractor. Consequently, MicroT requires only simple classifiers to ensure adequate performance across multiple tasks. To optimize memory usage on MCUs, MicroT strategically offloads the memory burden of the feature extractor during the classifier training stage, maintaining just the essential extracted features. This approach effectively frees up additional memory space on the MCUs for classifier training (details in Section 3.4). Low-Energy MCU Inference. MCUs employ a joint inference mechanism, called stage-decision, to reduce inference energy costs. In stage-decision, samples are first processed by the part model, followed by a score calculation. If the score is below the set threshold, the full model will resume and continue the inference from the features before the part model\u2019s output layer. This mechanism enables efficient and low-energy local inference (details in Sections 3.3 and 3.5). Dynamic Parameter Adjustment. On the MCUs, MicroT\u2019s stage-decision mechanism offers dynamic adjustment, enabling users to modify the stage-decision ratio according to the required balance between model performance and energy cost for various local tasks. 
This ratio is crucial as it dictates the proportion of samples processed only by the part model. MicroT sets a standard ratio as a benchmark. Users have the flexibility to decrease this ratio to improve model performance, or conversely, to raise it for enhanced energy efficiency (details in Section 3.5). 3.2 Self-Supervised Knowledge Distillation The design of the feature extractor has two considerations: (i) the necessity for the extracted features to possess universality for the multiple local tasks; and (ii) the unavailability of local task data in the cloud for training. To address these considerations, SSKD is employed, utilizing KD to obtain a compact student feature extractor that demands less size and computational power. Concurrently, SSKD enables the student to learn general SSL-extracted features from the teacher. Firstly, we apply SSL to the teacher. The rationale for not applying SSL directly to the student model is twofold: (i) SSL generally yields greater performance improvements in larger models; given the simpler structure of the student model, direct SSL application may offer only limited improvements [48]. (ii) Currently, there are several large models available that have already utilized SSL [28, 49, 50], and these pre-trained models can be efficiently employed for specific tasks [29, 30]. Next, we use KD to obtain a compact student feature extractor. There are several challenges: (i) Without knowledge of the local task classes, we cannot directly learn the logits from the teacher model. (ii) The student model\u2019s output layer dimension P differs from the teacher model\u2019s Q. (iii) There is no local task dataset in the cloud, and performing KD on the MCU directly is difficult. 
To address challenges (i) and (ii), we introduce a fully connected layer (Matching Layer) after the student model to align the output dimensions (P, Q) [51]. For challenge (iii), we utilize a non-target public dataset in the cloud, which requires a variety of image categories and ample learning samples. This ensures that the student feature extractor can adequately inherit the teacher model\u2019s powerful representation capabilities across a broad range of image content. As shown in Fig. 1 Stage 1, we first obtain a teacher model with SSL capabilities, then use its extracted general features as learning targets for the student model. Upon integrating the matching layer into the student model, we perform KD from scratch, using the same dataset. The distillation loss, considering p as the student\u2019s extracted features, q as the teacher\u2019s extracted features, and LMSE as the Mean Squared Error, can be formulated as follows: Ldistill = LMSE(p, q). (1) 3.3 Model Segmentation and Joint Training To enhance the efficiency of local inference on MCUs, MicroT employs model segmentation and joint training. It segments the feature extractor into a part model and a full model, facilitating stage-decision and joint inference on the MCU. 
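Eq. (1) above, together with the matching layer that maps the student dimension P to the teacher dimension Q, can be sketched as follows; the weights and feature values here are illustrative.

```python
# Linear matching layer (P -> Q) followed by the MSE distillation loss
# between matched student features p and teacher features q (Eq. 1).

def matching_layer(features, weight, bias):
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weight, bias)]

def mse(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)

student_feat = [1.0, 2.0]                 # student dimension P = 2
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # Q x P matching weights
b = [0.0, 0.0, 0.0]
p = matching_layer(student_feat, W, b)    # [1.0, 2.0, 3.0]
teacher_feat = [1.0, 2.0, 2.0]            # teacher dimension Q = 3
print(mse(p, teacher_feat))  # ~0.333
```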
To rationally split the feature extractor, we propose a model split fused score F_score, and adopt joint training to further optimize the performance of the part model and the joint model. Considering the absence of local tasks in the cloud, we utilize the public non-task dataset to calculate the optimal segmentation point. The definition of F_score is:

F_score = [3 × (1 − R_A^norm) × G × R_M^norm] / [(1 − R_A^norm) + G + R_M^norm]    (2)

where F_score is the score used to evaluate the model segmentation point. R_A^norm represents the normalized accuracy-loss ratio, which measures the decrease in the part model's accuracy relative to the full model's accuracy. R_M^norm is the normalized Multiply-Accumulate (MAC) reduction ratio, which quantifies the decrease in the part model's MAC relative to the full model's MAC.

Algorithm 1 Determine Optimal Model Segmentation Point
1: Initialize:
2:   PM ← Split full model into part models based on modules
3:   A_full ← Compute accuracy of the full model
4:   M_full ← Compute MAC of the full model
5:   F_max ← 0
6:   OptimalSplit ← 1
7: for i = 1 to length of PM do
8:   A_i ← Compute accuracy of PM[i]
9:   M_i ← Compute MAC of PM[i]
10:   if i > 1 then
11:     ΔA ← A_i − A_{i−1}
12:     ΔM ← M_i − M_{i−1}
13:     G ← Normalize(ΔA / ΔM)
14:   end if
15:   R_A ← (A_full − A_i) / A_full
16:   R_A^norm ← Normalize(R_A)
17:   R_M ← (M_full − M_i) / M_full
18:   R_M^norm ← Normalize(R_M)
19:   F_score ← [3 × (1 − R_A^norm) × G × R_M^norm] / [(1 − R_A^norm) + G + R_M^norm]
20:   if F_score > F_max then
21:     F_max ← F_score
22:     OptimalSplit ← i
23:   end if
24: end for
25: return OptimalSplit
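A pure-Python sketch of the fused-score search of Eq. (2) / Algorithm 1; the accuracy and MAC numbers below are illustrative placeholders, not measurements from the paper, and the gain ratio G is computed as described in the surrounding text:

```python
# Fused-score search over candidate split points (Eq. 2 / Algorithm 1).
# acc[i] / mac[i] are the accuracy and MAC of the part model cut at
# candidate module i; all values here are hypothetical.
def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def fused_scores(acc, mac, acc_full, mac_full):
    # Gain ratio G: accuracy gained per extra MAC vs. the previous cut.
    gains = [0.0] + [(acc[i] - acc[i - 1]) / (mac[i] - mac[i - 1])
                     for i in range(1, len(acc))]
    G = normalize(gains)
    RA = normalize([(acc_full - a) / acc_full for a in acc])  # accuracy-loss ratio
    RM = normalize([(mac_full - m) / mac_full for m in mac])  # MAC-reduction ratio
    return [3 * (1 - ra) * g * rm / ((1 - ra) + g + rm)
            if ((1 - ra) + g + rm) > 0 else 0.0
            for ra, g, rm in zip(RA, G, RM)]

acc = [0.42, 0.55, 0.68, 0.70, 0.71]   # hypothetical part-model accuracies
mac = [20, 35, 50, 80, 110]            # hypothetical MACs (millions)
scores = fused_scores(acc, mac, acc_full=0.72, mac_full=120)
best = max(range(len(scores)), key=scores.__getitem__)
```

The score is a harmonic-mean-style combination, so a candidate split must do well on all three terms at once to win.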
G is the normalized gain ratio, representing the trade-off between the accuracy gained and the MAC added by the part model split at the current point relative to the previous one. In identifying the optimal model segmentation point, we consider both model performance and computational efficiency, acknowledging the impact of computational demand on energy cost [52, 53]. For example, we use accuracy as the performance indicator for the model and MAC as the indicator for computational cost. Specifically, as Algorithm 1 shows, the algorithm initially splits the complete model into multiple part models based on modules, and then computes the accuracy (A) and MAC (M) for each part model. Subsequently, the algorithm calculates the accuracy improvement (ΔA) and MAC change (ΔM) of each part model relative to the previous part model, and from this derives the normalized gain ratio (G). Additionally, the algorithm evaluates the accuracy reduction rate (R_A) and MAC reduction rate (R_M) of each part model relative to the complete model and normalizes these ratios. By integrating these metrics, the algorithm calculates a composite score (F_score) for each part model. After computing the composite scores for all part models, the algorithm selects the split point with the highest composite score as the optimal split point, and obtains the part model and full model, as shown in Fig. 1 Stage 2. This approach ensures that the selected segmentation point does not significantly degrade model performance while keeping computational complexity low, thereby reducing energy consumption.

MicroT: Low-Energy and Adaptive Models for MCUs

Algorithm 2 Stage-Decision
1: Initialize:
2:   S_train ← Set of N training samples to determine the threshold
3:   S_infer ← Set of M new samples for inference
4:   PartModel ← Function to process samples using the part model
5:   FullModel ← Function to process samples using the full model
6:   C ← [ ]  Array to store confidence scores
7:   R ← [ ]  Array to store results after stage-decision
8:   AdjustFactor ← Default = 1; user-defined factor to adjust the threshold
9: Determine the median confidence threshold from training samples:
10: for i = 1 to N do
11:   features_i^p, c_i^p ← PartModel(S_train[i])
12:   Append c_i^p to C
13: end for
14: Sort C in ascending order
15: if N mod 2 = 1 then
16:   C_median ← C[(N + 1)/2]
17: else
18:   C_median ← (C[N/2] + C[N/2 + 1]) / 2
19: end if
20: Threshold ← C_median × AdjustFactor
21: Perform stage-decision on inference samples:
22: for i = 1 to M do
23:   result_i^p, c_i^p ← PartModel(S_infer[i])
24:   if c_i^p < Threshold then
25:     features_i^p ← Features before the output layer of the PartModel
26:     result_i^f ← FullModel(features_i^p)
27:     R[i] ← result_i^f
28:   else
29:     R[i] ← result_i^p
30:   end if
31: end for
32: return R

When computing F_score, we utilize another public non-task dataset available in the cloud, different from the dataset used in SSKD. The main reasons are as follows: (i) The dataset used in SSKD requires a wide range of categories and ample samples, generally resulting in a large dataset.
Using this dataset would reduce the efficiency of score computation. (ii) Using a dataset different from the one used in SSKD increases the credibility of the score, since the new public non-task dataset, like the MCU's local task dataset, has not been learned before.

Following model segmentation, we implement joint training. As shown in Fig. 1 Stage 3, this process involves augmenting the part model with an additional output layer and a matching layer to align with the feature dimensions extracted by the teacher. Subsequently, the models undergo SSKD joint training, building upon the initial feature extractor. We first train the part model independently, then delete the added layers and freeze its parameters to further train the full model. This joint training enhances the part model's independence and performance, enabling it to process a larger proportion of samples effectively, thereby reducing dependency on the full model and saving energy. Moreover, joint training ensures cohesive integration of the part model within the full model. This integration facilitates parameter sharing between the two, avoiding additional memory allocation when both models are deployed on MCUs. Due to model segmentation and the part model, this joint training approach may cause some accuracy loss in the full model. However, the performance improvement of the part model also transfers to the full model, compensating for this loss. Our experiments demonstrate that this accuracy loss is acceptable (details in Section 5.3). The cooperation between the models, improved through joint training, thus optimizes both performance and energy cost, which is crucial for applications on resource-constrained devices.
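The joint-training order described above (train the part model with an auxiliary head, then delete the head, freeze the shared weights, and train the remaining layers) can be sketched with toy linear layers in numpy; the layer sizes, learning rate, and synthetic regression data are all illustrative stand-ins, not the actual MCUNet/ProxylessNAS blocks:

```python
import numpy as np

# Toy sketch of the two-step joint training:
# Step 1 trains the shared part-model layer plus an auxiliary head;
# Step 2 drops the head, freezes the shared layer, and trains only
# the remaining full-model layer on the frozen features.
rng = np.random.default_rng(1)
x = rng.normal(size=(64, 8))
y = x @ rng.normal(size=(8, 4))          # synthetic regression targets

W_part = rng.normal(0, 0.1, (8, 8))      # shared part-model layer
W_aux = rng.normal(0, 0.1, (8, 4))       # auxiliary head (deleted later)
W_rest = rng.normal(0, 0.1, (8, 4))      # remaining full-model layer

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Step 1: train the part model together with its auxiliary head.
aux_start = mse(x @ W_part @ W_aux, y)
for _ in range(300):
    h = x @ W_part
    g = 2.0 * (h @ W_aux - y) / y.size   # gradient of the MSE w.r.t. outputs
    gW_aux, gW_part = h.T @ g, x.T @ (g @ W_aux.T)
    W_aux -= 0.1 * gW_aux
    W_part -= 0.1 * gW_part
aux_end = mse(x @ W_part @ W_aux, y)

# Step 2: delete the aux head, freeze W_part, train only W_rest.
rest_start = mse(x @ W_part @ W_rest, y)
for _ in range(300):
    h = x @ W_part                       # frozen shared features
    g = 2.0 * (h @ W_rest - y) / y.size
    W_rest -= 0.1 * h.T @ g
rest_end = mse(x @ W_part @ W_rest, y)
```

Because W_part is never updated in Step 2, the deployed part and full models share exactly the same prefix parameters, which is what allows them to coexist on the MCU without duplicated memory.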
3.4 Classifier Training

The MCU obtains the feature extractor (with INT8 quantization and only about 0.63% accuracy loss) from the cloud, and locally builds and trains classifiers to process multiple tasks, as shown in Fig. 1 Stage 4. The general features from the feature extractor enable the use of a basic classifier, which achieves desirable model performance while mitigating memory constraints on the MCU. Classifier development and training on the MCU are implemented in the C programming language. For example, constructing a 2-layer Neural Network (NN) classifier entails several key steps: 1) defining the classifier's architecture, including the layer count, neurons per layer, input and desired output of the NN, layers (excluding the input layer), and the learning rate; 2) structuring parameter matrices, encompassing the weight matrix, bias array, output array, and error (the partial derivative of the total error with respect to the weighted sum); 3) defining activation functions; 4) implementing a function to load datasets and models; 5) creating a model construction function for memory allocation and model establishment; 6) developing a training function, which includes the forward and backward propagation processes; 7) instituting a function to save the model; 8) developing a function to release memory. To alleviate memory load during classifier training, the MCU offloads the memory used by the feature extractor prior to training, retaining only the extracted features. Given the MCU's limited memory capacity, the batch size is set to one. Since both the part model and the full model are deployed on the MCU and may process multiple tasks, there are multiple classifiers, each trained separately.

Yushan Huang, Ranya Aloufi, Xavier Cadet, Yuchen Zhao, Payam Barnaghi, and Hamed Haddadi

3.5 Stage-Decision

Upon completing local training, MicroT implements stage-decision to further reduce energy cost during inference, as shown in Figure 1 Stage 5.
Note that data preprocessing is also necessary in local training and stage-decision, including resizing and normalization of input images, which should match the preprocessing steps in the cloud. Stage-decision first processes samples with the part model and calculates confidence scores. If the score falls below the set threshold, the full model resumes inference from the stage preceding the part model's output layer. Setting an appropriate threshold is crucial, as it determines the stage-decision ratio, i.e., the ratio of samples processed only by the part model. The stage-decision procedure is detailed in Algorithm 2. The threshold is established using confidence scores from the part model. The median of C, denoted as C_median, is selected as the threshold. Consequently, approximately 50% of the samples are processed only by the part model, establishing a stage-decision ratio of 0.5. Likewise, employing the 1/4 and 3/4 quantiles of C as thresholds results in stage-decision ratios of 0.25 and 0.75, respectively. Our experiments (details in Section 5.5) show that the threshold can be determined with just a few samples, avoiding excessive resource consumption on the MCU. Moreover, a ratio of 0.5, based on the median confidence score, balances model performance with energy efficiency; this ratio is therefore adopted as the standard stage-decision ratio (details in Section 5.4). MicroT enables diverse balances between model performance and energy consumption by offering variable thresholds and ratios. It provides the AdjustFactor parameter, allowing users to dynamically modify the threshold according to their specific needs.
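A minimal pure-Python sketch of the median threshold and routing in Algorithm 2; the part and full models here are hypothetical callables (the part model returns a (prediction, confidence) pair), and the routing passes the raw sample to the full model for brevity rather than the part model's intermediate features:

```python
def median_threshold(conf_scores, adjust_factor=1.0):
    """Median of part-model confidence scores, scaled by the user-set
    AdjustFactor (Algorithm 2); factor 1.0 targets a 0.5 ratio."""
    s = sorted(conf_scores)
    n = len(s)
    med = s[n // 2] if n % 2 == 1 else (s[n // 2 - 1] + s[n // 2]) / 2
    return med * adjust_factor

def stage_decision(samples, part_model, full_model, threshold):
    """Keep the part-model result when confident enough; otherwise
    escalate the sample to the full model."""
    results, part_only = [], 0
    for x in samples:
        pred, conf = part_model(x)
        if conf < threshold:                 # low confidence: escalate
            results.append(full_model(x))
        else:
            results.append(pred)
            part_only += 1
    return results, part_only

# Hypothetical usage: confidence grows with the sample index.
confs = [x / 100 for x in range(100)]
t = median_threshold(confs)
res, n_part = stage_decision(range(100),
                             lambda x: (("part", x), x / 100),
                             lambda x: ("full", x),
                             t)
```

With the median as threshold, about half of the samples stop at the part model; raising `adjust_factor` sends more samples to the full model.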
In practical scenarios, users can customize their own balance of model performance and energy efficiency. Raising the threshold, and consequently decreasing the stage-decision ratio, enhances model performance; alternatively, lowering the threshold increases the ratio and reduces energy consumption.

4 Implementation & Evaluation Setup

In this section, we first introduce the implementation of the MicroT system (Section 4.1), then detail how we assess its performance on various DNN models and datasets (Section 4.2) using different metrics (Section 4.3).

4.1 MicroT Prototype

We deploy MicroT with STM32 X-CUBE-AI [54], and modify the original C code to implement preprocessing, the feature extractor, classifier inference and training, dynamic threshold adjustment, and stage-decision. We run MicroT on the STM32H7A3ZI and STM32L4R5ZI, respectively. The STM32H7A3ZI is equipped with an ARM Cortex-M7 core, 2 MB of Flash memory, and 1.18 MB of RAM, with a maximum frequency of 280 MHz. The STM32L4R5ZI features an ARM Cortex-M4 core, 2 MB of Flash memory, and 640 KB of RAM, with a maximum frequency of 120 MHz. We utilize the Monsoon High Voltage Power Monitor [55] to set the MCU supply voltage to 1.9 V [6]. We set on-device inference and training with the MCU board frequency at 120 MHz. Fig. 2 shows an overview of the MicroT prototype.

4.2 Models and Datasets

We select DinoV2 [28] as the teacher model due to its ability to learn general image features from a substantial collection of public image datasets; it also benefits from the Vision Transformer architecture and SSL. For the student model, to balance accuracy, memory footprint, and energy consumption, we utilize ProxylessNAS_w0.3 [56] and MCUNet_int3 [57], which are commonly used lightweight CNN and module-based models in the related literature [58, 59].
For MCUNet and ProxylessNAS, both architectures are fundamentally characterized by Mobile Inverted Bottleneck Convolutions (MBConv), as Fig. 3 shows. Note that we remove the original classifier layers from ProxylessNAS and MCUNet, retaining only the remaining structures as feature extractors. Subsequently, we integrate either Logistic Regression (LR) or a 2-layer Neural Network (NN) as the classifier. The input dimension of the classifier matches the output dimension of the teacher model (DinoV2 = 384); let V denote the number of output categories. The LR classifier follows a (384, V) structure, while the 2-layer NN follows a (384, 128, V) structure. For SSKD on the initial feature extractor, we train the model from scratch with a 0.01 learning rate, SGD optimizer, and 50 epochs. After splitting the initial feature extractor, in joint training we further train the part model and full model jointly with a 0.005 learning rate, maintaining the same optimizer and epoch count. In addition, we consider the impact of input image resolution and utilize 224 and 128 resolutions, which are practical for MCUs [60, 61]. To avoid complex retraining for the 128 resolution, we fine-tune the model from its 224-resolution state, using a 0.005 learning rate, SGD optimizer, and 50 epochs. On MCUs, classifier training and system cost evaluation are conducted with the SGD optimizer, cross-entropy loss, Softmax output-layer activation, ReLU hidden-layer activation, 0.01 learning rate, and 200 epochs, without momentum [61] (details in Section 5.2).

Figure 2: MicroT Prototype Overview

Figure 3: Main Structure of ProxylessNAS and MCUNet

Some examples of common abbreviations and their meanings in these experiments are as follows:
• MCUNet_r224: MCUNet without the original classifier, and image resolution is 224.
• MCUNet_r224_LR: MCUNet with an LR classifier added after removing the original classifier, and 224 resolution.
• MCUNet_r224_part: The part model of MCUNet after model segmentation, 224 resolution, no original classifier.
• MCUNet_r224_part_NN: The part model of MCUNet with an NN classifier added after removing the original classifier, and 224 resolution.
• MCUNet_r224_sd_0.5: The part model and full model with added classifiers, executing stage-decision with a ratio of 0.5, and 224 resolution.

The datasets are categorized into cloud datasets and local datasets. For the cloud datasets, we utilize ImageNet to train the feature extractors, as it provides sufficiently rich samples for the student to learn from the teacher. We also utilize the Sea Animals Image (Sea) dataset [44] to identify the best model segmentation point; it resembles a local dataset in that the student has not learned it. For the local datasets, we utilize several datasets to represent multiple local tasks, including The Oxford-IIIT Pet (Pet) [45], CUB-200-2011 Bird (Bird) [62], and PlantCLEF 2017 (Plant) [63]. The Pet dataset [45] contains 3,680 images of 37 distinct pet species. The Bird dataset [62] contains 11,788 images in 200 distinct categories. For the Plant dataset [63], we select a subset based on image quantities per category, containing 20 plant categories with a total of 11,660 training images. We randomly split the training and test sets by a ratio of 0.7 to 0.3.

4.3 Performance Metrics

Our evaluation of MicroT focuses on two main aspects: (i) ML model performance on multiple local tasks; (ii) system cost on the MCUs. All results are averages of ten repeated experiments.

Model Performance. Accuracy of MicroT models with different configurations on the multiple local datasets.

Table 1: Overall performance of MicroT with varying stage-decision ratios.
The red color indicates a decrease in accuracy, whereas green signifies improvements in accuracy and energy efficiency. MicroT's standard ratio is set at 0.5.

Model        Acc.    Acc. Improvement  Energy Cost  Energy Saving
Baseline     51.86%  --                5.39 mJ      --
MicroT_0     61.73%  9.87%             5.39 mJ      --
MicroT_0.25  60.17%  8.31%             5.00 mJ      7.21%
MicroT_0.5   57.77%  5.91%             4.61 mJ      14.47%
MicroT_0.75  51.26%  -0.60%            4.21 mJ      21.89%
MicroT_1     41.14%  -10.72%           3.82 mJ      29.13%

Table 2: Results of training classifiers on MCU and GPU with varying batch sizes and partial samples from the Pet dataset (accuracy, %, on Pet).

Batch Size   Feature Extractor  LR     NN     Avg.
128 (GPU)    MCUNet_r128_full   71.47  76.87  74.17
             Proxy_r128_full    64.26  67.86  66.06
1 (MicroT)   MCUNet_r128_full   69.74  74.48  72.11
             Proxy_r128_full    62.95  65.78  64.37

System Cost. We monitor the efficiency of local training and inference on MCUs and measure the following costs: (i) Runtime (s): the time taken by the MCU to process operations, obtained by reading the MCU's internal clock. (ii) Memory Usage (MB or KB): the maximum Flash memory and RAM usage, obtained via STM32CubeIDE [64]. (iii) Energy Consumption (mJ): the energy cost of performing operations, measured by the Monsoon High Voltage Power Monitor [55] at a 50 Hz sampling rate. We utilize Monsoon [55] to set the input voltage (U) to 1.9 V [6] and measure the time (t) and average current (I). We then calculate the average power (P = UI) and average energy consumption (E = Pt). To obtain stable values, we only use the current recorded after running for 2 min.

5 Evaluation Results

In this section, we present the experimental evaluation of MicroT, aiming to answer a set of key questions.

5.1 Overall Performance

Table 1 shows an overall performance comparison between MicroT with varying stage-decision ratios and the baseline, including accuracy and energy cost. The model accuracy here represents average values taken over three datasets, two models, and two resolutions. The energy cost is averaged over two MCU boards and the Plant [63] and Bird [62] datasets, as they have the smallest and largest numbers of classes. The baseline refers to the model employing an unoptimized feature extractor and standard full-model inference. Analysis reveals that, across different ratios, MicroT's accuracy and energy consumption vary, achieving a maximum increase of 9.87% in accuracy and 29.13% in energy saving compared to the baseline. The ratio is set at 0.5 by default, balancing accuracy and energy savings, which leads to a 5.91% improvement in accuracy and a 14.47% reduction in energy usage. Table 1 summarizes the results from all experiments, with details in the following sections.

5.2 Does MicroT's Local Training Match Cloud-Level Accuracy?

For training the classifier on MCUs, the batch size is set to one due to memory limitations. However, this reduces experimental efficiency, as it does not exploit hardware parallelism. Therefore, at the start of the experiments, we investigate the performance gap between classifier training on MCUs with a single batch and on the GPU with a batch size of 128, to see whether the latter can serve as an approximation that improves experimental efficiency. The training settings on the MCU and in the cloud are kept consistent, as described in Section 4.2. The reason for not using momentum is that, in a single-batch setup, momentum does not benefit model performance and increases memory usage [61].
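The momentum-free, batch-size-1 training step for the (384, 128, V) NN classifier can be sketched in numpy as follows; the synthetic feature vector, target label, and weight initialization are illustrative (the on-device version is written in C over frozen extractor features):

```python
import numpy as np

# Single-sample SGD step for the 2-layer NN classifier:
# 384-dim input features -> 128 ReLU hidden units -> V softmax outputs,
# trained with cross-entropy and no momentum, as on the MCU.
rng = np.random.default_rng(0)
V = 37                                           # e.g., 37 Pet classes
W1, b1 = rng.normal(0, 0.05, (384, 128)), np.zeros(128)
W2, b2 = rng.normal(0, 0.05, (128, V)), np.zeros(V)

def softmax(z):
    z = z - z.max()                              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def train_step(feat, label, lr=0.01):
    """One cross-entropy SGD step on a single sample (batch size 1)."""
    global W1, b1, W2, b2
    h = np.maximum(0.0, feat @ W1 + b1)          # ReLU hidden layer
    p = softmax(h @ W2 + b2)                     # Softmax output layer
    loss = float(-np.log(p[label] + 1e-12))
    # Backward pass: the "error" arrays of the C implementation.
    d_out = p.copy()
    d_out[label] -= 1.0                          # dLoss/dlogits
    d_h = (d_out @ W2.T) * (h > 0)               # through the ReLU
    W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
    W1 -= lr * np.outer(feat, d_h); b1 -= lr * d_h
    return loss

feat = rng.normal(size=384)                      # stand-in for extractor output
losses = [train_step(feat, label=3) for _ in range(20)]
```

With batch size 1, only one sample's activations and error arrays need to live in RAM at a time, which is what makes this feasible on the MCU.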
Classifier training on the MCUs is implemented in C, while on the GPU it is implemented in Python. Due to the low efficiency of single-batch training, we randomly select 20 images at 128 resolution from each class in the Pet dataset (740 images in total); the results are shown in Table 2. We find that, with the classifier training configuration kept consistent, local training on the MCU with a single batch and training on the GPU with a batch size of 128 achieve similar accuracy. This allows us to use the results of batch training on the GPU for evaluation. Therefore, we report GPU classifier training results based on the aforementioned configuration by default, unless otherwise stated.

5.3 How Much does MicroT Improve Performance in Segmented Models?

MicroT divides the original model into a part model and a full model to enable stage-decision, thereby reducing energy consumption. Superior performance of the two models can potentially improve the performance of stage-decision; therefore, we evaluate the performance improvement of the segmented models. The first step is to determine the optimal model segmentation point using the model segmentation fused score (as illustrated in Section 3.3), considering four aspects: the comparison of accuracy and MAC between the part model up to the current module and the previous module, and the comparison of accuracy and MAC between the part model and the full model. We first train the ProxylessNAS and MCUNet feature extractors on the ImageNet dataset using SSKD. Subsequently, to find the optimal segmentation point, we utilize the Sea dataset [44], which is also public in the cloud, different from the SSKD dataset (ImageNet), and has never been learned by the feature extractors (as illustrated in Section 3.3).
Note that we want the part model neither to be too small, which would yield poor performance, nor too close to the full model, which would yield minimal optimization of the system cost. Therefore, for MCUNet we analyze only modules 4 to 14, and for ProxylessNAS, modules 6 to 17. Since the output dimensions of different part models differ, we also add a matching layer. The model segmentation fused scores of MCUNet and ProxylessNAS are shown in Fig. 4. We observe that for both 128 and 224 resolutions, the optimal segmentation point for MCUNet is at the 9th module, while for ProxylessNAS it is at the 14th module. In these four models, the MAC generally increases steadily with network depth. However, at the optimal model segmentation points there is a significant improvement in accuracy, indicating that the model has acquired essential information and extracted key features at this depth and structure. We believe this demonstrates that this model depth offers the best cost-performance ratio when considering model performance and computational complexity. Furthermore, regardless of resolution, the optimal segmentation points lie at the same depth for the same model, suggesting that for a given model the optimal segmentation point has a certain stability, and that the main factor influencing it is the model structure. Subsequently, we split MCUNet at its 9th module and ProxylessNAS at its 14th module, resulting in part models and full models. These models are then further trained on ImageNet by SSKD and joint training. After adding the additional output layer and matching layer to the part model, we first train it, then delete the added layers and freeze its parameters to further train the full model. We compare the performance of models with and without MicroT to evaluate the performance improvement that MicroT brings to the segmented models. The results are shown in Fig. 5.
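How a module list is split at the chosen point (the 9th module for MCUNet, the 14th for ProxylessNAS) can be sketched in pure Python; the "modules" below are toy functions rather than the real MBConv blocks, and the split index is arbitrary:

```python
# Split a sequential module list at `split_at`: the part model is the
# prefix, and the full model resumes from the part model's features,
# so the shared prefix is computed only once per sample.
def make_models(modules, split_at):
    def part_forward(x):
        for m in modules[:split_at]:
            x = m(x)
        return x                      # features at the segmentation point

    def full_forward(part_features):
        x = part_features
        for m in modules[split_at:]:  # only the remaining modules run
            x = m(x)
        return x

    return part_forward, full_forward

# Toy "modules" that each add a constant, standing in for MBConv blocks.
modules = [lambda x, k=k: x + k for k in range(5)]
part, full = make_models(modules, split_at=3)
feat = part(0)
out = full(feat)
```

Because `full_forward` starts from the part model's output, escalating a sample to the full model never repeats the prefix computation, which is the basis of the stage-decision energy savings.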
We observe that MicroT improves model performance to varying degrees for both the part model and the full model. Specifically, MicroT leads to an average increase of 9.87% in the full models' accuracy and 12.56% in the part models' accuracy. We attribute the improvement of the part models to MicroT's SSKD and joint training. SSKD helps the part model and full model learn more general embedding features from the teacher, which enhances the universality of the extracted features, thereby enabling stable performance across different local tasks. Additionally, in joint training, we pay extra attention to the part model's training to improve its independence. However, we note that the accuracy improvement of the full model is slightly smaller than that of the part model. We believe this results from a combination of factors: (i) in joint training, the addition of the model segmentation point and the part model may cause some degree of information loss; (ii) the accuracy improvement of the part model also benefits the full model, as it can feed back and propagate to the full model. Nevertheless, the information loss in the full model is acceptable considering the performance improvements of both the full model and the part model.

Figure 4: The model segmentation fused scores of MCUNet and ProxylessNAS. The X-axis represents the module up to which the part model extends, and the Y-axis represents the fused score (green), accuracy (blue), and MAC (red).

Figure 5: The performance improvement brought by MicroT to the segmented models. The dashed connecting lines show the changes in model performance of the part model and full model with and without MicroT.

5.4 How does the ML Performance of MicroT Compare to State-of-the-Art?

In these experiments, we compare the ML performance of MicroT with the state-of-the-art (SOTA). For the selection of SOTA, Lin et al.
[61] propose a method for on-device training, but it is not designed for multi-task settings. Moreover, training the entire model on MCUs for local tasks leads to higher latency and longer model convergence time, which is inefficient for practical deployment and application; we therefore do not consider it as a comparative method. Wu et al. [43] propose EMO, which trains a CNN-based feature extractor and a K-means-based classifier in the cloud, then fine-tunes the classifier on the MCU for local tasks. The classifier in EMO is unsupervised, mainly adjusting cluster centers and the range of class distances on the MCU. However, we find that EMO does not perform well on complex datasets such as Pet [45], Bird [62], and Plant [63], with an average accuracy of only 24.5%, likely due to the simple CNN feature extractor and unsupervised classifier. Therefore, we keep EMO's key method, replace the feature extractor with MCUNet or ProxylessNAS, change the classifier to LR or NN, and regard this as the baseline. MicroT with different stage-decision ratios is then compared against this baseline; the results are shown in Fig. 6. We observe that at stage-decision ratios of 0, 0.25, and 0.5, model performance consistently surpasses the baseline, with average improvements of 9.87%, 8.31%, and 5.91%, respectively. This indicates that at these ratios, the stage-decision between the part model and the full model effectively enhances ML performance. Lower ratios mean the full model processes more low-confidence samples from the part model, increasing accuracy at a higher energy cost. Conversely, at a ratio of 0.75, only some MicroT configurations exceed the baseline, because more samples are processed only by the part model, speeding up processing while degrading ML performance.
The different ML performances under various ratios emphasize the importance of threshold configuration in balancing model performance and computational efficiency. Appropriate threshold settings can ensure sufficient accuracy while reducing computational resource consumption, which is especially critical in resource-limited environments. We discuss experiments and analysis regarding computational efficiency and energy consumption in detail in Section 5.6. We also find that the decrease in model performance with increasing stage-decision ratio is not linear. Specifically, as shown in Fig. 7, with a fixed ratio increment of 0.25, we calculate the performance decrease relative to the previous ratio (for example, ratio 0.25 compared to ratio 0, and ratio 0.5 compared to ratio 0.25). The decline in model performance exhibits a non-linear trend, which may stem from several causes: (i) the model exhibits noticeable differences in processing samples of varying difficulty, and performance drops are particularly pronounced when the less capable part model processes more samples; (ii) the distribution of sample difficulties in the dataset might be non-uniform, leading to more pronounced performance decreases at certain ratio points; (iii) although we determine the ratio by confidence scores, other factors may affect the model's handling of samples of varying difficulty, and challenging samples may not always be accurately identified and passed to the full model.

Figure 6: The ML performance of MicroT with different stage-decision ratios. The dashed lines show the accuracy of the baseline. The numbers in the legend represent the ratio.

Figure 7: Model average performance decrease from the current stage-decision ratio to the previous ratio, with a fixed ratio increment of 0.25.

Figure 8: Achieving a stage-decision ratio of 0.5 by random selection versus using the confidence median (MicroT).

At a ratio of 0.5, we observe a slower rate of performance decrease, indicating reduced dependency on the full model while maintaining performance, thus lowering energy consumption. This setting considers not only the model's performance on samples of different difficulty but also the energy-efficiency requirements of resource-constrained MCUs, making it a safe standard ratio for balancing performance and energy consumption.

5.5 How Many Samples Do We Need to Determine the Confidence Thresholds?

The confidence threshold dictates the stage-decision ratio, determining the proportion of samples processed only by the part model. The threshold setting is based on the confidence score sequence from the part model. Specifically, after the part and full models have been trained on the MCU, the part model processes N samples and generates a confidence score sequence. We then select the median of this sequence to achieve the standard stage-decision ratio of 0.5. Ideally, selecting the median of the entire test samples' confidence

Table 3: Median value, stage-decision ratio, and model accuracy with different numbers of samples. N represents the number of samples used to calculate the median; Med. the median value; Ratio the stage-decision ratio; Acc. the model accuracy (%).
N=5: 0.688 / 0.486 / 82.35 | 0.408 / 0.456 / 54.35 | 0.467 / 0.447 / 54.72 | 0.183 / 0.444 / 64.89 | 0.183 / 0.423 / 35.86 | 0.298 / 0.456 / 51.92
N=10: 0.674 / 0.504 / 82.12 | 0.400 / 0.465 / 52.45 | 0.439 / 0.484 / 54.15 | 0.168 / 0.519 / 64.43 | 0.154 / 0.516 / 35.80 | 0.273 / 0.530 / 51.79
N=20: 0.676 / 0.501 / 82.17 | 0.384 / 0.491 / 52.76 | 0.439 / 0.484 / 54.15 | 0.167 / 0.525 / 64.39 | 0.153 / 0.519 / 35.74 | 0.280 / 0.512 / 51.85
N=40: 0.679 / 0.497 / 82.26 | 0.386 / 0.488 / 52.74 | 0.436 / 0.491 / 54.49 | 0.170 / 0.507 / 64.16 | 0.161 / 0.492 / 35.57 | 0.282 / 0.504 / 51.85
N=All: 0.677 / 0.500 / 82.21 | 0.378 / 0.500 / 52.85 | 0.430 / 0.500 / 54.61 | 0.172 / 0.500 / 63.76 | 0.159 / 0.500 / 35.57 | 0.283 / 0.500 / 51.82
Table 4: The system costs during local training. t represents time (s), P represents average power (mW), and E represents energy consumption (mJ). Columns: Object | STM32H7A3ZI (t, P, E) | STM32L4R5ZI (t, P, E).
MCUNet_r224_full_NN | 2.86, 3.90, 11.16 | 5.19, 2.79, 14.48
MCUNet_r224_part_NN | 2.18, 3.91, 8.52 | 3.85, 2.70, 10.40
MCUNet_r128_full_NN | 1.25, 3.81, 4.76 | 2.15, 2.72, 5.85
MCUNet_r128_part_NN | 1.02, 3.76, 3.83 | 1.75, 2.51, 4.39
ProxylessNAS_r224_full_NN | 1.58, 3.87, 6.11 | 3.16, 2.58, 8.15
ProxylessNAS_r224_part_NN | 1.32, 3.82, 5.04 | 2.56, 2.53, 6.48
ProxylessNAS_r128_full_NN | 0.83, 3.54, 2.94 | 1.54, 2.41, 3.71
ProxylessNAS_r128_part_NN | 0.75, 3.47, 2.60 | 1.31, 2.38, 3.12
Table 5: The energy costs of MicroT under different stage-decision ratios. E represents energy consumption (mJ), ES represents the energy saving rate compared with the baseline (%), and Avg. represents the average energy saving rate (%). Columns: Model | STM32H7A3ZI (E, ES) | STM32L4R5ZI (E, ES) | Avg.
MCUNet_r224_sd_0 | 9.52, - | 12.42, - | -
MCUNet_r128_sd_0 | 3.12, - | 4.03, - | -
ProxylessNAS_r224_sd_0 | 4.64, - | 6.19, - | -
ProxylessNAS_r128_sd_0 | 1.45, - | 1.78, - | -
MCUNet_r224_sd_0.25 | 8.87, 6.83 | 11.48, 7.57 | Avg. 7.24
MCUNet_r128_sd_0.25 | 2.89, 7.37 | 3.70, 8.19 |
ProxylessNAS_r224_sd_0.25 | 4.33, 6.68 | 5.75, 7.11 |
ProxylessNAS_r128_sd_0.25 | 1.34, 7.59 | 1.65, 7.30 |
MCUNet_r224_sd_0.5 | 8.22, 13.66 | 10.55, 15.06 | Avg. 14.47
MCUNet_r128_sd_0.5 | 2.66, 14.74 | 3.36, 16.63 |
ProxylessNAS_r224_sd_0.5 | 4.02, 13.36 | 5.31, 14.22 |
ProxylessNAS_r128_sd_0.5 | 1.23, 15.17 | 1.51, 15.17 |
MCUNet_r224_sd_0.75 | 7.57, 20.48 | 9.61, 22.62 | Avg. 21.89
MCUNet_r128_sd_0.75 | 2.42, 22.44 | 3.03, 24.81 |
ProxylessNAS_r224_sd_0.75 | 3.71, 20.04 | 4.87, 21.32 |
ProxylessNAS_r128_sd_0.75 | 1.12, 22.76 | 1.38, 22.74 |
MCUNet_r224_sd_1 | 6.92, 27.31 | 8.67, 30.19 | Avg. 29.13
MCUNet_r128_sd_1 | 2.19, 29.81 | 2.69, 33.25 |
ProxylessNAS_r224_sd_1 | 3.40, 26.72 | 4.43, 28.43 |
ProxylessNAS_r128_sd_1 | 1.01, 30.34 | 1.24, 30.34 |
sequence can set the stage-decision ratio to 0.5, meaning 50% of the samples are processed only by the part model. However, in real-world applications, storing and computing the confidence scores for all test samples is not feasible. Therefore, to determine whether it is efficient to set the threshold locally, we use various quantities of samples to calculate their medians, and analyze these medians' corresponding stage-decision ratios and model performance. We test the MCUNet at 128 resolution with an LR classifier and ProxylessNAS at 224 resolution with an NN classifier, as these two models cover all types of experimental feature extractors, classifiers, and resolutions. The results are shown in Table 3. As Table 3 shows, we find that as the number of samples increases, the median, stage-decision ratio, and model performance increasingly approximate those obtained from all test samples. However, even when using only five samples, the differences remain minor and acceptable, with an average difference of 0.048 in the median and 0.38% in accuracy.
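The threshold-setting procedure above reduces to taking the median of a small calibration sequence of part-model confidence scores; a minimal sketch (the function name and scores are illustrative, not from MicroT's code):

```python
import statistics

def confidence_threshold(confidences):
    """Median of part-model confidence scores from a small calibration
    set. Using this median as the stage-decision threshold routes
    roughly half of future samples to the full model, i.e. the
    standard stage-decision ratio of 0.5."""
    return statistics.median(confidences)

# Five hypothetical calibration scores, per the paper's N = 5 finding:
threshold = confidence_threshold([0.61, 0.72, 0.55, 0.80, 0.68])
```

With five scores the median is simply the third-largest value, which is why such a tiny calibration set already approximates the all-samples threshold closely.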
This means that on MCUs, processing just five samples is sufficient to quickly and accurately determine the threshold. Randomly selecting 50% of the samples is another way to achieve the 0.5 stage-decision ratio. Therefore, regarding this random selection method as the baseline, we conduct a comparative experiment. For the baseline, we randomly select half of the samples to be processed only by the part model; the other half are further processed by the full model. For MicroT, the decision on which samples should be forwarded to the full model for further processing is based on the median of the confidence scores. As Fig. 8 shows, compared to a strategy relying on random selection, MicroT improves the average accuracy by 4.83%. This finding suggests that using the median of confidence scores as the sample selection strategy can enhance model performance. 5.6 What is the System Cost of MicroT? MicroT has two stages on the MCU: classifier training and stage-decision. In these experiments, we measure the maximum memory usage, time, average power, and energy consumption under different model and resolution conditions. We select the Plant [63] and Bird [62] datasets to measure the system cost, as these two datasets have the smallest and largest numbers of classes. To efficiently analyze the system cost, we use the average of these two datasets. Firstly, we measure the memory usage of MicroT with STM32CUBE-IDE [64]. To simplify the process and show that MicroT can meet the memory requirements of the experimental MCU boards: (i) We only analyze the local training stage, since the memory usage of the local training stage is higher than that of the stage-decision stage. (ii) We only analyze the full model, since the memory usage of the full model is higher than that of the part model. (iii) We only analyze the 2-layer NN classifier, since the memory usage of the NN classifier is higher than that of the LR classifier.
For maximum RAM usage, MCUNet requires 614.40 KB (resolution 224) and 221.34 KB (resolution 128), while ProxylessNAS requires 624.64 KB (resolution 224) and 225.28 KB (resolution 128). For flash memory usage, MCUNet uses about 0.91 MB to store the model (feature extractor 0.67 MB, classifier 0.24 MB), and ProxylessNAS uses about 0.70 MB (feature extractor 0.46 MB, classifier 0.24 MB). These experimental results indicate that, from the perspective of memory usage, MicroT can meet the memory requirements of MCUs and can store several classifiers for multiple local tasks. We utilize Monsoon [55] to measure and calculate the time, average power, and energy consumption (as illustrated in Section 4.3). For efficient analysis: (i) We only analyze the local training stage, since its system costs are higher than those of the stage-decision stage. (ii) We only analyze the NN classifier, since its system costs are higher than those of the LR classifier. The results are shown in Table 4. We find that MicroT has fast processing speeds and low energy consumption during the local training stage. This is because only the classifier requires training, and it has a simple structure; this simple structure benefits from the general extracted features. We also find that although the power of the STM32H7A3ZI is higher, its processing speed is faster, resulting in overall lower energy consumption compared to the STM32L4R5ZI. For example, with MCUNet_r224_full_NN, the processing time on the STM32H7A3ZI is 2.86 s with a power of 3.90 mW and energy consumption of 11.16 mJ, whereas on the STM32L4R5ZI, the processing time is 5.19 s, the power is 2.79 mW, and the energy consumption is 14.48 mJ.
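The reported energies follow directly from E = P x t (mW x s = mJ); a quick sanity check against the MCUNet_r224_full_NN row of Table 4:

```python
def energy_mj(power_mw, time_s):
    # Energy (mJ) = average power (mW) x time (s)
    return power_mw * time_s

# Table 4, MCUNet_r224_full_NN:
e_h7 = energy_mj(3.90, 2.86)  # STM32H7A3ZI: ~11.15 mJ (table: 11.16, rounding)
e_l4 = energy_mj(2.79, 5.19)  # STM32L4R5ZI: ~14.48 mJ
```

This also makes the H7-vs-L4 trade-off concrete: the H7 draws more power but finishes sooner, so its product P x t ends up smaller.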
This could be attributed to the more efficient ARM Cortex-M7 core used in the STM32H7A3ZI compared to the ARM Cortex-M4 core in the STM32L4R5ZI. In terms of models, ProxylessNAS shows better energy and time efficiency than MCUNet. For example, on the STM32H7A3ZI, ProxylessNAS_r224_full_NN takes 1.58 s and consumes 6.11 mJ, while MCUNet_r224_full_NN takes 2.86 s and consumes 11.16 mJ. Compared to the full model, the part model reduces training time and energy consumption. For example, MCUNet_r224_full_NN and MCUNet_r224_part_NN on the STM32H7A3ZI require 2.86 s / 11.16 mJ and 2.18 s / 8.52 mJ, respectively. This is because local training includes feature extractor inference and classifier training, and the part model makes the feature extractor inference more efficient and energy-saving. This energy saving becomes more evident during the stage-decision inference stage, as the energy consumption for classifier inference is lower than during training, which increases the proportion of the feature extractor's energy cost in the overall energy cost. These results demonstrate the effectiveness of the part model and the feasibility of MicroT's low-energy local training. Subsequently, we analyze the energy consumption during stage-decision with different ratios. When the ratio is set to 0, meaning all samples are processed by the full model, we regard this condition as the baseline to assess how much energy MicroT can save. Since Section 5.5 shows that the threshold calculated from five samples differs little from the exact threshold, we use the exact threshold here to determine the ratio. The results are shown in Table 5. We find that as the stage-decision ratio increases, MicroT achieves greater energy savings, with a minimum of 7.24% (ratio 0.25) and a maximum of 29.13% (ratio 1).
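The Table 5 numbers are consistent with a simple linear mixture of the two endpoint costs; a sketch under that assumption (a fraction `ratio` of samples stops at the part model, the rest also pays the full-model cost; the function name is illustrative):

```python
def expected_energy(ratio, e_full_path, e_part_only):
    """Expected per-sample energy when a fraction `ratio` of samples
    is finished by the part model alone and the remaining (1 - ratio)
    continues through the full model."""
    return (1 - ratio) * e_full_path + ratio * e_part_only

# MCUNet_r224 on STM32H7A3ZI (Table 5): 9.52 mJ at ratio 0, 6.92 mJ at ratio 1.
# The interpolated values match the table's intermediate rows:
for r, e_table in [(0.25, 8.87), (0.5, 8.22), (0.75, 7.57)]:
    assert abs(expected_energy(r, 9.52, 6.92) - e_table) < 0.01
```

At ratio 0.5 this gives an energy saving of (9.52 - 8.22) / 9.52, i.e. about 13.66%, matching the corresponding ES entry.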
For the standard ratio of 0.5, MicroT saves 14.47% of energy. This indicates that MicroT can adaptively and effectively save energy while enhancing the performance of multi-task models on MCUs. 6 Discussion and Future Work Key Findings. Our evaluation results showed that:
• SSKD can improve the model performance on multiple local tasks, with up to a 9.87% increase in accuracy compared with the SOTA.
• The model-segmentation fused score and joint training can effectively optimize the performance of the part and full models, laying the foundation for stage-decision.
• MicroT can achieve low-energy local training on MCUs, with the lowest energy consumption at 2.60 mJ for the part model and 2.94 mJ for the full model on the STM32H7A3ZI.
• The stage-decision ratio determines how many samples are processed only by the part model, and stage-decision can effectively save energy, up to 29.13% (ratio of 1). Under the standard ratio of 0.5, MicroT can save about 14.47% energy and improve model performance by 5.91%.
• Determining the stage-decision ratio by the confidence threshold is efficient and accurate. MicroT needs only five local samples on the MCUs to obtain the confidence threshold. At the standard stage-decision ratio of 0.5, compared to randomly selecting 50% of the samples, MicroT can improve model performance by 4.83%.
Model Architectures. MicroT is adaptable to various models. In addition to the models used in the experiments, SqueezeNet's Fire Module [65], MobileNet's Inverted Residual Structure [66], models from Neural Architecture Search, and LSTMs (regarding each LSTM layer as one module) can facilitate model segmentation by the fused score, after which the remaining steps of MicroT can be applied. Accelerating MCU Inference and Training. The efficiency of MicroT's local inference and training on MCUs can be further improved.
Integrating Sparse Updates [61], which minimize memory usage, could accelerate local training and inference within MicroT. This involves assessing parameter significance in both the part model and full model\u2019s shared sections. In the part model, only highly important parameters would be operational, while the full model would utilize all parameters. Such an approach could substantially decrease runtime, power, and energy usage, thereby improving MicroT\u2019s overall energy efficiency. Unsupervised Classifier. MicroT relies on labeled data for classifier training on MCUs, but exploring unsupervised classifiers presents an interesting direction. While our results show that EMO\u2019s [43] unsupervised classifier struggles with complex datasets, its potential merits further investigation. The unsupervised classifier\u2019s poor performance on complex datasets may be caused by the vast number of categories and the dense feature space. Nonetheless, refining the differentiation of categories in this feature space, perhaps through methods like contrastive learning [50], could enhance the classifier\u2019s performance. Such advancements would broaden MicroT\u2019s applicability. Multimodal Feature. MicroT focuses only on image data. However, the potential to handle diverse signal sources, like environmental sensor signals [67, 68], is essential for MCUs. Recent studies have investigated data and feature fusion techniques for multimodal data analysis on edge devices [69, 70]. MicroT could integrate its feature extractor with these fusion methods to map multimodal data into a unified feature space. This expansion not only widens MicroT\u2019s utility but also enhances its model robustness. Multiple Model Segmentation. MicroT splits the initial feature extractor into a part model and a full model to enable low-energy training and inference on MCUs. 
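This part/full split can be sketched as follows. This is a hypothetical Python stand-in (the block callables and `split_index` are illustrative; in MicroT the split point is chosen by the fused score):

```python
def segment_model(blocks, split_index):
    """Split a list of feature-extractor blocks (callables) into a
    part model (early blocks only) and a full model that reuses the
    part model's computation before running the remaining blocks."""
    shared = blocks[:split_index]
    tail = blocks[split_index:]

    def part_model(x):
        for block in shared:
            x = block(x)
        return x  # features for the part-model classifier

    def full_model(x):
        x = part_model(x)  # early computation is shared, not repeated
        for block in tail:
            x = block(x)
        return x  # features for the full-model classifier

    return part_model, full_model
```

Because the full model reuses the part model's output, escalating a low-confidence sample only costs the tail blocks, which is what makes stage-decision cheaper than always running the full extractor from scratch.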
In fact, MicroT can further improve the granularity of model segmentation by dividing the initial feature extractor into multiple part models, thus achieving higher adaptability. 7 Conclusion In this paper, we introduced MicroT, a practical, low-energy, and open-source framework to address the multi-task challenge of MCUs. MicroT leverages a powerful and tiny feature extractor trained by SSKD, along with classifiers that can be specifically trained for multiple local tasks. To further reduce energy consumption, MicroT utilizes model segmentation, joint training, and stage-decision. Our design supports a user-defined stage-decision ratio and threshold to adaptively balance model performance and energy consumption. Our experimental results show that MicroT improves model performance by up to 9.87% and reduces energy consumption by up to about 29.13%. Employing standard settings for the stage-decision ratio and threshold, MicroT can achieve a 5.91% improvement in model performance and an energy saving of about 14.47%. MicroT\u2019s adaptability, energy efficiency, and sufficient model performance make it a feasible solution for the multi-task challenge of MCUs." + }, + { + "url": "http://arxiv.org/abs/2404.09161v1", + "title": "Coreset Selection for Object Detection", + "abstract": "Coreset selection is a method for selecting a small, representative subset of\nan entire dataset. It has been primarily researched in image classification,\nassuming there is only one object per image. However, coreset selection for\nobject detection is more challenging as an image can contain multiple objects.\nAs a result, much research has yet to be done on this topic. Therefore, we\nintroduce a new approach, Coreset Selection for Object Detection (CSOD). CSOD\ngenerates imagewise and classwise representative feature vectors for multiple\nobjects of the same class within each image. 
Subsequently, we adopt submodular\noptimization for considering both representativeness and diversity and utilize\nthe representative vectors in the submodular optimization process to select a\nsubset. When we evaluated CSOD on the Pascal VOC dataset, CSOD outperformed\nrandom selection by +6.4%p in AP$_{50}$ when selecting 200 images.", + "authors": "Hojun Lee, Suyoung Kim, Junhoo Lee, Jaeyoung Yoo, Nojun Kwak", + "published": "2024-04-14", + "updated": "2024-04-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Coreset Selection for Object Detection", + "main_content": "Introduction In today\u2019s data-driven era, managing the sheer volume and variety of data presents a crucial challenge, particularly in areas like computer vision and deep learning, which deal with a tremendous amount of data [4, 22, 36]. With the advent of technologies such as autonomous vehicles and smart surveillance systems, accurate and efficient recognition of such image data has become paramount. One key strategy in managing these massive datasets involves \u2018coreset selection,\u2019 a method aimed at identifying a smaller, representative subset of the original dataset. This subset is then used to streamline complex computations and enhance processing efficiency. However, as illustrated in Figure 1, traditional coreset selection methods are unrealistic because they were developed under the na\u00efve assumption of a single object per image, a condition that real-world images do not often meet [1, 5, 10].
*Corresponding author
Figure 1. The difference in coreset selection between image classification and object detection.
Real-world images typically contain the
natural variability, such as multiple objects of various categories, sizes, and locations. This implies that we should develop methods that account for this natural variability. As Figure 1 illustrates, if we evaluate the suitability of an image on an object-by-object basis, considering one object suitable for training does not necessarily imply that the others are also suitable. In other words, the decision to select a core image should be based on all objects in the image. However, traditional methods, designed to solve only the single-image-single-object pairing, fail to consider this and therefore struggle in realistic conditions. In this paper, we address a significant limitation in the field of coreset selection, which has traditionally operated under the assumption of a single object per image. We introduce a realistic approach tailored to the more complex, yet common, scenario where images inherently contain multiple objects. This shift from single-object to multi-object consideration is a central advancement of our work. CSOD not only recognizes the presence of numerous objects within each image but also tackles the compounded
We employ a greedy approach to select individual data points sequentially by class order, thereby constructing the coreset step by step. Although this method considers only one class at each selection step, it guarantees that the most pertinent selections for each class are made, enhancing the representativeness and diversity of the coreset. Furthermore, to ensure that the selected subset informatively represents the entire dataset, we introduce a mathematical tool known as a \u2018submodular function,\u2019 as delineated by Krause and Golovin [17]. The function aids in selecting the most informative subset based on the imagewise-classwise average features for each category. Our empirical evaluations, particularly in scenarios involving the detection of multiple objects, demonstrate the effectiveness of CSOD. For instance, when selecting 200 images from the Pascal VOC dataset [8], our method achieved an impressive improvement of +6.4% point in AP50 compared to random selection. Moreover, we also evaluated it on the BDD100k [33] and MS COCO2017 [18] datasets and confirmed that our method outperforms random selection. These significant achievements emphasize the efficacy and innovativeness of CSOD in addressing the challenges of coreset selection in multi-object image data. In summary, CSOD represents a pivotal extension of existing coreset selection frameworks to encompass scenarios with multiple objects per image. This approach addresses a gap that traditional methods, which assumed only one object per image, did not cover. While our focus is on multi-objects of the same class, we acknowledge the potential for future expansions of our method to accommodate images featuring various categories of objects. With our unique problem recognition and solution, we aim to shift the paradigm of coreset selection towards more realistic scenarios, specifically image datasets containing multiple objects. 
This research transcends mere technical advancement, marking a pivotal shift in processing complex real-world datasets and opening new horizons for coreset selection, thereby addressing a significant challenge of the big data era. 2. Background and Prior works 2.1. Coreset selection Welling [31] introduced the concept of herding for iterative data point selection near class centers. Wei et al. [30] applied the submodular function to the Naive Bayes and Nearest Neighbor classifiers. We also adopt this function, so we provide further explanation in Section 2.3. Braverman et al. [1] and Huang et al. [12] modified statistical clustering algorithms like k-median and k-means to identify data points that effectively represent the dataset. Coleman et al. [5] utilized uncertainties measured by entropy or confidence. Huang et al. [13] theoretically explained the upper and lower bounds on the coreset size for k-median clustering in low-dimensional spaces. However, most previous research has focused on image classification, and to the best of our knowledge, our work is the first to design coreset selection specifically for object detection. 2.2. Dataset Distillation Coreset Selection and Dataset Distillation are crucial in enhancing model training efficiency, with the former selecting informative data points and the latter synthesizing data to distill the dataset's information. Despite their different approaches (selection versus synthesis), both methods aim to encapsulate data. Current Dataset Distillation research [3, 7, 20, 28, 35], primarily focused on image classification, presents unexplored potential in object detection. Advancements in Coreset Selection for object detection may have a significant influence on Dataset Distillation strategies for object detection. 2.3.
Submodular function A set function f : 2^V \u2192 R is considered submodular if, for any subsets A and B of V where A \u2286 B and x is an element not in B, the following inequality holds: f(\\mathcal{A} \\cup \\{x\\}) - f(\\mathcal{A}) \\geq f(\\mathcal{B} \\cup \\{x\\}) - f(\\mathcal{B}) (1) Here, \u2206(x|A) := f(A \u222a {x}) \u2212 f(A) represents the benefit of adding x to the set A. In simple terms, this inequality means that adding x to B provides less additional benefit than adding x to A. This is because B already contains some of the information that x can offer to A. Therefore, we can use submodularity to find a subset that maximizes the benefit of adding each element. However, in general, selecting a finite subset S with the maximum benefit is a computationally challenging problem (NP-hard) [17]. To address this, we employ a greedy algorithm that starts with an empty set and adds one element at a time. Specifically, S_i is updated as S_i = S_{i-1} \u222a {arg max_x \u2206(x | S_{i-1})}. For more information, please refer to Krause and Golovin [17].
Figure 2. The forward process during the training phase of Faster R-CNN. The RoI features include both foreground and background regions in the forward process.
2.4. Faster R-CNN Various object detectors exist, including Faster R-CNN [24], SSD [9], YOLO [23], and DETR [2]. We chose Faster R-CNN as our base model. This choice was motivated by its widespread adoption not only in supervised detection but also in various research areas such as few-shot detection [29], continual learning [27], and semi-supervised object detection [15]. Faster R-CNN operates as a two-stage detector.
As illustrated in Figure 2, the first stage employs a Region Proposal Network (RPN) to generate class-agnostic object candidate regions in the image, followed by pooling these regions to obtain Region of Interest (RoI) feature vectors. In the second stage, the model utilizes these RoI feature vectors for final class prediction and bounding box regression. Our research uses these RoI feature vectors for coreset selection. 2.5. Active Learning for Object Detection Active learning is concerned with selecting which unlabeled data to annotate and is thus related to coreset selection. In the context of active learning for object detection, Yuan et al. [34] proposed an uncertainty-based method that utilizes confidence scores on unlabeled data. Kothawade et al. [16] aimed to address the low performance on rare classes when conducting active learning. Their method extracted features of rare classes from labeled data and aimed to maximize the information of rare classes through a submodular function, computing the cosine similarity between these labeled features and the features of unlabeled data. 3. Method 3.1. Problem Setup We have an entire training dataset T = {x_i, y_i}_{i=1}^{D}. Here, x_i ∈ X is an input image, and y_i ∈ Y is a ground truth. Because these data are for object detection, y_i = {c_{i,j}, b_{i,j}}_{j=1}^{G_i} contains a variable number of annotations depending on the image. In the G_i annotations, c_{i,j} is a class index, and b_{i,j} = {b^{left}_{i,j}, b^{top}_{i,j}, b^{right}_{i,j}, b^{bottom}_{i,j}} denotes the coordinates of the j-th bounding box. Coreset selection aims to choose a labeled subset S ⊂ T that best approximates the performance of a model trained on the entire labeled dataset, T. In our approach, we prioritize the number of images over the number of annotations. This is because annotations typically consist of relatively few strings, and what primarily affects training time and data storage is the number of images rather than the number of annotations.
Algorithm 1 CSOD Pseudocode
Require: Training data T = {(x_i, y_i)}_{i=1}^{D} with C classes, where y_i = {(c_{i,j}, b_{i,j})}_{j=1}^{G_i} and G_i is the number of ground truth objects in the i-th image. Trained backbone f_θ, RoI pooler g, global average pooling function h.
Ensure: Selected subset S with size N
1: Initialize S_c = ∅, P_c = ∅, Q_c = ∅ for all c ∈ {1, ..., C}
2: Stage 1: Preparing Imagewise-Classwise Features
3: for i = 1 to D do
4:   RoI features R_i = {r_{i,j}}_{j=1}^{G_i} = h(g(f_θ(x_i), y_i))  ▷ Sec. 3.3
5:   for all classes c present in y_i do
6:     p_{i,c} = (1 / |{j | c_{i,j} = c}|) Σ_{j | c_{i,j} = c} r_{i,j}  ▷ Sec. 3.4
7:     Update P_c = P_c ∪ {p_{i,c}}
8:   end for
9: end for
10: Stage 2: Subset Selection
11: while |S| < N do
12:   for c = 1 to C do  ▷ Sec. 3.5
13:     Compute scores s_{i,c} = score(p_{i,c}, Q_c), ∀i  ▷ Eq. 4
14:     Select the image i* = arg max_i s_{i,c}
15:     Update S_c = S_c ∪ {(x_{i*}, y_{i*})}
16:     Update Q_{c'} = Q_{c'} ∪ {p_{i*,c'}}, ∀c' ∈ y_{i*}
17:     Remove p_{i*,c'} from P_{c'}, ∀c' ∈ y_{i*}
18:   end for
19:   Update S = ∪_{c ∈ {1,...,C}} S_c
20: end while
3.2. Overview CSOD picks out the most useful images by looking at one object category at a time. Below are the steps of CSOD: Preparing Object Features: We extract RoI feature vectors from the ground truth of the entire training set (Sec. 3.3). Then, we average the RoI features of the same class within one image (Sec. 3.4). Choosing the Best Images: We utilize the averaged RoI feature vectors to greedily select images one by one for each class in a rotating manner (Sec. 3.5). In doing so, the submodular optimization technique is introduced to ensure that the selection process considers both representativeness and diversity (Eq. 4). When we pick an image, we do not just use one object in it for training; we use all the objects it contains.
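The two stages outlined above can be sketched in a few lines; this is a simplified NumPy illustration of the averaging (Sec. 3.4) and scoring/greedy steps (Sec. 3.5), not the paper's implementation, and all function names are illustrative:

```python
import numpy as np

def classwise_average(roi_features, class_ids):
    """Stage 1 (Sec. 3.4): average the RoI feature vectors of the same
    class within one image into a single prototype per class."""
    return {
        c: roi_features[[j for j, cj in enumerate(class_ids) if cj == c]].mean(axis=0)
        for c in set(class_ids)
    }

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(p, pool, selected, lam=1.0):
    """Stage 2 (Eq. 4): reward similarity to the not-yet-selected
    prototypes of the class, penalize similarity to selected ones."""
    return (lam * sum(cosine(p, pj) for pj in pool)
            - sum(cosine(p, qj) for qj in selected))

def pick_next(pool, selected, lam=1.0):
    """One greedy step for one class: take the arg max of the score."""
    return max(range(len(pool)),
               key=lambda i: score(pool[i], pool, selected, lam))
```

In the full algorithm this greedy step rotates over classes, and the prototypes of every class present in a chosen image move into the selected set `Q`.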
Algorithm 1 provides the pseudocode, while Figure S1 in the supplementary material aids understanding. 3.3. Ground Truth RoI Feature Extraction With Faster R-CNN, we extract RoI feature vectors from training images using the ground truth (not the RPN output). If the i-th training image contains G_i ground truth objects, then we have G_i RoI feature vectors, R_i, as follows: \\mathcal{R}_i = \\{\\mathrm{\\boldsymbol{r}}_{i,j}\\}_{j=1}^{G_i} = h(g(f_{\\theta}(x_i), y_i)) (2) where x_i is an input image, y_i is a ground truth, f_θ is the backbone trained on the entire dataset, g is the RoI pooler, h is global average pooling, and r_{i,j} is the j-th RoI feature vector of the i-th image. 3.4. Imagewise and Classwise Average Once we have extracted all the RoI feature vectors for each image, we have a choice to make: for coreset selection, should we average the RoI feature vectors of the same class within a single image to create a single prototype vector representing that class for the image, or should we use these RoI feature vectors directly? As mentioned in Section 1, we chose the averaging approach. If R_i = {r_{i,j}}_{j=1}^{G_i} represents the RoI feature vectors for the i-th data with G_i ground truth objects, then the average RoI feature vector for class c in the i-th data, denoted as p_{i,c}, is calculated as follows: \\mathrm{\\boldsymbol{p}}_{i,c} = \\frac{1}{|\\{j \\mid c_{i,j}=c\\}|} \\sum_{\\{j \\mid c_{i,j}=c\\}} \\mathrm{\\boldsymbol{r}}_{i,j} (3) 3.5. Greedy Selection After obtaining the averaged RoI feature vectors, our selection process follows a greedy approach, iteratively choosing one data point from each class at a time. To facilitate this, we compute a similarity-based score for each RoI feature vector.
This scoring mechanism, based on a submodular function, assigns higher scores to RoI feature vectors that are similar to others within the same class and lower scores to those similar to RoI feature vectors that have already been selected. This strategy enables us to take previously selected data points into account when making new selections. The score s_{i,c} for the i-th data point within class c is computed as follows:

s_{i,c} = \lambda \cdot \sum_j \cos(\boldsymbol{p}_{i,c}, \boldsymbol{p}_{j,c}) - \sum_j \cos(\boldsymbol{p}_{i,c}, \boldsymbol{q}_{j,c})   (4)

The term "cos" denotes cosine similarity, p_{j,c} ranges over the averaged RoI feature vectors that have not yet been selected, and q_{j,c} over the previously selected RoI feature vectors. The hyperparameter λ balances the two contributions of the scoring function: the former term favors the most representative candidate among those not yet selected, while the latter term favors candidates different from what has already been selected. The experiment on λ can be found in Section 4.4.2. CSOD selects the data point with the maximum value of Eq. (4) for each class. If a chosen data point contains multiple classes, the features of those classes are added to the previously selected q's. The method systematically cycles through each class, ensuring unique selections, until it reaches the targeted number of choices.

4. Experiments

In this section, we empirically validate the effectiveness of CSOD through experiments. First, we will show that CSOD outperforms various random selections and other coreset selection methods originally designed for image classification. We will then investigate the trends associated with the number of selected images and the hyperparameter λ of Eq. (4).
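A small sketch of Eq. (4) makes the role of λ concrete; the minus sign follows the description above (representativeness rewarded, redundancy with already-selected prototypes penalized), and the helper name is ours:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def submodular_score(p, unselected, selected, lam):
    """Eq. (4): s = lam * sum_j cos(p, p_j) - sum_j cos(p, q_j)."""
    rep = sum(cos(p, pj) for pj in unselected)  # representativeness
    red = sum(cos(p, qj) for qj in selected)    # redundancy penalty
    return lam * rep - red
```

With a large λ the score is dominated by representativeness (candidates near the bulk of their class win); with a small λ the penalty dominates, so a candidate far from everything already selected wins.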
Additionally, we will compare performance when averaging the RoI features of a class within an image versus using the individual features as they are. Furthermore, we will extend our analysis to different datasets and various network architectures.

4.1. Implementation details

We conducted experiments on Pascal VOC 2007+2012 [8], using the trainval set for selection and training, and the VOC07 test set for evaluation. The metric is Average Precision at IoU 0.5 (AP50). For ablation and analysis, we chose 200 images from 20 classes, training for 1000 iterations. We averaged performance over 20 runs due to the limited number of images. We used Faster R-CNN-C4 [24] with ResNet50 [11]. For the selection phase, we used the model weights trained on VOC07+12 provided by detectron2 [32]. After selection, a new model was trained on the chosen subset, with a backbone pre-trained on ImageNet [6]. For further details, see Section S1 in the supplementary material.

4.2. Comparison with Other Selections

Comparison Targets. Table 1 shows the comparison list. Random refers to the method where one image per class is randomly selected in turn. A single image can contain multiple classes, and when selecting in turn by class, images were chosen without duplication to reach the target number of images. Additional experiments on random selection are conducted in Section 4.3. Coreset selection methods for image classification, such as Herding [31], k-Center Greedy [25], and a Submodular function [14], were also compared. The CSOD network weights were employed, and the backbone feature was globally average-pooled to obtain the feature vector used for selection. As with random selection, images were evenly chosen from each class.

Result. As seen in Table 1, our method consistently achieves the highest results.
Random selection shows higher performance than some existing methods, which implies that these methods are designed only for the image classification task and yield lower results in object detection. The submodular function showed some effectiveness when the number of data points was low. However, as discussed in Section 1 and Figure 1, its modeling fundamentally cannot account for multiple objects of various sizes and locations. Therefore, it not only fell below random selection when selecting over 500 images but also fell significantly below CSOD.

Selection Method            20          100         200         500         1000
Random (Uniform)            9.8 ± 2.2   27.9 ± 1.6  37.9 ± 1.1  50.7 ± 1.0  58.4 ± 0.6
Herding [31]                4.1 ± 0.7   17.7 ± 1.2  26.0 ± 0.8  37.8 ± 0.7  46.4 ± 0.7
k-Center Greedy [25]        10.0 ± 1.3  21.8 ± 2.1  32.3 ± 0.9  47.4 ± 1.0  55.9 ± 0.3
Submodular Function [14]    12.9 ± 0.9  30.5 ± 1.3  38.6 ± 0.9  48.8 ± 0.7  55.8 ± 0.3
CSOD (Ours)                 14.5 ± 1.6  34.4 ± 1.0  44.3 ± 0.7  54.1 ± 0.7  60.6 ± 0.4

Table 1. Comparison with random selection and coreset selection methods designed for image classification, reporting AP50 on the VOC07 test data. We ran all experiments 20 times, with ± indicating the standard deviation. Note that the standard deviation is shown to make clear that the performance gap is significant. Herding, Submodular, and CSOD select subsets deterministically, with only network weight initialization affected by random seeds.

Figure 3. Comparison with various selection methods. '#' denotes the number of objects in the selected data.

Based on these results, subsequent comparison experiments are conducted against random selection.

4.3. Comparison with Random Selections

Figure 3 shows that our approach consistently outperforms other selection methods when selecting 200 images.
Notably, in this comparison, "# max" and "Ours" are the only methods without randomness, while the rest incorporate some degree of randomness. Therefore, we did not specifically address the performance variance of each selection method. Our method was implemented with a fixed set and without randomness, so its performance variance comes only from the training process. This experiment's significance lies in the clearly higher performance of our method compared with the others.

We categorized random selection into several methods. "Full random" selects 200 images randomly, but repeats the process if any class has no objects in those 200 images. "Uniform" and "Ratio" sample images one by one for each class until 200 images are selected (sampling without replacement). In these cases, images selected for one class are excluded from selection for other classes, as an image can contain objects of multiple classes. "Uniform" distributes images evenly with 10 per class, while "Ratio" selects images in proportion to the number of images per class. The CSOD result consists of 1,032 annotations. Therefore, we also experimented with random selection while controlling for the number of annotations: "# 700-1100" limits annotations to this range using the Uniform method, and "# max" also follows Uniform but selects images in descending order of annotation count rather than randomly.

Figure 4. Ratio of box sizes. We followed the size criteria provided by VOC. '#' denotes the number of objects in the selected data.

4.3.1 The performance and object size ratio

Figure 4 shows the relationship between box size, object count, and performance. We conducted two comparisons. First, we compared Uniform and CSOD.
We observed that the ratio of Uniform was closer to that of the entire dataset in terms of KL divergence, while Ours had more annotations. Second, we compared CSOD and "# 1-1.1k". Both methods had similar object counts, but CSOD's box-size ratio was closer to that of the training data. While we cannot definitively assert causality, it appears that a well-represented subset with an equal number of images correlates with both box size and object count.

4.4. Analysis of the number of images and the hyperparameter λ

4.4.1 The number of selected images

Figure 5 shows performance as a function of the number of selected images. Since selecting 20 images means only one image per class is selected, λ is meaningless in that case. For the other cases (100, 200, 500, and 1000), we set λ to (0.0125, 0.04375, 0.0625, and 0.025), respectively.

Figure 5. Performance according to the selected image counts.

Figure 6. AP50 and λ. Dashed lines represent random selection performance.

Compared to random selection, we observe that as the number of selected images increases, the performance gap naturally decreases, but it consistently remains at a high level.

4.4.2 Balance hyperparameter λ

Figure 6 illustrates the relationship between performance and λ in Eq. (4). A high λ value (1e+10) means selecting images based solely on cosine similarity, prioritizing representative images. In contrast, a small λ value (1e-10) means first selecting one image per class with the highest cosine similarity and then selecting images that are as dissimilar as possible from those already selected. In other words, it emphasizes diversity from a cosine-similarity viewpoint.
The following observations can be made: First, our approach outperforms random selection when λ is above a certain threshold. Second, it is better to consider both representativeness and diversity by appropriately tuning λ than to select images purely by the order of cosine similarity (1e+10). Lastly, the optimal λ value varies depending on the number of images to be selected, since the greedy selection process (Section 3.5) progressively increases the number of selected images. Please refer to Table S1 in the supplementary material for the AP50 values corresponding to each λ value.

                          The number of changes in the image list
                          0        18       32       45       113
Objectwise          λ     1e+10    0.125    0.075    0.051    0.015
                    AP50  40.4     40.3     40.7     41.4     39.0
Imagewise (Ours)    λ     1e+10    0.100    0.050    0.038    0.013
                    AP50  42.1     43.3     43.5     43.8     41.5

Table 2. Comparison of the two methods, "Imagewise" (averaging RoI vectors) and "Objectwise" (not averaging), for selecting 200 images. The number of changes in the image list is measured with λ = 1e+10 as the reference point (the smaller λ, the more severe the change). λ is rounded to the fourth decimal place.

                      20     40     60     80     100    200
Objectwise            13.0   20.6   25.8   29.1   31.5   40.4
Imagewise (Ours)      14.2   23.1   27.3   30.1   32.9   42.1

Table 3. λ is 1e+10 in all cases, meaning that selection was based solely on cosine similarity ranking.

4.5. Effectiveness of Averaging RoI feature vectors

4.5.1 Performance comparison

Table 2 compares performance when averaging the RoI feature vectors of the same class (Imagewise) versus not averaging them (Objectwise). The two cases have different balance strengths for λ, as Imagewise averages within the same class, resulting in significantly fewer RoI feature vectors. Therefore, we compared the extent to which the image list changes, using 1e+10 as the reference point. Remarkably, we observed consistently high performance regardless of the λ value.
However, even Objectwise outperformed random selection, achieving results above the 37.5 AP50 of random selection.

4.5.2 Representativeness: Objectwise vs. Imagewise feature vector

Table 3 compares Objectwise and Imagewise by the number of selected images when λ = 1e+10. This experiment highlights that even if the cosine similarity of a single object within an image is exceptionally high, that image may not effectively represent the overall distribution of the data. For example, the case where only one image per class is selected (20 in the table) indicates how well a single image represents the corresponding class. The table shows the superiority of our imagewise selection over objectwise selection.

4.5.3 Visualization of objectwise selection

Figure 7 illustrates a limitation of the objectwise approach, where a selected image may not effectively represent the entire dataset.

Figure 7. Examples of the Objectwise selection (panels annotated "Selected" and with rankings {8k, 10k, 11k, 13k, 14k} / 16k, 3929 / 4008, and 3700 / 4008). Top: 'car' class. Bottom: 'person' class. The red dotted boxes indicate objects with low rankings in Eq. (4).

Even if an image is selected because it contains an object with high cosine similarity, this does not guarantee that other objects within the same image will have similarly high cosine similarities. In other words, the cosine similarity of one object in an image with all the other objects in the entire dataset may not accurately represent the cosine similarities of all objects in that image.

4.5.4 Why imagewise (averaging) selection over objectwise selection?

Let us compare object counts and size ratios as in Section 4.3.1. In our Imagewise approach, there are 1,032 objects in the 200 selected images, higher than the 806 of the Objectwise approach.
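The size-ratio KL-divergence figures quoted in this subsection can be checked with a short sketch. The direction KL(subset ‖ full) and the helper name are our assumptions:

```python
import numpy as np

def size_ratio_kl(subset_ratio, full_ratio):
    """KL(subset || full) over (small, medium, large) box-size ratios.

    Ratios are normalized to sum to 1 before computing the divergence.
    """
    p = np.asarray(subset_ratio, dtype=float)
    q = np.asarray(full_ratio, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

Under this assumed direction, plugging in the subset and full-data ratios reported below reproduces Objectwise's lower divergence (≈0.006) against Imagewise's.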
Additionally, considering the size ratios (small, medium, large), the Imagewise approach yields (7.3%, 38.2%, 54.5%), whose large-object ratio is closer to that of the entire training data, (10.5%, 32.0%, 57.5%), than the Objectwise approach's (10.2%, 37.2%, 52.6%). When we calculated the KL divergence between the distributions of the selected images and the entire training data, however, we found that Objectwise had a KL divergence of 0.006, lower than Imagewise's 0.013.

Object count    1       2-4     5 or more
Large           0.866   0.931   0.939
Medium          0.862   0.915   0.931
Small           0.798   0.818   0.831

Table 4. Cosine similarity between the entire average feature and the average feature of each image by size in the 'person' class.

This suggests that, in the comparison of Imagewise and Objectwise, the number of annotations played a more significant role in performance than the size ratio. Despite the higher KL divergence for Ours, there were substantial differences in the number of annotations per size: Ours had counts of (75, 395, 562) for each size, whereas Objectwise had (82, 300, 424). We formulated the hypothesis that "as the number of objects within an image increases and their sizes grow larger, the cosine similarity between the class's overall average RoI vector (class prototype) and the image's average RoI vector (image prototype) for that class will be higher." To validate this hypothesis, we conducted the experiment presented in Table 4. First, we averaged all RoI vectors of the 'person' class (class prototype). Then, we formed an averaged RoI vector per size within each image (imagewise-sizewise prototype). We subsequently computed the cosine similarity between the class prototype and the imagewise-sizewise prototypes. The results confirmed that the Imagewise approach leads to a higher selection of larger objects, resembling the entire dataset.

4.6.
Evaluation on the BDD100k dataset

BDD100k [33] is a significant dataset for autonomous driving, consisting of 100k images, 1.8M annotations, and 10 classes. Following the official practice, we split off 70k images with 1.3M annotations for training. Table 5 shows the results on the validation data, illustrating that our method consistently achieves higher AP50, AP75, and AP than random selection. Notably, similar performance improvements were observed in our experiments on the VOC dataset. However, BDD100k, explicitly designed for real-world autonomous driving applications, offers a more challenging and realistic benchmark. The fact that our method excels even in the challenging environment of BDD100k further demonstrates its effectiveness and practicality. For implementation details, kindly refer to Section S2 in the supplementary material.

num img            AP50   AP75   AP
 200   Random      25.8    9.7   12.0
       Ours        29.0   10.8   13.5
       Δ           +3.2   +1.1   +1.5
 500   Random      32.2   13.4   15.8
       Ours        35.1   14.9   17.5
       Δ           +2.9   +1.5   +1.7
1000   Random      37.1   16.2   18.6
       Ours        39.4   17.8   20.1
       Δ           +2.3   +1.6   +1.5
2000   Random      42.1   19.7   22.0
       Ours        43.7   21.0   23.2
       Δ           +1.6   +1.3   +1.2

Table 5. BDD100k results.

4.7. Evaluation on the COCO dataset

The COCO2017 dataset [18] encompasses 80 classes, partitioned into 118K training images and 5K validation images. Experiments were conducted with subsets of 400 and 800 images selected from the training data. Table 6 shows the outcomes on the validation set.

num img            AP50   AP75   AP
 400   Random      15.1    4.9    6.6
       Ours        16.7    5.8    7.5
       Δ           +1.6   +0.9   +0.9
 800   Random      19.4    7.5    9.1
       Ours        20.1    8.2    9.6
       Δ           +0.7   +0.7   +0.5

Table 6. COCO2017 results.

Despite the substantially larger size of this dataset compared to VOC, our method remained effective. For reproducibility, please refer to Section S3 in the supplementary material. The BDD dataset has more annotations despite having fewer images compared to COCO.
Interestingly, the performance improvement margin on COCO was smaller than on BDD. This observation raises several points for consideration. The BDD dataset, focused on autonomous driving, predominantly contains outdoor scenes with less diversity, such as in perspective, than COCO. Conversely, COCO spans a wider spectrum, including both indoor and outdoor scenes and a larger variety of classes. This diversity potentially makes it more challenging to accurately represent the entire data distribution with a subset of images. This observation not only clarifies our current results but also highlights this as a key area for future study.

4.8. Cross-architecture evaluation

Table 7 presents an experiment in which we assessed whether the 500 images selected using Faster R-CNN remained effective for different networks, namely RetinaNet [19] and FCOS [26]. We were able to confirm the effectiveness of images selected with Faster R-CNN for these other networks as well.

          RetinaNet   FCOS
Random    54.5        47.9
Ours      58.3        53.1
Δ         +3.8        +5.2

Table 7. Cross-architecture experiment. We trained the models on 500 VOC images and report AP50.

Unlike Faster R-CNN, these two networks often encountered training issues due to loss explosion when following their respective default hyperparameters. Therefore, we adjusted hyperparameters such as the learning rate and gradient clipping; importantly, the hyperparameters for random selection and our method remained identical. Please refer to Section S4 in the supplementary material for reproducibility.

5. Discussion

Conclusion. We have proposed a coreset selection method for object detection, addressing the unique challenges presented by multi-object and multi-label scenarios. This stands in contrast to traditional image classification approaches. Our approach considers both representativeness and diversity while taking into account the difficulties we have outlined in Section 1 and illustrated in Figure 1.
Through experiments, we have demonstrated the effectiveness of our method and its applicability to various architectures. We hope this research will develop further and find applications in diverse areas, such as dataset distillation.

Limitation. While our research leveraged RoI features from ground-truth boxes and achieved promising results, certain limitations should be noted. First, we did not explicitly incorporate background features, which could provide additional context and potentially enhance coreset selection in object detection; future research could explore their explicit utilization. Second, although our class-by-class greedy selection accounts for the RoI features of the current class even when they were selected during another class's turn, it does not simultaneously incorporate the features of other classes within the same image. Further research could explore ways to capture interactions between different classes within a single image more effectively.

Future work. Since CSOD considers localization, aspects of it may apply to other localization-related tasks, such as 3D object detection. Furthermore, while dataset distillation has predominantly been studied in the context of image classification, it could also become a subject of research in the field of object detection datasets.

Acknowledgements. This work was supported by an NRF grant (2021R1A2C3006659) and IITP grants (2022-0-00953, 2021-0-01343), all funded by MSIT of the Korean Government." + } + ] +} \ No newline at end of file