AcademicEval / abs_28K /test_abstract_long_2404.16348v2.json
{
"url": "http://arxiv.org/abs/2404.16348v2",
"title": "Dual Expert Distillation Network for Generalized Zero-Shot Learning",
"abstract": "Zero-shot learning has consistently yielded remarkable progress via modeling\nnuanced one-to-one visual-attribute correlation. Existing studies resort to\nrefining a uniform mapping function to align and correlate the sample regions\nand subattributes, ignoring two crucial issues: 1) the inherent asymmetry of\nattributes; and 2) the unutilized channel information. This paper addresses\nthese issues by introducing a simple yet effective approach, dubbed Dual Expert\nDistillation Network (DEDN), where two experts are dedicated to coarse- and\nfine-grained visual-attribute modeling, respectively. Concretely, one coarse\nexpert, namely cExp, has a complete perceptual scope to coordinate\nvisual-attribute similarity metrics across dimensions, and moreover, another\nfine expert, namely fExp, consists of multiple specialized subnetworks, each\ncorresponds to an exclusive set of attributes. Two experts cooperatively\ndistill from each other to reach a mutual agreement during training. Meanwhile,\nwe further equip DEDN with a newly designed backbone network, i.e., Dual\nAttention Network (DAN), which incorporates both region and channel attention\ninformation to fully exploit and leverage visual semantic knowledge.\nExperiments on various benchmark datasets indicate a new state-of-the-art.",
"authors": "Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao",
"published": "2024-04-25",
"updated": "2024-04-29",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Zero-shot learning has consistently yielded remarkable progress via modeling\nnuanced one-to-one visual-attribute correlation. Existing studies resort to\nrefining a uniform mapping function to align and correlate the sample regions\nand subattributes, ignoring two crucial issues: 1) the inherent asymmetry of\nattributes; and 2) the unutilized channel information. This paper addresses\nthese issues by introducing a simple yet effective approach, dubbed Dual Expert\nDistillation Network (DEDN), where two experts are dedicated to coarse- and\nfine-grained visual-attribute modeling, respectively. Concretely, one coarse\nexpert, namely cExp, has a complete perceptual scope to coordinate\nvisual-attribute similarity metrics across dimensions, and moreover, another\nfine expert, namely fExp, consists of multiple specialized subnetworks, each\ncorresponds to an exclusive set of attributes. Two experts cooperatively\ndistill from each other to reach a mutual agreement during training. Meanwhile,\nwe further equip DEDN with a newly designed backbone network, i.e., Dual\nAttention Network (DAN), which incorporates both region and channel attention\ninformation to fully exploit and leverage visual semantic knowledge.\nExperiments on various benchmark datasets indicate a new state-of-the-art.",
"main_content": "Introduction Recognizing unknown categories in the open environment is a critical challenge for automatic recognition systems. ZeroShot Learning (ZSL) [Lampert et al., 2009] that serves as a promising solution has received increasing attention, which is inspired by human text-to-image reasoning capabilities. The objective of ZSL is to transfer the visual knowledge of seen classes to the unseen domain by virtue of shared semantic information, thus empowering the model to recognize the unseen classes. More trickily, Generalized Zero-Shot Learning (GZSL) [Chao et al., 2016] requires recognizing samples \u2217Corresponding author: Jingcai Guo. \u2020: Equal contribution. (a) cExp (b) fExp crown eye bill \u00b7\u00b7\u00b7 belly breast wing \u00b7\u00b7\u00b7 belly wing breast \u00b7\u00b7\u00b7 torso: crown bill eye \u00b7\u00b7\u00b7 head: Figure 1: (a) cExp, also the common practice in existing works, possesses complete attribute-awareness capability yet lacks the ability to process fine-grained semantic information. (b) fExp, which consists of multiple specialized sub-networks, lacks a global perception field. from both seen and unseen classes in the inference phase. Mainstream studies broadly follow two routes, generative [Xian et al., 2018][Xie et al., 2022][Li et al., 2023] and embedding techniques [Zhang et al., 2017][Liu et al., 2020][Chen et al., 2021b], where most of the schemes are devoted to mining and constructing class-wise visual-attribute relations. To strengthen the fine-grained perceptual capabilities of the model, recent research has invested considerable effort into modeling local-subattribute correlations [Xie et al., 2019][Huynh and Elhamifar, 2020][Xu et al., 2020]. The motivation is to build a refined pairwise relation map via searching and binding subattributes and the corresponding region visual features (Figure 1 (a)). Despite their contribution to boosting performance, the inherent asymmetry of attributes remains undiscussed, and the channel information is not fully exploited. The asymmetry of attributes stems from the fact that 1) the semantic dimensions between attributes are heterogeneous or even antagonistic. Take the SUN dataset [Patterson and Hays, 2012] as an example, where 38 attributes (studying, playing, etc.) describe the function of one scene, while 27 attributes arXiv:2404.16348v2 [cs.CV] 29 Apr 2024 \f(trees, flowers, etc.) describe the entities in the scene. It can be obviously observed that the former are abstract and global, while the latter are concrete and local; 2) the visual features corresponding to attributes are intertwined. For example, neighboring regions tend to be more semantically similar, a phenomenon that is exacerbated by the local information fusion mechanism of the convolutional kernel, which leads to difficulties in accurately locating fine-grained attributes such as head, crown, and so on. In this paper, we revisit the task of modeling visualattribute relations from the perspective of attribute annotations. Given the inherent complexity of attribute descriptions, existing learning paradigms are virtually forcing a single model to undertake a multi-objective hybrid task, which is ideally appealing yet empirically challenging. Naturally, we employ the idea of divide-and-conquer to release the pressure of a single model. We meticulously decompose the hybrid task into multiple subtasks, i.e., dividing the attributes into multiple disjoint clusters and assigning specialized learnable networks to them. 
Our approach is referred to as the Dual Expert Distillation Network, abbreviated DEDN. As shown in Figure 1, our approach sets up two experts. cExp, in line with common practice, is equipped with complete attribute perception capability to harmonize holistic visual-attribute measure results. fExp consists of multiple subnetworks, where each subnetwork is only responsible for capturing the characteristics of a specific attribute cluster. During the training phase, we encourage the two to learn cooperatively to compensate for their respective deficiencies in a mutually distilling manner. The decision results of the two experts are combined for final inference. For the issue of underutilized channel information, we design a novel attention network, Dual Attention Network (DAN), as the backbone. DAN employs a dual-attention mechanism that fully exploits the potential semantic knowledge of both regions and channels to facilitate more precise visual-attribute correlation metrics. To further boost performance, we present Margin-Aware Loss (MAL) as the training loss function to address the confidence imbalance between seen and unseen classes. Our contributions are summarized below: • We rethink the issue of modeling visual-attribute relations from the perspective of attribute annotations and point out that the inherent complexity of attributes is one of the major bottlenecks. We propose a simple yet effective strategy of establishing two experts working on distinct attribute perception scopes to learn and infer collaboratively in a complementary manner. • We present a novel attention network, dubbed DAN, which incorporates both region and channel attention information to better capture correlations between visuals and attributes. Furthermore, a new learning function named MAL is designed to balance the confidence of seen and unseen classes. • We conduct extensive experiments on mainstream evaluation datasets, and the results show that the proposed method effectively improves the performance. 2 Related Work In ZSL/GZSL, attributes are the only ties that bridge seen and unseen classes, hence exploring and constructing the link between visuals and attributes is a core subject. Existing methods fall into class-wise visual-attribute modeling, which treats both visual features and attribute vectors as a whole, and region-wise visual-subattribute modeling, which seeks to explore the correlation between local visual information and subattributes. 2.1 Class-wise Visual-Attribute Modeling Mainstream research broadly follows two technical routes, generative and embedding techniques. Generative techniques utilize the latent distribution fitting ability of generative models such as GAN and VAE to implicitly learn the relationship between attributes and categories to construct hallucinatory samples of unseen classes [Xian et al., 2018][Verma et al., 2018][Felix et al., 2018][Li et al., 2019][Vyas et al., 2020][Keshari et al., 2020][Xie et al., 2022][Li et al., 2023]. The technical bottleneck of this route is the poor realism of the hallucinatory samples; thus many studies incorporate other techniques, such as meta-learning [Yu et al., 2020] and representation learning [Li et al., 2021][Chen et al., 2021c][Chen et al., 2021a][Han et al., 2021][Kong et al., 2022], for joint training. Embedding techniques aim at projecting visual and attribute features to a certain space, from which the most similar semantic information is searched.
In general, embedding techniques are categorized into three directions: visual-to-attribute space [Changpinyo et al., 2016][Kodirov et al., 2017][Liu et al., 2020][Chen et al., 2022a], attribute-to-visual space [Zhang et al., 2017][Annadani and Biswas, 2018], and common space [Liu et al., 2018][Jiang et al., 2019]. Researchers in the first two directions invest considerable effort in designing robust mapping functions to cope with domain shift and out-of-distribution generalization problems. The third direction centers on finding a suitable semantic space. Class-level visual-attribute modeling lacks the fine-grained perceptual ability to respond to interactions between local visual features and subattributes. 2.2 Region-wise Visual-Attribute Modeling Region-wise modeling is a promising direction in embedding techniques. Unlike other embedding approaches, region-wise modeling focuses on the correlation between local information and subattributes to build more detailed mapping functions. Models based on attention mechanisms are the dominant means in this direction, motivated by training models to search for corresponding visual features based on semantic vectors. Recent approaches include feature-to-attribute attention networks [Xie et al., 2019][Huynh and Elhamifar, 2020], bidirectional attention networks [Chen et al., 2022b], and multi-attention networks [Zhu et al., 2019]. In addition, some studies resort to prototype learning, where the goal is to explicitly learn the corresponding prototypical visual features of individual subattributes, thus aiding the model's judgment [Xu et al., 2020][Wang et al., 2021]. Further, modeling the topological structure between regional features with the help of graph convolution techniques also yields promising results [Xie et al., 2020][Guo et al., 2023]. Figure 2: Left: cExp possesses the scope of a holistic attribute set, while fExp consists of multiple sub-networks, each of which is responsible for the prediction of only partial attributes. We concatenate all outputs of subnetworks as the final result of fExp. Then, distillation loss is implemented to facilitate joint learning. Right: The architecture of DAN. While the main idea of these approaches is to design appropriate attention networks or regularization functions, ignoring the inherent complexity of attribute annotations, we provide a new perspective on the visual-attribute modeling problem. In addition, existing region-attribute methods, although achieving good results, neglect the utilization of channel information, so we design a new attention network that utilizes both region and channel information. 3 Methodology 3.1 Preliminary Following previous studies [Chen et al., 2022b][Li et al., 2023], we adopt a fixed feature extractor, ResNet-101 [He et al., 2016], to extract visual features. Suppose D^s = {(F_i^s, Y_i^s)} denotes the seen classes, where F_i^s is the visual feature and Y_i^s denotes its label. Note that F ∈ R^{C×H×W}, where C, H, and W are the channel number, height, and width, respectively. Similarly, let D^u = {(F_i^u, Y_i^u)} denote the unseen classes.
Normally, the visual features of the unseen classes are not accessible during the training phase. In addition, we have the shared attribute matrix A ∈ R^{K×D}, where K denotes the total number of categories and D denotes the number of attributes. We also use the semantic vector of each attribute learned by GloVe, denoted by V ∈ R^{D×G}, where G denotes the dimension of the vector. 3.2 Overview Our approach is shown in Figure 2 (Left). First, we disassemble the attribute set into multiple clusters based on their characteristics. Then the attribute vectors and the visual feature are fed into cExp and fExp simultaneously. cExp directly computes the scores of all attributes on that visual feature, while the scores of fExp are obtained by combining the computation results of each subnetwork. We constrain the two to learn from each other using a distillation loss. Meanwhile, we introduce DAN as the backbone and MAL as the optimization objective. 3.3 Dual Attention Network First we introduce the proposed novel backbone network, Dual Attention Network (DAN). Mining and constructing relations between visual features and attributes is crucial for zero-shot learning. Recently, many works have been devoted to modeling the association between regions and attributes, such as attention-based approaches [Xie et al., 2019][Huynh and Elhamifar, 2020][Chen et al., 2022b] and prototype-based techniques [Xu et al., 2020][Wang et al., 2021]. However, these methods only focus on the semantic information of regions and ignore the role of channels. Therefore, DAN incorporates the attention information of both regions and channels to promote the efficacy of the model in utilizing visual features. As shown in Figure 2 (Right), DAN contains two parallel components that model region-attribute and channel-attribute relations, respectively. We first introduce the region-attribute component. We have visual features F ∈ R^{C×H×W}, which are flattened to F ∈ R^{C×R}, where R = H×W denotes the number of regions. Let W_1, W_2 ∈ R^{G×C} denote two learnable matrices. W_1 maps the attribute vectors to the visual space and computes their similarity. The formula is expressed as: S_r = V W_1 F, (1) where S_r ∈ R^{D×R} represents the score obtained by each attribute on each region. W_2 is in charge of computing the attention weights that encourage the model to focus on the region-attribute pairs with the highest similarity. The formula is expressed as: A_r = V W_2 F / Σ_{r∈R} (V W_2 F)_r, (2) where A_r ∈ R^{D×R} denotes the normalized weights obtained by softmax. Then we naturally get the weighted matrix of scores, represented as: O_r = Σ_R S_r × A_r, (3) where O_r ∈ R^D represents the similarity score obtained by each attribute on a visual feature. Next, we introduce the channel-attribute component, which follows a similar principle. We have the scaled visual feature F ∈ R^{R×C} and W_3, W_4 ∈ R^{G×R}. W_3 is charged with calculating the similarity score obtained by each attribute on each channel, formulated as: S_c = V W_3 F, (4) where S_c ∈ R^{D×C}. And W_4 computes its attention weights: A_c = V W_4 F / Σ_{c∈C} (V W_4 F)_c, (5) where A_c ∈ R^{D×C}. Finally, we get the weighted score map: O_c = Σ_C S_c × A_c, (6) where O_c ∈ R^D. We expect the final scores of attributes from different scale features to be consistent, i.e., semantic consistency. A minimal sketch of the two attention branches follows.
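Below is a minimal PyTorch reading of Eqs. (1)-(6) and the inference fusion of Eq. (8). Shapes follow the text (V ∈ R^{D×G}, flattened F ∈ R^{C×R}); the module name DualAttention, the random initialization, and the example dimensions are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Sketch of DAN: region-attribute and channel-attribute branches."""
    def __init__(self, G: int, C: int, R: int):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(G, C) * 0.01)  # region similarity, Eq. (1)
        self.W2 = nn.Parameter(torch.randn(G, C) * 0.01)  # region attention,  Eq. (2)
        self.W3 = nn.Parameter(torch.randn(G, R) * 0.01)  # channel similarity, Eq. (4)
        self.W4 = nn.Parameter(torch.randn(G, R) * 0.01)  # channel attention,  Eq. (5)

    def forward(self, V, F):
        # V: (D, G) attribute vectors; F: (C, R) flattened visual feature
        S_r = V @ self.W1 @ F                          # (D, R), Eq. (1)
        A_r = torch.softmax(V @ self.W2 @ F, dim=-1)   # (D, R), normalized over regions, Eq. (2)
        O_r = (S_r * A_r).sum(dim=-1)                  # (D,),  product-and-sum, Eq. (3)

        Ft = F.t()                                     # (R, C) scaled view of the feature
        S_c = V @ self.W3 @ Ft                         # (D, C), Eq. (4)
        A_c = torch.softmax(V @ self.W4 @ Ft, dim=-1)  # (D, C), normalized over channels, Eq. (5)
        O_c = (S_c * A_c).sum(dim=-1)                  # (D,),  Eq. (6)
        return O_r, O_c

D, G, C, R = 312, 300, 2048, 49
dan = DualAttention(G, C, R)
O_r, O_c = dan(torch.randn(D, G), torch.randn(C, R))
lam_rc = 0.8
O = lam_rc * O_r + (1 - lam_rc) * O_c              # inference fusion, Eq. (8)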
Therefore, we employ L_align, which contains a Jensen-Shannon Divergence (JSD) term and a Mean Squared Error, to align the outputs of both, formulated as: L_align = (1/2)(L_KL(O_r‖O_c) + L_KL(O_c‖O_r)) + ‖O_r − O_c‖_2^2, (7) where L_KL denotes the Kullback-Leibler Divergence. In the inference phase, we use the weighted sum of O_r and O_c as the final output, expressed as: O = λ_rc × O_r + (1 − λ_rc) × O_c, (8) where λ_rc is a hyperparameter. 3.4 Dual Expert Distillation Network Despite the fact that DAN enhances the modeling capability of the network, it is extremely challenging for a single model to simultaneously handle attributes with different semantic dimensions as well as visual features with different granularities. To this end, we propose the Dual Expert Distillation Network (DEDN) to alleviate the pressure on a single network (Figure 2 (Left)). cExp is set up with a complete attribute-aware scope, as in conventional practice. Specifically, the input of cExp is the semantic vectors of all attributes, and the output is the similarity scores of all attributes. Denoting cExp by φ_ec = {W_1^ec, W_2^ec, W_3^ec, W_4^ec}, the output is defined as: O_ec = φ_ec(V, F), (9) where O_ec ∈ R^D and V ∈ R^{D×G}. fExp consists of multiple subnetworks, each focusing on a specific attribute cluster. First, we elaborate on how the attribute clusters are divided. Since attribute annotations are manually labeled based on semantics, they are inherently clustered in nature. For example, in the SUN dataset [Patterson and Hays, 2012], the top 38 prompts are used to describe the scene function. Therefore, it is easy to perform the division by human operation, ChatGPT [Radford et al., 2018], or a clustering algorithm. It requires a trivial amount of effort but is worth it. Assume that the attribute set is divided into Q disjoint clusters, i.e., V = {V_1 ∈ R^{D_1×G}, V_2 ∈ R^{D_2×G}, ..., V_Q ∈ R^{D_Q×G}}, where D_1 + D_2 + ... + D_Q = D. Accordingly, fExp has Q subnetworks to handle these attribute clusters one-to-one. Let φ_ef = {φ_ef^1, φ_ef^2, ..., φ_ef^Q} denote fExp; the output is then defined as: O_ef = φ_ef^1(V_1, F) ⊕ φ_ef^2(V_2, F) ⊕ ... ⊕ φ_ef^Q(V_Q, F), (10) where ⊕ denotes the concat operation. After that, we calculate the score of each category for training and inference. Specifically, we compute the similarity between the output of each expert and the attributes of each category, defined as: P_ec = O_ec A^T, P_ef = O_ef A^T, (11) where P_ec, P_ef ∈ R^K. To facilitate cooperative learning between the two expert networks, we introduce a distillation loss to constrain their semantic consistency. Concretely, the distillation loss contains a Jensen-Shannon Divergence (JSD) term and a Mean Squared Error, defined as: L_distill = (1/2)(L_KL(P_ec‖P_ef) + L_KL(P_ef‖P_ec)) + ‖P_ec − P_ef‖_2^2. (12) A sketch of the two experts' mutual distillation follows.
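The sketch below instantiates Eqs. (11)-(12) under stated assumptions: the expert outputs O_ec and O_ef are taken as given vectors, and the class scores are softmax-normalized before the KL terms, a detail the paper does not spell out.

import torch
import torch.nn.functional as F_nn   # aliased to avoid clashing with the feature F

def distill_loss(P_ec, P_ef):
    """Eq. (12): symmetric KL (a JSD-style term) plus a mean-squared error.
    P_ec, P_ef: (K,) class scores from cExp and fExp."""
    p = torch.log_softmax(P_ec, dim=-1)
    q = torch.log_softmax(P_ef, dim=-1)
    kl_pq = F_nn.kl_div(q, p.exp(), reduction="sum")   # KL(P_ec || P_ef)
    kl_qp = F_nn.kl_div(p, q.exp(), reduction="sum")   # KL(P_ef || P_ec)
    return 0.5 * (kl_pq + kl_qp) + ((P_ec - P_ef) ** 2).sum()

# Eq. (11): class scores = attribute scores matched against each class's attributes.
K, D = 200, 312
A = torch.randn(K, D)                          # shared class-attribute matrix
O_ec, O_ef = torch.randn(D), torch.randn(D)    # expert outputs, Eqs. (9)-(10)
P_ec, P_ef = O_ec @ A.t(), O_ef @ A.t()        # (K,) each
loss = distill_loss(P_ec, P_ef)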
3.5 Margin-Aware Loss Once the category scores are obtained, the network is optimized using the cross-entropy loss, which is formulated as: L_ce = −log [exp(P_ec^y) / Σ_{y_i}^K exp(P_ec^{y_i})], (13) where y is the ground truth; the loss on P_ef is defined in the same way. Note that we narrate with P_ec only below, and the principle is the same for P_ef. Due to the lack of access to samples from the unseen classes during the training phase, the scores of the unseen classes are relatively low and thus cannot compete with the seen classes in GZSL. To address this problem, the common practice [Huynh and Elhamifar, 2020][Chen et al., 2022b] is to add a margin to the scores: PM_ec = [P_ec^1 − ε, ..., P_ec^N − ε, P_ec^{N+1} + ε, ..., P_ec^K + ε], (14) where ε is a constant, P_ec^1 ∼ P_ec^N are the seen-class scores, and P_ec^{N+1} ∼ P_ec^K are the unseen-class scores. However, this method leads to misclassification of seen classes that would otherwise be correctly predicted. To maintain the correctness of the predicted class while enhancing the competitiveness of the unseen classes, we propose Margin-Aware Loss (MAL), which takes the form: L_mal = −log [exp(P_ec^y − 2ε) / (exp(P_ec^y − 2ε) + Σ_{y_i≠y}^S exp(P_ec^{y_i} + ε) + Σ^U exp(P_ec^{y_i}))], (15) where S and U denote the seen and unseen classes, respectively. In contrast to the cross-entropy loss, MAL reactivates the confidence of the predicted class to ensure that it stays ahead in the margin-processed scores, while suppressing the confidence of the other seen classes to ensure the competitiveness of the unseen classes. 3.6 Summarize In the training phase, the basic training loss of cExp stems from the classification and alignment losses, which is expressed as: L_ec = L_mal^ec + β L_align^ec, (16) where β is a hyperparameter. Similarly, we have the basic training loss of fExp: L_ef = L_mal^ef + β L_align^ef. (17) Then the final loss is obtained from the combination of the basic losses and the distillation loss, denoted as: L_DEDN = L_ec + L_ef + γ L_distill, (18) where γ is a hyperparameter. In the inference phase, the recommendations of the two experts are combined and used for the final judgment. The predicted result is expressed as: arg max λ_e × P_ec + (1 − λ_e) × P_ef, (19) where λ_e is a hyperparameter. A sketch of MAL and the combined objective follows.
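Here is a minimal sketch of the Margin-Aware Loss of Eq. (15), assuming (as in Eq. (14)) that seen classes are indexed first and using an illustrative ε; the combined objective of Eqs. (16)-(18) is indicated in the closing comment.

import torch

def mal_loss(P, y, seen_mask, eps=0.1):
    """Margin-Aware Loss, Eq. (15). P: (K,) class scores; y: ground-truth index;
    seen_mask: (K,) True for seen classes. eps is illustrative here."""
    logits = P.clone()
    logits[y] = P[y] - 2 * eps                 # reactivated true-class term
    other_seen = seen_mask.clone()
    other_seen[y] = False
    logits[other_seen] = P[other_seen] + eps   # suppress competing seen classes
    # unseen-class scores enter the denominator unshifted
    return -(logits[y] - torch.logsumexp(logits, dim=-1))

K, N_seen = 200, 150
P = torch.randn(K)
seen_mask = torch.zeros(K, dtype=torch.bool)
seen_mask[:N_seen] = True
loss = mal_loss(P, y=3, seen_mask=seen_mask)
# Full objective, Eq. (18): L_DEDN = L_ec + L_ef + gamma * L_distill,
# with L_e* = L_mal + beta * L_align per expert, Eqs. (16)-(17).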
4 Experiments Datasets. We conduct extensive experiments on three benchmark datasets to verify the effectiveness of the method, including CUB (Caltech UCSD Birds 200) [Wah et al., 2011], SUN (SUN Attribute) [Patterson and Hays, 2012], and AWA2 (Animals with Attributes 2) [Xian et al., 2017]. We split all datasets following [Xian et al., 2017]. CUB comprises 200 bird species totaling 11,788 image samples, of which 50 categories are designated as unseen classes. We use class attributes for fair comparison, which contain 312 subattributes. SUN has a sample of 717 different scenes totaling 14,340 images, where 72 categories are unseen classes. Attribute annotations are 102-dimensional. AWA2 includes 50 classes of assorted animals totaling 37,322 samples, of which 10 categories are considered unseen classes. Its number of attributes is 85. Evaluation Protocols. We perform experiments in both the Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) settings. For ZSL, we employ top-1 accuracy to evaluate the performance of the model, denoted as T. For GZSL, we record the accuracy for both seen classes and unseen classes, denoted as S and U, respectively. We also record the harmonic mean H, which is computed as H = (2 × S × U)/(S + U). Implementation Details. For a fair comparison, we use the fixed ResNet-101 [He et al., 2016] without finetuning as the feature extractor. We set the batch size to 50 and the learning rate to 0.0001. The RMSProp optimizer with momentum set to 0.9 and weight decay set to 1e-4 is employed. For hyperparameters, [β, γ] are fixed to [0.001, 0.1]. We empirically set [λ_rc, λ_e] to [0.8, 0.9] for CUB, [0.95, 0.3] for SUN, and [0.8, 0.5] for AWA2. Subsequent experimental analyses show that the performance of our method has low sensitivity to hyperparameters. For attribute clusters, we classify the attribute sets according to their characteristics, and the results are shown in Table 1.
Table 1: Manual division of attribute clusters. Des. (description) indicates the criteria for classification; Num. (number) is the size of the attribute cluster; environ.: environment.
CUB:  head 112 | torso 87 | wing 24 | tail 40 | leg 15 | whole 34
SUN:  function 38 | instance 27 | environ. 17 | light 20
AWA2: texture 18 | organ 14 | environ. 13 | abstract 40
4.1 Compared with State-of-the-arts To evaluate the performance of the proposed method, we compare it with various state-of-the-art methods. Generative methods: f-CLSWGAN (CVPR '18) [Xian et al., 2018], f-VAEGAN-D2 (CVPR '19) [Xian et al., 2019], TF-VAEGAN (ECCV '20) [Narayan et al., 2020], E-PGN (CVPR '20) [Yu et al., 2020], CADA-VAE (CVPR '19) [Schonfeld et al., 2019], FREE (ICCV '21) [Chen et al., 2021a], SDGZSL (ICCV '21) [Chen et al., 2021c], CE-GZSL (CVPR '21) [Han et al., 2021], VS-Boost (IJCAI '23) [Li et al., 2023]. Embedding methods: LFGAA (ICCV '19) [Liu et al., 2019], APN (NeurIPS '20) [Xu et al., 2020], DCN (NeurIPS '18) [Liu et al., 2018], HSVA (NeurIPS '21) [Chen et al., 2021b]. Region-attribute modeling: SGMA (NeurIPS '19) [Zhu et al., 2019], AREN (CVPR '19) [Xie et al., 2019], DAZLE (CVPR '20) [Huynh and Elhamifar, 2020], MSDN (CVPR '22) [Chen et al., 2022b]. The experimental results are shown in Table 2.
Table 2: Comparison with state-of-the-art methods (%). Gen. denotes generative methods and Emb. denotes embedding methods; † denotes region-attribute modeling methods. The best and second-best results are highlighted in blue and underlined, respectively. Each dataset reports T/U/S/H; '-' marks values absent from the original table.
METHOD       ROUTE  CUB T/U/S/H          SUN T/U/S/H          AWA2 T/U/S/H
f-CLSWGAN    Gen.   57.3/43.7/57.7/49.7  60.8/42.6/36.6/39.4  68.2/57.9/61.4/59.6
f-VAEGAN-D2  Gen.   61.0/48.4/60.1/53.6  64.7/45.1/38.0/41.3  71.1/57.6/70.6/63.5
TF-VAEGAN    Gen.   64.9/52.8/64.7/58.1  66.0/45.6/40.7/43.0  72.2/59.8/75.1/66.6
E-PGN        Gen.   72.4/52.0/61.1/56.2  -                    73.4/52.6/83.5/64.6
CADA-VAE     Gen.   59.8/51.6/53.5/52.4  61.7/47.2/35.7/40.6  63.0/55.8/75.0/63.9
FREE         Gen.   -/55.7/59.9/57.7     -/47.4/37.2/41.7     -/60.4/75.4/67.1
SDGZSL       Gen.   75.5/59.9/66.4/63.0  62.4/48.2/36.1/41.3  72.1/64.6/73.6/68.8
CE-GZSL      Gen.   77.5/63.9/66.8/65.3  63.3/48.8/38.6/43.1  70.4/63.1/78.6/70.0
VS-Boost     Gen.   79.8/68.0/68.7/68.4  62.4/49.2/37.4/42.5  -/67.9/81.6/74.1
SGMA         Emb.†  71.0/36.7/71.3/48.5  -                    68.8/37.6/87.1/52.5
AREN         Emb.†  71.8/38.9/78.7/52.1  60.6/19.0/38.8/25.5  67.9/15.6/92.9/26.7
LFGAA        Emb.   67.6/36.2/80.9/50.0  61.5/18.5/40.0/25.3  68.1/27.0/93.4/41.9
DAZLE        Emb.†  66.0/56.7/59.6/58.1  59.4/52.3/24.3/33.2  67.9/60.3/75.7/67.1
APN          Emb.   72.0/65.3/69.3/67.2  61.6/41.9/34.0/37.6  68.4/57.1/72.4/63.9
DCN          Emb.   56.2/28.4/60.7/38.7  61.8/25.5/37.0/30.2  65.2/25.5/84.2/39.1
HSVA         Emb.   62.8/52.7/58.3/55.3  63.8/48.6/39.0/43.3  -/59.3/76.6/66.8
MSDN         Emb.†  76.1/68.7/67.5/68.1  65.8/52.2/34.2/41.3  70.1/62.0/74.5/67.7
DEDN (Ours)  Emb.   77.4/70.9/70.0/70.4  67.4/54.7/36.0/43.5  75.8/68.0/76.5/72.0
Our method achieves the best performance on seven metrics and second place on one metric. For Generalized Zero-Shot Learning (GZSL), we beat VS-Boost by 2% in the H-score on CUB, a fine-grained bird dataset whose attribute annotations possess explicit correspondences to visual features. This demonstrates the superiority of the proposed method for fine-grained modeling. On the SUN and AWA2 datasets, we obtain the best and second-best results in H-score, respectively. These two datasets have fewer attributes and contain complex semantic dimensions, including abstract, concrete, etc. The experimental results demonstrate the effectiveness of the proposed method in deconstructing complex tasks to alleviate the modeling pressure on a single network. In addition, the U-scores of our method on all three datasets are well ahead of the others, demonstrating that the proposed method effectively captures the relationship between attributes and visuals to generalize to unseen classes. For Zero-Shot Learning (ZSL), we achieve the highest top-1 accuracy on the SUN and AWA2 datasets, as well as competitive performance on CUB. Specifically, our method outperforms TF-VAEGAN by 1.4% on the SUN dataset. On AWA2, we have a 2.4% lead relative to the second-place E-PGN. The experimental results validate the superiority of the proposed method. Notably, our method achieves far better results than existing region-attribute modeling methods in both ZSL and GZSL settings, which implies that the potential of the inherent asymmetry of attributes and of channel information was not fully exploited before. 4.2 Ablation Study To evaluate the role of each module, we perform a series of ablation experiments. The results are shown in Table 3.
Table 3: Ablation study (%). w/o denotes removing the module; CA* denotes channel attention. The best result is highlighted in bold. Each dataset reports T/U/S/H.
SETTING             CUB T/U/S/H          SUN T/U/S/H          AWA2 T/U/S/H
cExp w/o Ldistill   74.6/62.4/71.4/66.6  64.0/41.6/35.7/38.4  71.1/62.8/78.8/69.9
fExp w/o Ldistill   75.5/68.1/67.9/68.0  64.0/42.8/35.5/38.7  71.1/62.9/79.1/70.1
DEDN w/o Ldistill   75.7/66.7/70.7/68.6  65.2/47.3/35.0/40.3  72.1/63.8/79.3/70.7
DAN w/o CA*         77.0/58.7/73.6/65.3  65.8/48.5/34.6/40.4  74.6/61.7/79.8/69.6
DEDN w/o Lmal       75.8/73.2/62.5/67.4  66.0/56.5/34.3/42.7  73.1/66.5/72.4/69.3
DAN w/o Lalign      77.6/63.3/72.8/67.7  65.5/47.5/35.3/40.5  74.6/64.8/76.8/70.3
DEDN (full)         77.4/70.9/70.0/70.4  67.4/54.7/36.0/43.5  75.8/68.0/76.5/72.0
Comprehensively, removing any of the modules leads to varying degrees of performance degradation, verifying the rationality and necessity of each module's design. Concretely, it is observed that the performance of cExp is slightly lower than that of fExp without the distillation-loss constraint, which indicates the potential research value of the inherent asymmetry of the attributes. Meanwhile, without distillation, the performance of DEDN is higher than both cExp and fExp, demonstrating the complementary properties of the dual experts. In addition, it is worth noting that removing the channel attention from DAN results in a substantial performance degradation, demonstrating the importance of channel information. Moreover, the role of Lmal in balancing the confidence of unseen and seen classes can be observed from the metrics U and S: when Lmal is removed, the metric U increases dramatically while S decreases dramatically. Finally, the results also demonstrate the importance of Lalign for constraining semantic consistency.
4.3 Empirical Analysis The influence of parameters λ_e and λ_rc. We launch a series of empirical analyses, including evaluating the impact of the parameters λ_e and λ_rc on the final performance. Figure 3: Visualization of the attention heat maps. The first row represents the heat maps of cExp, and the second row denotes the heat maps of fExp. Figure 4: (a) Sensitivity to λ_e. (b) Sensitivity to λ_rc. The harmonic mean (H) is reported. (c) Comparison with K-Means. (d) Impact of the number of attribute clusters. The harmonic mean (H) and top-1 accuracy (T) are reported. Figure 4 (a) illustrates the sensitivity of the harmonic mean on each dataset with respect to the parameter λ_e. It can be observed that the influence of λ_e is extremely small. Of particular note, when λ_e is set to 1 or 0, only the cExp or fExp after distillation learning is used in the inference phase. This implies that, by mutual distillation learning, each of the two experts learns the strengths of the other, thereby reaching an agreement. Figure 4 (b) illustrates the impact of λ_rc. It can be seen that setting λ_rc above 0.7 stabilizes the performance. Optimization is achieved when it is set between 0.7 and 0.9. The influence of different clustering algorithms. We further evaluate the impact of the clustering algorithm on performance. When introducing Table 1, we explained that attribute clusters are obtained by humans classifying the attribute sets based on their characteristics. In this subsection, we use the K-Means algorithm for attribute clustering as a comparison to evaluate the performance. The experimental results are shown in Figure 4 (c), where the harmonic mean (H) and top-1 accuracy (T) are reported. From the figure, it can be seen that the K-Means algorithm is slightly poorer compared to human classification, but a good result is also achieved. It again shows that the idea of dividing the attribute set into different clusters holds great promise. The influence of the number of attribute clusters. We evaluate the impact of the number of attribute clusters on performance. The attributes of CUB, SUN, and AWA2 are classified into 6, 4, and 4 categories, respectively (Table 1). In this subsection, we halve the categories, i.e., the numbers of attribute clusters for CUB, SUN, and AWA2 become 3, 2, and 2. The experimental results are shown in Figure 4 (d), where half denotes that the cluster number is halved. We can see that halving leads to a reduction of H by 0.6%, 1.0%, and 6.8%, respectively, and a reduction of T by 0.7%, 0.2%, and 11%, respectively. The results show that detailed attribute classification facilitates the model in capturing more fine-grained information and thus improves the performance. Visual analysis of attention. We perform a visual analysis of the attention of the two experts, and the schematic is shown in Figure 3. It can be observed that cExp has better localization for some global attributes, such as HeadPatternMalar, BellyColorGrey, and ShapePerchingLike. Meanwhile, fExp has more detailed and precise localization for some local attributes, such as UpperTailColorGrey, ThroatColorGrey, and LegColorWhite. The two experts collaborate and learn in a complementary way to improve together, which leads to better performance.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2402.12821v1",
"title": "Identifying Factual Inconsistency in Summaries: Towards Effective Utilization of Large Language Model",
"abstract": "Factual inconsistency poses a significant hurdle for the commercial\ndeployment of abstractive summarizers. Under this Large Language Model (LLM)\nera, this work focuses around two important questions: what is the best way to\nleverage LLM for factual inconsistency detection, and how could we distill a\nsmaller LLM with both high efficiency and efficacy? Three zero-shot paradigms\nare firstly proposed and evaluated across five diverse datasets: direct\ninference on the entire summary or each summary window; entity verification\nthrough question generation and answering. Experiments suggest that LLM itself\nis capable to resolve this task train-free under the proper paradigm design,\nsurpassing strong trained baselines by 2.8% on average. To further promote\npractical utility, we then propose training strategies aimed at distilling\nsmaller open-source LLM that learns to score the entire summary at once with\nhigh accuracy, which outperforms the zero-shot approaches by much larger LLM,\nserving as an effective and efficient ready-to-use scorer.",
"authors": "Liyan Xu, Zhenlin Su, Mo Yu, Jin Xu, Jinho D. Choi, Jie Zhou, Fei Liu",
"published": "2024-02-20",
"updated": "2024-02-20",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Factual inconsistency poses a significant hurdle for the commercial\ndeployment of abstractive summarizers. Under this Large Language Model (LLM)\nera, this work focuses around two important questions: what is the best way to\nleverage LLM for factual inconsistency detection, and how could we distill a\nsmaller LLM with both high efficiency and efficacy? Three zero-shot paradigms\nare firstly proposed and evaluated across five diverse datasets: direct\ninference on the entire summary or each summary window; entity verification\nthrough question generation and answering. Experiments suggest that LLM itself\nis capable to resolve this task train-free under the proper paradigm design,\nsurpassing strong trained baselines by 2.8% on average. To further promote\npractical utility, we then propose training strategies aimed at distilling\nsmaller open-source LLM that learns to score the entire summary at once with\nhigh accuracy, which outperforms the zero-shot approaches by much larger LLM,\nserving as an effective and efficient ready-to-use scorer.",
"main_content": "Introduction Pretrained generative models such as BART (Lewis et al., 2020) have advanced the fundamental development of abstractive summarization. However, it is well aware that summaries from those systems are still prone to factual inconsistency, where certain facts presented in the summary are not mentioned in or not consistent with the original document (Maynez et al., 2020; Kryscinski et al., 2020). Previous works to detect factual inconsistency mostly encompass BERT-variants (Devlin et al., 2019) to perform reasoning. Particularly, two main directions arise with state-of-the-art performance: approaches represented by SummaC (Laban et al., 2022a) that adopt trained Natural Language Inference (NLI) models to score the en*Equal contributions. tailment between the document and summary; approaches represented by QuestEval (Scialom et al., 2021) and QAFactEval (Fabbri et al., 2022), which first select entities in the summary to be verified, then utilize Question Generation (QG) models on the summary to generate questions for each entity, and finally employ Question Answering (QA) models on the document to verify whether their answers match the corresponding entities in the summary. As evidenced by previous works, this task is positioned heavily towards document understanding and reasoning, demanding models with strong capabilities. It is naturally occurring under this Large Language Model (LLM) era: (1) how could we leverage LLM\u2019s powerful reasoning abilities for this task, and how good it can be? (2) can we have a smaller and practical LLM model for this task with both efficiency and efficacy in mind? For the first question, recent studies have utilized LLM to evaluate summaries directly (Shen et al., 2023; Chen et al., 2023; Liu et al., 2023). To this end, we perform comprehensive evaluation of LLM\u2019s capability, by proposing and comparing three zero-shot paradigms that adapt the ideas of NLI and QA-based approaches (Sec. 3). Specifically, two of the paradigms, Summ-NLI and SentNLI, resemble NLI methods that directly reason on a (document, summary) pair, where Summ-NLI determines their factual consistency at once, and Sent-NLI is applied on each summary window then aggregates the final judgement. The third paradigm QG-QA adopts the explicit entity verification, performing zero-shot QG and QA, based on our unique design of entity types, question forms, verification criteria, and decomposed reasoning steps. The three zero-shot paradigms are evaluated on five diverse datasets using LLM of different models and sizes (Sec. 3.4). Empirical results suggest that, LLM itself is capable enough to identify factual errors directly, and highlight the importance of the appropriate paradigm design. Particularly, SentarXiv:2402.12821v1 [cs.CL] 20 Feb 2024 \fNLI and QG-QA by ChatGPT leads up to 2.8% upon previous trained baselines with sophisticated components, and surpass the rudimentary zero-shot method Summ-NLI by a large margin. In terms of LLM itself, the best open-source LLM could come close to ChatGPT, with only 1.7% degradation by using Vicuna-13B. Impressively, adopting GPT-4 boosts more than 10% improvement directly on each paradigm, which substantiates the intuition that more powerful LLM along with proper prompt designs may become the simple and effective task solution in near future (Sec. 3.5). Our aforementioned second question is motivated by two remaining problems. 
Firstly, though our zero-shot approaches achieve strong results, the best performance is obtained by Sent-NLI and QG-QA, which are less efficient compared to Summ-NLI, which scores at once. Secondly, the best zero-shot result requires either paid OpenAI services or large open-source LLM that is not convenient in practice. To resolve the second question, Sec. 4 seeks to enable smaller open-source LLM that scores in the same efficient way as Summ-NLI while maintaining relatively high accuracy. To this end, we propose strategies to train Llama-2 7B models (Touvron et al., 2023) that learn from gold labels, as well as distilling from the available reasoning of a more capable model. Based on our strategies, the trained model successfully outperforms both Summ-NLI and Sent-NLI by ChatGPT, while being much more efficient to use. Our strategy to distill from reasoning is also shown to bring a robust 2% improvement for both in-domain and out-of-domain evaluation (Sec. 4.2). Overall, our contributions in this work can be summarized as follows: • Three paradigms leveraging LLM to identify factual inconsistency in summaries are proposed, evaluating the best zero-shot design to properly induce and utilize LLM's capability. • Experiments on five datasets with multiple models and sizes corroborate that LLM itself is capable of tackling this task, while also highlighting the importance of the paradigm design. • We further present smaller open-source models of both high efficiency and efficacy through our proposed training strategies, serving as an independent and practical substitution for large LLM. 2 Related Work Datasets Multiple datasets to evaluate factual inconsistency detection in summaries have been independently introduced in recent years, e.g. Goyal and Durrett (2021), SummEval (Fabbri et al., 2021), FRANK (Pagnoni et al., 2021), CLIFF (Cao and Wang, 2021), DiaSumFact (Zhu et al., 2023), LongEval (Krishna et al., 2023), etc. Each dataset may focus on its own types of factual errors, so they are usually not completely comparable. Recent work has also attempted to unify those factual error types by defining a fine-grained schema (Tang et al., 2023). In this work, we choose five datasets that focus on a similar set of explicit error types (Table 1), including e.g. entity errors, coreference errors, and predicate errors. Our chosen datasets encompass numerous domains, including news, dialogues, official documents, and stories. Their summary lengths also vary significantly. Non-LLM Approaches Prior to LLM, previous works with state-of-the-art performance can be mainly categorized into two directions. NLI-based approaches, such as Falke et al. (2019) and SummaC (Laban et al., 2022b), utilize existing NLI models to score the level of entailment between the document and summary to determine their factual consistency. QA-based approaches, such as QuestEval (Scialom et al., 2021) and QAFactEval (Fabbri et al., 2022), take explicit entities appearing in summaries and verify their context on the document through separate QG and QA steps. Apart from these two main directions, other works have also explored recognizing factual errors through information extraction (Nan et al., 2021), syntactic dependencies (Goyal and Durrett, 2021), and multi-task learning (Zha et al., 2023a,b). LLM Approaches The potentials of LLMs have been investigated by recent studies for factual inconsistency detection.
The most common utilization is to evaluate the summary directly by LLM in a zero-shot or few-shot manner, as adopted by most previous works (Luo et al., 2023; Shen et al., 2023; Chen et al., 2023; Liu et al., 2023). Other directions have been proposed as well, such as synthetic data generation by LLM (Gekhman et al., 2023). In this work, we focus on the zero-shot utilization. 3 Approach: LLM Zero-Shot Figure 1 illustrates the three zero-shot paradigms we propose to identify factual errors in summaries, including wrong entities or predicates, coreference or logical errors, etc. (Figure 1: Illustration of our three zero-shot paradigms: Summ-NLI, Sent-NLI, QG-QA (Sec. 3).) We adopt zero-shot prompting, as we found few-shot examples could introduce bias and do not contribute consistently. Instead, we spend effort on refining prompts for each paradigm, aiming to outline full criteria, clear instructions, and a thought process to guide the LLM reasoning. Zero-shot Chain-of-Thought (CoT) (Kojima et al., 2022) is applied on all prompts. Our full prompts are provided in Appx. A. 3.1 Summ-NLI The most straightforward way to integrate LLM is to directly ask it to score or classify the given document and summary pair according to their factual consistency. This resembles previous works employing NLI models to determine their level of entailment. As LLM takes the entire (document, summary) pair and scores at once, regardless of the summary length, we dub this paradigm Summ-NLI (Summary-level NLI). Several previous works have utilized LLM similarly to Summ-NLI by directly scoring the entire summary (Luo et al., 2023; Shen et al., 2023; Liu et al., 2023; Wu et al., 2023). Our Summ-NLI features a unique and comprehensive prompt with detailed scoring criteria and reasoning steps, and we effectively regard Summ-NLI as a general baseline of zero-shot LLM prompting. We instruct Summ-NLI to either yield a binary label in the end (whether the summary has any factual errors) or to produce a consistency score within a range, according to the specific dataset. Let S_i be the ith summary and D_i its corresponding document. The output y_i can be denoted as: y_i^{Summ-NLI} = LLM(D_i, S_i). (1) A minimal sketch of this paradigm follows.
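Below is a minimal Python sketch of the Summ-NLI paradigm of Eq. (1). The llm function is a placeholder for any chat-model call, and the prompt wording is a paraphrase of the paradigm description, not the authors' released prompt (theirs is in their Appx. A).

def llm(prompt: str) -> str:
    """Placeholder for any chat LLM call (ChatGPT, Vicuna, ...); an assumption,
    not the paper's exact API wrapper."""
    raise NotImplementedError

def summ_nli(document: str, summary: str) -> int:
    """Eq. (1): score the whole (document, summary) pair at once.
    Returns 1 for consistent, 0 for inconsistent."""
    prompt = (
        "You will check a summary against its source document.\n"
        "Reason step by step about each fact in the summary, then answer\n"
        "'consistent' or 'inconsistent' on the last line.\n\n"
        f"Document:\n{document}\n\nSummary:\n{summary}"
    )
    verdict = llm(prompt).strip().splitlines()[-1].lower()
    return 0 if "inconsistent" in verdict else 1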
3.2 Sent-NLI Apart from scoring the entire summary at once, one could also score each local summary context and then aggregate (Laban et al., 2022a; Chen et al., 2023). The intuition is simple: when the summary gets longer, LLM might overlook certain errors scattered across many sentences, which is indeed a mistake humans can make. Thus, we adjust Summ-NLI to operate each time on a window that consists of a few summary sentences. For brevity, we dub it Sent-NLI (Sentence-level NLI). To aggregate the windows, we consider the entire summary to have factual errors if any of its windows has errors. Let W_ij be the jth window of summary S_i, which has ξ(S_i) windows in total; the output y_i can be denoted as: y_i^{Sent-NLI} = any_{j=1}^{ξ(S_i)} (LLM(D_i, W_ij)). (2) 3.3 QG-QA Distinct from NLI-based approaches, previous QA-based approaches such as QAFactEval (Fabbri et al., 2022) take a more fine-grained way that employs QG and QA models to verify explicit entities in summaries. However, they require careful tuning of multiple pipeline components, including answer selection and answer checking, in addition to the question generation and answering modules. We design our QG-QA paradigm to be accomplishable by LLM as follows, integrating answer selection and checking into the following two phases. Question Generation For this phase, LLM is only conditioned on a summary window without accessing the document; the primary goal is to create (question, entity) pairs from the summary window, to be verified on the document later. Entities include named entities, e.g. persons, locations, products, as well as general noun phrases. We confine the corresponding question to be a wh-question in a subject-verb-object structure, such that the answer to the question is the entity itself. To induce high-quality pairs, we facilitate LLM reasoning by a three-step decomposition: 1. As there are commonly pronouns in the summary window, coreference resolution is firstly performed by LLM on the entire summary context, so that (question, entity) pairs will always present explicit entities without ambiguity. By contrast, most previous approaches neglect this step; e.g., QAFactEval produces pronouns as entities, which could be troublesome during verification due to their ambiguity. 2. Important entities within this summary window are then explicitly listed, where each entity is unique, with support for compound entities such as “Mike and Amy”. 3. Generate (question, entity) pairs based on the listed entities, according to certain rules, such as one question per entity, no pronouns as entities, no open-ended questions, etc. The above steps are written as instructions in the prompt that LLM is expected to follow. After we gather all pairs from all summary windows, we move to the next phase. Question Answering With all pairs obtained, this phase prompts LLM a second time to verify each pair according to the document. Note that this should be a new LLM session rather than a continuation from QG, so that it is free from any interference by the original summary. Providing the document and (question, entity) pairs, we instruct LLM to do the following for each pair: reason about the question based on the document, and then check whether its reasoned answer is consistent with the provided entity. In contrast to previous works that use lexical overlap or cosine similarity for answer checking, we can now simply shift this judgment to LLM. Overall, QG-QA guides LLM to recognize factual errors through explicit entities. In this process, predicate errors and logical errors will be detected as well: a question with the wrong predicate or logic should not be answerable from the document, and is thus not consistent with the provided entity. Let summary S_i have ζ(S_i) pairs, and let (q_k, e_k) be its kth pair; QG-QA can then be represented as: {(q_k, e_k)} = LLM-QG(S_i), (3) y_i^{QG-QA} = any_{k=1}^{ζ(S_i)} (LLM-QA(D_i, q_k, e_k)). (4) A sketch of the two phases follows.
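Here is a minimal sketch of the two QG-QA phases (Eqs. (3)-(4)), again with a placeholder llm call; the 'question | entity' line format used for parsing is our own convention, not the paper's.

from typing import List, Tuple

def llm(prompt: str) -> str:
    """Placeholder chat-LLM call; an assumption, as in the sketch above."""
    raise NotImplementedError

def qg(summary_window: str) -> List[Tuple[str, str]]:
    """Phase 1, Eq. (3): derive (question, entity) pairs from a summary window."""
    prompt = (
        "From the summary below: resolve pronouns, list the important unique\n"
        "entities, and write one wh-question per entity whose answer is that\n"
        "entity. Output one 'question | entity' pair per line.\n\n"
        f"Summary:\n{summary_window}"
    )
    pairs = []
    for line in llm(prompt).splitlines():
        if "|" in line:
            q, e = line.split("|", 1)
            pairs.append((q.strip(), e.strip()))
    return pairs

def qa_verify(document: str, question: str, entity: str) -> bool:
    """Phase 2, Eq. (4): a fresh LLM session answers from the document and
    judges consistency with the proposed entity."""
    prompt = (
        f"Document:\n{document}\n\nAnswer from the document: {question}\n"
        f"Is your answer consistent with '{entity}'? Reply yes or no."
    )
    return llm(prompt).strip().lower().startswith("yes")

def qg_qa(document: str, windows: List[str]) -> int:
    """Aggregate: the summary is inconsistent if any pair fails verification."""
    pairs = [p for w in windows for p in qg(w)]
    return 0 if any(not qa_verify(document, q, e) for q, e in pairs) else 1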
3.4 Zero-Shot Experiments Our experiments are conducted on five datasets of diverse domains and summary lengths. Table 1 provides an overall summary of all the datasets.
Table 1: Statistics and evaluation metrics for our five experimented datasets, including the number of summaries, the averaged number of tokens per summary, and their label and error types. The ratio of positive labels (no factual errors) is shown in parentheses. For FRANK, we use summaries generated by BART for CNN/DM.
Dataset     # Summaries  # Tokens  Domain            Label           Metric               Error Types
FRANK       175          59.0      News              Binary (85.7%)  Balanced Accuracy    Relation, Entity, Coreference, Circumstance, Out-of-Article
DiaSumFact  475          43.7      Dialogues         Binary (43.0%)  Balanced Accuracy    Entity, Coreference, Circumstance, Predicate
CONFIT      600          17.8      Dialogues         Binary (62.3%)  Balanced Accuracy    Circumstantial, Negation, Object, Wrong Reference
GovReport   204          397.2     Official Reports  Score           Pearson Correlation  Entity, Coreference, Circumstance, Out-of-Article
SQuALITY    40           347.3     Stories           Score           Pearson Correlation  Correctness
Three of the datasets have regular-length documents with binary labels for factual errors: • FRANK (Pagnoni et al., 2021) consists of news articles and their summaries. Here we evaluate a subset of summaries generated by BART for CNN/DM (Hermann et al., 2015). • DiaSumFact (Zhu et al., 2023) consists of dialogue documents and their system-generated summaries, including daily conversations from SAMSum (Gliwa et al., 2019) and meeting transcripts from QMSum (Zhong et al., 2021). • CONFIT (Tang et al., 2022) contains factual annotations for SAMSum summaries as well. We use summaries generated by all six models. Two of the datasets have long documents with factual consistency scores as labels: • GovReport (Koh et al., 2022) consists of official reports and summaries from Huang et al. (2021), and each report has on average 3.8k words. • SQuALITY (Krishna et al., 2023) consists of stories and long articles with summaries from Wang et al. (2022), with ∼5k words per document.
Evaluation Protocol The factual error types of our experimented datasets are largely shared (Tang et al., 2023), and Table 1 lists the corresponding error types our experiments consider. We exclude grammar errors, as they still largely convey consistent factual information. For the first three datasets, each summary has a binary label indicating whether it has any factual errors, and we report Balanced Accuracy for evaluation. For the latter two, each label is a score within a range, and we report Pearson Correlation following previous works. Baselines We adopt four state-of-the-art models as non-LLM baselines: 1) QuestEval: QA-based approach by Scialom et al. (2021); 2) QAFactEval: QA-based approach by Fabbri et al. (2022); 3) ALIGNSCORE: multi-task trained model by Zha et al. (2023a); 4) ALIGN: another multi-task model by Zha et al. (2023b). Additionally, we regard Summ-NLI as the LLM baseline that aligns with recent works (Luo et al., 2023; Shen et al., 2023; Liu et al., 2023; Wu et al., 2023). LLM We use gpt-3.5-turbo-0613 (ChatGPT) for our main experiments. For analysis, we additionally run GPT-4 (gpt-4-1106-preview), Llama-2 models (Touvron et al., 2023; https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), and Vicuna models (Zheng et al., 2023; https://huggingface.co/lmsys/vicuna-7b-v1.5-16k) on DiaSumFact. For GovReport and SQuALITY, we follow Wu et al. (2023): for each question, top sentences that maximize ROUGE scores towards each summary are retrieved as the context for factual evaluation, up to 1k tokens per document (details in Appx. B). Results Table 2 shows the evaluation results on all five datasets, and we use their macro-average scores as the main evaluation metric.
Table 2: Evaluation results on five datasets, along with their macro-average scores as the overall evaluation metric. Sent-NLI and QG-QA outperform the four non-LLM baselines as well as the LLM baseline Summ-NLI. Balanced Accuracy is reported for FRANK, DiaSumFact, and CONFIT; Pearson Correlation for GovReport and SQuALITY.
METHOD      FRANK  DiaSumFact  CONFIT  GovReport  SQuALITY  Macro-Average
QuestEval   62.67  56.03       59.50   26.90      42.21     49.46
QAFactEval  53.00  67.29       56.34   40.59      44.79     52.40
ALIGNSCORE  51.48  69.95       64.86   37.07      43.77     53.43
ALIGN       59.75  70.77       60.06   35.05      46.26     54.38
Summ-NLI    51.33  58.89       65.59   19.95      35.46     46.24
Sent-NLI    61.42  65.83       62.12   46.46      49.76     57.12
QG-QA       63.35  64.57       67.51   50.17      40.03     57.13
Sent-NLI and QG-QA are shown to obtain similar results, and both outperform the four non-LLM baselines by up to 7.6%, which confirms that LLM itself is capable enough to identify factual errors directly, ascribed to its superior understanding and reasoning ability. Nevertheless, Summ-NLI using the same LLM underperforms the other two paradigms, indicating the importance of proper zero-shot paradigm design, which can play a significant role in the task performance. Comparing the three paradigms, the gap between Summ-NLI and the other two gets larger for GovReport and SQuALITY, which have longer summaries (Table 1). This observation may not be surprising, as Summ-NLI only scores once regardless of the length of the summary, being a more efficient option but potentially more easily prone to errors scattered across sentences. As both Sent-NLI and QG-QA achieve strong results, QG-QA obtains comparable or better performance than Sent-NLI on all datasets except for SQuALITY.
Thus, the paradigm of explicitly verifying entities not only led to state-of-the-art performance among non-LLM approaches, but also proves still valid in the LLM era. However, QG-QA runs the LLM twice per summary window, bringing more overhead than Sent-NLI, which could make Sent-NLI more appealing in practice. Despite this trade-off between Sent-NLI and QG-QA, both paradigms are window-based approaches that are less efficient than Summ-NLI. In Sec. 4, we further seek to train open-source LLMs with our proposed training strategies, combining the efficiency of Summ-NLI with the efficacy of Sent-NLI.

3.5 Zero-Shot Analysis
We focus on DiaSumFact and perform further analysis over multiple dimensions as follows.

Varying LLMs and Sizes. As OpenAI models are known to be among the best, Table 3 compares zero-shot results using different LLMs, including GPT-4 and the open-source Llama-2 and Vicuna models at multiple sizes. Neither Llama-2 nor Vicuna outperforms ChatGPT, though their largest models come close under Sent-NLI. The results suggest that Sent-NLI with Vicuna 13B can serve as a good zero-shot alternative to ChatGPT for this task. Notably, simply switching to GPT-4 yields a direct boost over ChatGPT of more than 10% for each paradigm. Comparing Llama-2 and Vicuna, Vicuna 13B outperforms Llama-2 70B under every paradigm, making it an outstanding open-source candidate for zero-shot evaluation. Nonetheless, increasing model size leads to higher evaluation scores for both model families; in particular, Sent-NLI and QG-QA benefit more than Summ-NLI, again suggesting that evaluation by windows can be more effective than scoring the entire summary.

Table 3: Evaluation results on DiaSumFact using LLMs of different models and sizes. Sent-NLI with Vicuna 13B achieves the best non-OpenAI performance.

Model        Summ-NLI  Sent-NLI  QG-QA
Llama-2 7B   53.49     51.16     53.16
Llama-2 13B  52.13     54.64     53.13
Llama-2 70B  53.74     63.21     58.02
Vicuna 7B    53.15     53.27     50.12
Vicuna 13B   56.62     64.11     58.64
ChatGPT      58.89     65.83     64.57
GPT-4        70.09     76.63     74.78

To inspect the zero-shot variation of different models, Table 4 shows the standard deviation of three repeated runs for Llama-2 13B, Vicuna 13B, and ChatGPT on DiaSumFact. This variation comes from imperfect instruction following and inconsistent answers on ambiguous cases. ChatGPT exhibits the smallest variation among the three models, suggesting that although open-source models can come close, ChatGPT remains preferable owing to its stable performance.

Table 4: Standard deviation of different models and paradigms on DiaSumFact over three repeated runs.

Model        Summ-NLI  Sent-NLI  QG-QA
Llama-2 13B  ±3.16     ±1.60     ±2.01
Vicuna 13B   ±1.80     ±3.10     ±3.37
ChatGPT      ±1.34     ±0.12     ±1.04
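The run-to-run variation reported in Table 4 is simply the standard deviation over repeated zero-shot runs; a minimal sketch with made-up scores chosen only to mirror a small spread:

import statistics

# Three repeated balanced-accuracy scores for one (model, paradigm) cell;
# the values below are illustrative, not taken from the experiments.
scores = [65.71, 65.83, 65.95]
print(statistics.mean(scores))   # 65.83
print(statistics.stdev(scores))  # 0.12, i.e., a +/-0.12 entry as in Table 4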
Paradigm Comparison. Figure 2 plots the performance curves for different document and summary lengths with ChatGPT. Overall, Sent-NLI and QG-QA follow similar trends: both are quite robust to varying document lengths. Understandably, Summ-NLI suffers degradation on longer documents or summaries, due to its length-agnostic scoring mechanism. Still, all paradigms encounter challenges in maintaining performance on long summaries (> 75).

[Figure 2: ChatGPT accuracy on the first three datasets for different lengths of documents and summaries.]

For a fine-grained analysis of error types, Table 5 reports the recall of different error types by Sent-NLI and QG-QA on DiaSumFact. The two paradigms recover entity, circumstantial, and coreference errors similarly, while QG-QA recognizes more predicate errors than Sent-NLI, albeit these constitute only a small portion of factual errors. Overall, QG-QA is slightly more balanced than Sent-NLI.

Table 5: Recall of error types by Sent-NLI and QG-QA on DiaSumFact: No Errors, Entity Errors, Circumstantial Errors, Predicate Errors, Coreference Errors.

          None   Ent.   Circ.  Pred.  Coref.
Sent-NLI  80.96  47.97  54.26  16.50  26.67
QG-QA     77.28  42.08  41.49  43.50  30.84

Strengths and Limitations. For qualitative analysis, more concrete examples illustrating the strengths of the LLM paradigms, as well as their current limitations, are provided in Appx. C.
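The per-type recall in Table 5 above can be computed as the fraction of summaries annotated with a given error type that the paradigm flags as inconsistent; a minimal sketch, where the example records are hypothetical:

from collections import defaultdict

def recall_by_type(examples):
    # examples: [{"error_types": ["Entity", ...], "flagged": bool}, ...]
    hit, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        for t in ex["error_types"]:
            total[t] += 1
            hit[t] += int(ex["flagged"])
    return {t: hit[t] / total[t] for t in total}

examples = [
    {"error_types": ["Entity"], "flagged": True},
    {"error_types": ["Entity", "Predicate"], "flagged": False},
    {"error_types": ["Coreference"], "flagged": True},
]
print(recall_by_type(examples))  # {'Entity': 0.5, 'Predicate': 0.0, 'Coreference': 1.0}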
4 Approach: Distilling Efficient Scorers
As suggested by Table 2, Sent-NLI and QG-QA obtain strong performance and surpass the other approaches by good margins. However, they come with the overhead of evaluating each summary window; ideally, one would prefer a model that scores only once per summary, just like Summ-NLI, while still achieving similar or better performance than Sent-NLI. Furthermore, the best performance requires either the closed-source OpenAI models or large LLMs that are not efficient for practical use. Motivated by the above, we distill into smaller open-source LLMs that learn to score in the same way as Summ-NLI, aiming to provide an efficient, effective, and independent substitute for Sent-NLI or QG-QA without being tied to OpenAI models. To achieve this, we focus on the classification scenario and use the three binary-labeled datasets from Sec. 3.4 as training resources. As all the open-source LLMs presented in Table 3 underperform ChatGPT under Summ-NLI, we further propose to leverage the reasoning of ChatGPT already available from previous experiments during training, which facilitates distilling useful knowledge from the more powerful ChatGPT in addition to simply learning the task labels themselves. Concretely, the training data comprises the following two types of prompts.

4.1 Distilling Strategies
Prompts with reasoning. If a (document, summary) pair has been processed by ChatGPT (or GPT-4) in existing experiments, and the classification given by ChatGPT is correct, then the prompt for this pair is the same as in Summ-NLI: it instructs the model to perform reasoning first and then give the final label. During training, the open-source LLM learns to generate the same reasoning as ChatGPT that leads to the correct label, and then to produce the final label at the end. The reasoning to be learned may come from two sources: if ChatGPT answers correctly under Summ-NLI, we extract its reasoning as part of the training output; otherwise, if ChatGPT answers correctly under Sent-NLI, we extract its reasoning for each summary window and concatenate them, yielding summary-level reasoning consistent with the final label.

Prompts without reasoning. The prompt for this type is still largely similar to Summ-NLI, except that it explicitly instructs the model to produce the classification label directly, without any reasoning; the output to be learned is then the gold label from the dataset. This type of prompt resembles the conventional supervised classification paradigm, where the model learns to classify directly. It can be applied to any (document, summary) pair, regardless of whether it has been processed by ChatGPT. By combining the two prompt types in training, the trained model is exposed to the reasoning process of a more capable model on this task. Moreover, for pairs processed correctly by ChatGPT, we provide both prompts (with and without reasoning) in training, which yields contrastive examples that help the model relate reasoning to label inference. Overall, the model distills task knowledge and learns to detect factual errors more robustly, as corroborated in Sec. 4.2. Inference also becomes flexible: the model can either perform reasoning first or produce the label directly.

4.2 Training Experiments
For training, Llama-2 7B is used as the backbone model. We conduct experiments with three different strategies on reasoning:
• T-wo-R + I-wo-R: the model is Trained without any Reasoning; consequently, Inference is also performed without reasoning.
• T-w-R + I-w-R: training is assisted by reasoning, and inference also performs reasoning.
• T-w-R + I-wo-R: the model receives reasoning in training, but directly yields classification labels during inference (faster inference than I-w-R).
For each strategy, we further conduct two sets of experiments, designed to evaluate the capability of the trained models as well as their transfer ability to unseen domains.

In-Domain Evaluation. We randomly split 80% of the documents in FRANK, DiaSumFact, and CONFIT as the training set and use the remaining 20% for evaluation. We adopt common hyperparameters for LLM fine-tuning, described in Appx. B, without requiring a development set due to the limited data. Additionally, we add the remaining documents and summaries from FRANK not previously evaluated by ChatGPT into training, in the form of prompts without reasoning. Specifically for DiaSumFact, as reasoning from GPT-4 is available from Sec. 3.4, we use its reasoning instead of ChatGPT's. Based on the approach design in Sec. 4, prompts with reasoning constitute 28% of the in-domain training examples.

Out-of-Domain Evaluation. In practical scenarios, the trained model may be used on domains far more diverse than those seen in training. To assess performance under domain shift, we perform out-of-domain (OOD) evaluation, where the model is trained on the entirety of DiaSumFact and CONFIT, which comprise dialogue documents, and then evaluated on FRANK, which consists of news documents. For this setting, prompts with reasoning constitute 40% of the total training examples.
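A minimal sketch of how the two training-prompt types can be assembled; the instruction wording is an assumption, and "reasoning" stands for the text extracted from ChatGPT's (or GPT-4's) correct runs as described above:

def prompt_with_reasoning(document, summary, reasoning, label):
    instruction = ("Decide whether the summary is factually consistent with the "
                   "document. Reason first, then give the final label.\n\n"
                   f"Document:\n{document}\n\nSummary:\n{summary}")
    return {"input": instruction, "output": f"{reasoning}\nLabel: {label}"}

def prompt_without_reasoning(document, summary, label):
    instruction = ("Decide whether the summary is factually consistent with the "
                   "document. Output only the final label.\n\n"
                   f"Document:\n{document}\n\nSummary:\n{summary}")
    return {"input": instruction, "output": f"Label: {label}"}

def build_training_set(pairs):
    data = []
    for p in pairs:  # p: {"document", "summary", "label", optional "reasoning"}
        if p.get("reasoning"):  # pair answered correctly by ChatGPT/GPT-4
            data.append(prompt_with_reasoning(p["document"], p["summary"],
                                              p["reasoning"], p["label"]))
        # every pair also contributes a direct-classification example
        data.append(prompt_without_reasoning(p["document"], p["summary"], p["label"]))
    return data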
Table 7 shows a brief summary of our experimental data. Notably, the average summary length nearly doubles from training to testing in the OOD evaluation. Though the training data may not seem plentiful, our objective is not to optimize performance by exhaustively gathering all available resources for training; rather, we aim to examine the training strategies and propose an effective and robust method for distilling smaller models.

Table 7: Statistics for In-Domain (ID) and Out-of-Domain (OOD) evaluation: number of training examples; average number of training summary tokens; ratio of prompts with reasoning; number of evaluation examples; average number of evaluation summary tokens.

      # Train  Length  R-Ratio  # Test  Length
ID    2918     42.0    28%      246     35.3
OOD   1801     29.2    40%      175     59.0

Results. Table 6 shows the evaluation results of the different settings.

Table 6: Results of our trained Llama-2 7B models for both in-domain and out-of-domain evaluation. Training can be assisted with or without Reasoning from ChatGPT (T-w-R or T-wo-R); Inference can also opt to perform Reasoning or not (I-w-R or I-wo-R). Details of the experimental settings are described in Sec. 4.2.

                                   In-Domain                             Out-of-Domain
                                   FRANK  DiaSumFact  CONFIT  Average   FRANK
Summ-NLI  ChatGPT (Zero-Shot)      46.43  57.55       63.33   55.77     51.33
          Llama (T-wo-R + I-wo-R)  58.93  76.64       64.23   66.60     52.67
          Llama (T-w-R + I-w-R)    57.14  63.80       58.37   59.77     50.67
          Llama (T-w-R + I-wo-R)   62.50  75.15       68.36   68.67     54.67
Sent-NLI  ChatGPT (Zero-Shot)      55.36  67.30       64.00   62.22     61.42

For in-domain evaluation, all three trained models outperform ChatGPT under Summ-NLI, with the best setting leading by up to 12.9%. Notably, two of the settings also outperform Sent-NLI by up to 6.5%, fulfilling the goal of building open-source models with both superior efficiency and superior efficacy compared to the zero-shot Sent-NLI and QG-QA. Comparing the three strategies, the best performance is achieved by T-w-R + I-wo-R for both ID and OOD evaluation. By receiving reasoning in training, it surpasses its counterpart (T-wo-R + I-wo-R) by about 2% consistently for both ID and OOD, which validates our hypothesis of assisting training through reasoning. By contrast, there is a noticeable degradation when reasoning is performed during inference (T-w-R + I-w-R), which can be attributed to the fact that the reasoning available in training is still relatively sparse; when the model performs reasoning of imperfect quality, it can impair label extrapolation, hurting more than it helps. Overall, Table 6 suggests T-w-R + I-wo-R to be the best strategy, being the most performant and also the fastest option during inference.

Quantitative Comparison. As observed in Sec. 3.4, Summ-NLI suffers more degradation as summaries get longer in the zero-shot setting. For the trained models, we likewise plot performance against summary length in Figure 3, comparing zero-shot approaches and trained models. Similar to Figure 2, all approaches still perform worse for summary length > 75. Notably, both trained models demonstrate sustained performance up to 60. More importantly, they outperform Summ-NLI and Sent-NLI across nearly all summary lengths, underscoring the capability of open-source models to accurately score entire summaries at once using our proposed training strategies."
},
{
"url": "http://arxiv.org/abs/2404.06311v1",
"title": "DRE: Generating Recommendation Explanations by Aligning Large Language Models at Data-level",
"abstract": "Recommendation systems play a crucial role in various domains, suggesting\nitems based on user behavior.However, the lack of transparency in presenting\nrecommendations can lead to user confusion. In this paper, we introduce\nData-level Recommendation Explanation (DRE), a non-intrusive explanation\nframework for black-box recommendation models.Different from existing methods,\nDRE does not require any intermediary representations of the recommendation\nmodel or latent alignment training, mitigating potential performance issues.We\npropose a data-level alignment method, leveraging large language models to\nreason relationships between user data and recommended items.Additionally, we\naddress the challenge of enriching the details of the explanation by\nintroducing target-aware user preference distillation, utilizing item reviews.\nExperimental results on benchmark datasets demonstrate the effectiveness of the\nDRE in providing accurate and user-centric explanations, enhancing user\nengagement with recommended item.",
"authors": "Shen Gao, Yifan Wang, Jiabao Fang, Lisi Chen, Peng Han, Shuo Shang",
"published": "2024-04-09",
"updated": "2024-04-09",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Recommendation systems play a crucial role in various domains, suggesting\nitems based on user behavior.However, the lack of transparency in presenting\nrecommendations can lead to user confusion. In this paper, we introduce\nData-level Recommendation Explanation (DRE), a non-intrusive explanation\nframework for black-box recommendation models.Different from existing methods,\nDRE does not require any intermediary representations of the recommendation\nmodel or latent alignment training, mitigating potential performance issues.We\npropose a data-level alignment method, leveraging large language models to\nreason relationships between user data and recommended items.Additionally, we\naddress the challenge of enriching the details of the explanation by\nintroducing target-aware user preference distillation, utilizing item reviews.\nExperimental results on benchmark datasets demonstrate the effectiveness of the\nDRE in providing accurate and user-centric explanations, enhancing user\nengagement with recommended item.",
"main_content": "INTRODUCTION Recommendation systems (RecSys) play a pivotal role in learning user preferences and interests by analyzing historical user behavior data [9, 15, 17, 18]. Subsequently, the RecSys recommends relevant items from extensive databases, which are widely used in \u2217Both authors contributed equally to this research. \u2020Corresponding author. , , 2024. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn diverse domains such as e-commerce, news portals, and short video applications [16, 19, 33, 40]. However, the direct presentation of recommended items may inadvertently confuse users, as they may not always comprehend the rationale behind a particular recommendation [10, 11, 21]. This lack of transparency impedes users\u2019 inclination to explore the recommended item further [2, 7, 41]. Consequently, interpreting the recommendation results of a black-box recommender model logically has always been an important research direction [3, 29, 32]. Most of the existing methods [27, 34, 39, 45] usually focus on how to employ an additional explanation module to align with the recommendation system, subsequently generating natural language explanations. Recommender System Alignment Training Retrieve Recommender System User Behavior Data \u2026 Latent-level Alignment Data-level Alignment Explain Explain Predict Predict User User Behavior Data \u2026 User Explain Model Explain Model Predicted Item\uff1a Predicted Item\uff1a Explanation : Users will like the keyboard. Because it has simple design *** consistent with your preferences. Users will like the computer. Because computer has brief appearance *** consistent with your preferences. Explanation : computer computer Item Reviews Predicted Item\uff1a keyboard Figure 1: Comparison between existing latent-level and our proposed data-level recommendation explanation method. However, there are two key challenges of these methods: (1) Existing methods [5, 6, 21, 38] often involve intrusion into the latent representations within the recommendation model, necessitating modifications to align the explanation and recommendation modules. Considering the different training objectives of these two modules, it could adversely affect the performance of both language generation and item recommendation. Moreover, although these methods aim to align two modules through training, they still cannot guarantee that the recommendation predictions of the two arXiv:2404.06311v1 [cs.IR] 9 Apr 2024 \f, , Shen Gao, Yifan Wang, Jiabao Fang, Lisi Chen, Peng Han, and Shuo Shang modules are consistent. In the real-world application, discrepancies between the explained and recommended items may lead to user confusion. (2) The recommendation system based on ItemID models the co-occurrence relationships among items [13, 24, 35, 42, 43], lacking an understanding of the specific semantic information about the items, such as the specific purposes of the products or the particular scenarios in which users use them. However, simply aligning the two modules cannot provide the explanation module with rich semantic information. In this paper, we propose the Data-level Recommendation Explanation (DRE) which can be applied to any black-box recommendation model without accessing intermediate representations or modifying the model. To avoid modifying the recommendation system, we propose a data-level alignment method to align the explanation module and the recommendation model. 
Figure 1 shows the comparison between our proposed paradigm and existing methods. Since the large language models (LLMs) have shown strong reasoning capability in many tasks [12, 14, 23, 28, 36, 37, 44], we propose to employ the LLM to reason the relationships between the user\u2019s historical data and recommended items. Specifically, we feed the input user historical behavior data used by the recommendation model and the recommended item to the LLM. And we leverage the internal knowledge of LLM to find a reasonable relationship between the user preference and the attributes of the recommended item. This data-level alignment method can align these two modules without requiring any internal representation or intermediate result of the recommendation model, and it can easily be plugged into any RecSys. For the second challenge, due to the limited detailed information of item descriptions, relying solely on item descriptions for inferring relationships between items can sometimes be challenging in uncovering implicit relationship information. Therefore, we propose utilizing the reviews of the items purchased by users and the reviews of the target recommended items to enhance the explanation module\u2019s understanding of user preferences and the semantics of target items. Since there is a lengthy of reviews for items that users have purchased, extracting relevant information from these reviews and generating explanations that better align with user preferences is a challenge. Thus, we introduce the target-aware user preference distillation method, which leverages the understanding and reasoning capabilities of LLM, employing semantic matching to extract target-aware information from reviews on items previously purchased by users. Finally, by incorporating the extracted targetaware information, we generate explanations for the recommended target items. Experiments conducted on several benchmark datasets from recommendation systems demonstrate that our proposed DRE generates explanations accurately describing aspects that users care about, thereby enhancing user interest in recommended items. Our contributions are as follows: (i) We propose DRE, an LLM-based non-intrusive explanation framework for recommendation systems. (ii) We propose a data-level alignment method to align the explanation module and the recommendation model. (iii) We introduce a target-aware user preference distillation method to distill user-related information from item reviews. (iv) Experimental results on benchmark datasets illustrate the advantage of DRE in terms of the accuracy of explanation. 2 RELATED WORK Explaining the black box of recommender systems has long been a prominent research direction in the field of recommender systems. Current research can be mainly divided into two categories. The first category focuses on identifying the most critical factors influencing recommendation results[8, 26]. Tan et al. [31] formulate an optimization problem to generate minimal changes to item aspects, thereby altering the recommended result. These aspects can be viewed as the composition of an explanation detailing why the original item is recommended. Lakkaraju et al. [20], Shrikumar et al. [30], Zilke et al. [45] define information-based measures to identify the attributes that the model utilizes from the input to generate explanations. The second category mainly focuses on training a surrogate model to explain the target model. For example, Wang et al. 
[34] propose a reinforcement learning framework that receives rewards from the environment and revises the recommendation explanation accordingly. Catherine et al. [4] and Ma et al. [22] propose frameworks for generating explanations based on knowledge graphs. Lei et al. [21] employ LLMs as surrogate models, aiming to mimic and understand target recommender models by leveraging both natural language and latent spaces; after alignment, the LLMs can generate target items and provide recommendation explanations. However, existing methods either rely solely on a few entity words or keywords as explanations, or employ complex fine-tuning approaches to generate natural language explanations. This makes the explanations either unnatural or complex to use, requiring fine-tuning or modification of existing recommendation systems.

3 DRE METHODOLOGY
In this section, we detail the Data-level Recommendation Explanation (DRE). An overview of DRE is shown in Figure 2.

[Figure 2: Overview of DRE, which first aligns the explanation module and the recommender via Data-level Alignment, and then generates the explanation by incorporating details of the target item obtained by Target-aware User Preference Distillation.]

3.1 Data-level Alignment
To generate precise explanations for recommended results, we propose a data-level alignment method that achieves behavioral consistency between the recommendation module and the explanation module. Given a list of items $I = \{I_1, I_2, \ldots, I_N\}$ purchased by user $U$, the recommendation model $R$ predicts items $I_p$ that user $U$ might find interesting. To align the recommendation module and the explanation module, previous methods typically fine-tune the explanation module to also perform the recommendation prediction task, generating items $I_p$ consistent with the predictions of the recommendation model $R$. However, this approach inevitably reduces the text generation capability of the explanation module due to changes in its model structure and parameters. In this paper, we instead propose leveraging the in-context learning and reasoning abilities of the LLM to align the explanation module with the recommendation module: given inputs $I$ and outputs $I_p$ that are consistent with the recommendation model $R$, the LLM can learn this prediction pattern in context and explore the associated relationships to generate natural language explanations.

3.2 Target-aware User Preference Distillation
Relying solely on item IDs and item descriptions for recommendation explanations may fail to capture the details or the user's actual experiences with the item, which are crucial for users. Therefore, we propose to incorporate the reviews of the user-purchased items $I$ and of the target item $I_p$ predicted by the recommendation model $R$, to provide the explanation model with more item detail. Given a purchased item $I_i$ of user $U$, we retrieve $M$ reviews $C_i = \{C^i_1, C^i_2, \ldots, C^i_M\}$ of item $I_i$ written by other users from the database, where each $C^i_j$ is a paragraph of natural-language product review. Retrieving $M$ reviews for each purchased item of user $U$ yields a review set $C = \{C_1, C_2, \ldots, C_N\}$ containing $M \times N$ reviews by other users. Similarly, we retrieve $M$ reviews for the target item $I_p$, denoted $C_p = \{C^p_1, C^p_2, \ldots, C^p_M\}$, also written by other users. We assume that the item characteristics described in the review set $C$ are the key features user $U$ cares about, since user $U$ has bought these items. Therefore, we perform semantic matching between $C$ and $C_p$ to extract the item features that both interested the user in past purchases and are possessed by the target item $I_p$. We propose the target-aware user preference distillation method, which matches the target item reviews $C_p$ against $C$ to extract the information valuable for generating recommendation explanations. The description and reviews of an item are usually quite long, and not all of the information is helpful for generating explanations. For the target item $I_p$, we therefore first construct an overview item profile $F_p$ to distill the useful item features: using the product description $D_p$ and the review information $C_p = \{C^p_1, \ldots, C^p_M\}$ of $I_p$ as input, we prompt the LLM to generate the item profile $F_p$:

$F_p = \mathrm{Summ}(\{C^p_1, C^p_2, \ldots, C^p_M\}, D_p)$  (1)
where $F_p$ contains both the basic information of the target item and user usage experiences, and Summ is an LLM-based module prompted with the following instructions: “You are given an item's description and reviews. Respond with an item profile using the following format: item: {item name} description: {item description} other users' reviews: {item reviews}. Extract key features from the reviews.” However, not all of the product features mentioned in $F_p$ may be of concern to user $U$. Therefore, we extract the product features user $U$ cares about from the behavior-associated review set $C = \{C_1, C_2, \ldots, C_N\}$. Specifically, we use the item profile $F_p$ of the target item to filter the reviews in set $C_i$ of item $I_i$:

$F_i = \mathrm{Distill}(F_p, \{C^i_1, C^i_2, \ldots, C^i_M\}, D_i)$  (2)

where $D_i$ is the item description of item $I_i$, and Distill is an LLM-based module prompted with the following instructions: “Finish the history item profile using features relevant to the recommended item, strictly adhering to the following format: history item: {item name} genre: {item genre} relevant information: {item information} other users' reviews: {reviews}”, where the relevant information mainly describes similarities between the history item and the recommended item, and summarizes other users' reviews. By integrating these two parts of information, we obtain the target-aware item profiles $F = \{F_1, F_2, \ldots, F_N\}$ for the items user $U$ has purchased.

3.3 Explanation Generation
Finally, we integrate the item profile $F_p$ of the target item with the item profiles $F = \{F_1, F_2, \ldots, F_N\}$ of the purchased items. We employ an in-context learning approach and instruct the LLM as follows to generate a logically coherent recommendation explanation that aligns with the recommendation system $R$ and corresponds to the user's attention preferences:

$E_p = S(F_p, \{F_1, F_2, \ldots, F_N\})$  (3)

where $S$ is an LLM-based module that generates the recommendation explanation, instructed as follows: “Now you are a recommendation assistant; combined with the relevant history items, write an explanation of the recommended item. The format of the response is: item: {recommended item} recommend reason: {reason}”.
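Putting Eqs. (1)-(3) together, the whole DRE pipeline reduces to three prompted LLM calls; a minimal sketch in which llm is a hypothetical completion wrapper and the prompt strings abbreviate the instructions quoted above:

def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical chat-completion call

def summ(target_reviews, target_desc):
    # Eq. (1): profile F_p of the target item from its description and reviews.
    return llm("Given the item description and reviews, produce an item profile "
               "and extract key features from the reviews.\n"
               f"Description: {target_desc}\nReviews: {target_reviews}")

def distill(target_profile, item_desc, item_reviews):
    # Eq. (2): target-aware profile F_i of one purchased item, keeping only
    # features it shares with the recommended item.
    return llm("Produce a history-item profile keeping only information relevant "
               f"to this recommended item profile:\n{target_profile}\n"
               f"Description: {item_desc}\nReviews: {item_reviews}")

def generate_explanation(target_profile, history_profiles):
    # Eq. (3): final explanation E_p from all profiles, via in-context prompting.
    return llm("You are a recommendation assistant. Combine the history profiles "
               "with the recommended item profile and explain the recommendation.\n"
               f"Recommended: {target_profile}\nHistory: {history_profiles}")

def dre(purchased, target_desc, target_reviews):
    # purchased: list of (description, reviews) for the user's items I_1..I_N.
    f_p = summ(target_reviews, target_desc)
    f_hist = [distill(f_p, d, r) for d, r in purchased]
    return generate_explanation(f_p, f_hist)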
4 EXPERIMENTAL SETUP
4.1 Evaluation Metrics & Dataset
We employ two evaluation metrics in our experiments. (1) Aspect Score: We assume that the aspects mentioned in the review $C^p_U$ of the target item $I_p$ written by user $U$ are crucial to the user, and use the review $C^p_U$ as a reference for the explanation $E_p$. We first employ the LLM to extract the aspects of the review $C^p_U$. Subsequently, we measure the alignment between the recommendation explanation $E_p$ and the user's preferences by the extent of aspect overlap between $E_p$ and $C^p_U$:

$\mathrm{Aspect\_Score} = \frac{1}{N_a} \sum_{i=1}^{N_a} hit(i) \in [0, 1]$

where $N_a$ is the number of aspects in the user review $C^p_U$, and $hit(i) = 1$ when aspect $i$ is semantically matched in the recommendation explanation $E_p$, otherwise $hit(i) = 0$. (2) Rating Score: Following [21], to directly evaluate the quality of the generated explanation, we implement a three-level scoring criterion to quantitatively assess the response of the LLM: (i) RATING-1: poor explanation, reusing chunks of original sentences from the provided data; (ii) RATING-2: acceptable explanation, considering only one aspect of the user history and reviews, or explaining unrelated items together; (iii) RATING-3: satisfactory explanation. We employ the LLM to rate the generated explanations following these criteria and report the average rating over the test set. We use several categories of the Amazon Review dataset [25], including Cell Phones & Accessories, Clothing Shoes & Jewelry, and Home & Kitchen. We select 100 samples per category as the test set, ensuring that each browsing history contains at least 2 items and that every item has associated reviews.

Table 1: Recommendation explanation performance comparison. ‡ indicates significant improvement over ChatGPT with p ≤ 0.01 according to a Student's t-test.

                     Home & Kitchen    Clothing Shoes & Jewelry   Cell Phones & Accessories
Method               Asp (↑)  Rat (↑)  Asp (↑)  Rat (↑)           Asp (↑)  Rat (↑)
RecExplainer         0.6057   2.64     0.5628   2.68              0.6028   2.64
Mistral              0.7028   2.65     0.5757   2.79              0.6571   2.00
ChatGPT              0.6971   2.51     0.6362   2.86              0.6229   2.67
DRE-M                0.7142   2.68     0.6485   2.89              0.6857   2.57
DRE-C                0.7714‡  2.88†    0.6728‡  2.94‡             0.7400‡  2.90‡
DRE-C w/o Rev.       0.6914   2.64     0.6400   2.65              0.6542   2.66
DRE-C w/o Dist.      0.7100   2.78     0.5971   2.72              0.7142   2.87
DRE-C w/o Dist.+F_p  0.7371   2.75     0.6657   2.83              0.7200   2.87
DRE-C w/ F_p         0.7385   1.64     0.5814   2.06              0.6585   2.03

4.2 Comparison Methods
We compare DRE to a state-of-the-art LLM-based recommendation explanation method and several LLMs: (i) RecExplainer [21] introduces an explanation approach leveraging LLMs, employing three alignment methods in latent space: behavior alignment, intention alignment, and hybrid alignment. (ii) ChatGPT(1) is a closed-source LLM from OpenAI; we use the version gpt-3.5-turbo-0613 and conduct explanation via prompting with a single instruction using the same input data as our DRE. (iii) Mistral [1] is an open-source LLM; we use the mixture-of-experts version with 8 × 7 billion parameters and the same prompt as ChatGPT. We employ two variants of DRE, DRE-C and DRE-M, with ChatGPT and Mistral as backbones, respectively. We also evaluate several ablation models: (i) DRE w/o Rev.: we remove all reviews and use only the description as input.
(ii) DRE w/o Dist.: we directly summarize the description and reviews of each user-purchased item using Equation 1, without the Distill method of Equation 2. (iii) DRE w/o Dist.+$F_p$: based on DRE w/o Dist., we additionally use the description and reviews of the target item directly, without the Summ method of Equation 1. (iv) DRE w/ $F_p$: we generate the explanation using only $F_p$ as input to the LLM, without any information from the user-purchased items.
(1) https://chat.openai.com/

5 EXPERIMENTAL RESULTS
5.1 Main Results
Table 1 shows the performance of our proposed DRE and the baselines on both metrics. DRE shows superior performance on all metrics compared to the SOTA method RecExplainer and the respective LLM backbones. This indicates that, compared to latent-level alignment, our data-level alignment is capable of generating explanations of higher quality. Furthermore, the proposed target-aware user preference distillation method helps the model capture more user-preference information.

5.2 Ablation Study
To evaluate the effectiveness of each module in DRE, we conduct ablation studies with DRE-C; the results are shown in Table 1. DRE w/o Rev. achieves lower scores than the other ablation models, indicating the effectiveness of integrating review information into our approach. Additionally, because the information in reviews is complex, generating meaningful explanations requires extracting target-aware information; accordingly, DRE w/o Dist. also performs worse after the Distill module is removed from DRE.

5.3 Case Study
[Figure 3: An example of explanations generated by DRE and ChatGPT.
Recommended Item: Kobwa (TM) Bumper Case for Apple iPhone 5, (Black & Clear). History Items: BlueAnt Supertooth Light Bluetooth Speakerphone (Black); XS Earbuds with Microphone and Volume Control.
ChatGPT: “Based on your purchased history ...... I recommend the Kobwa (TM) Bumper Case for Apple iPhone 5 to you. This phone case aligns with your interest in iPhone accessories. ...... The black and clear color option matches your preference for sleek and minimalist designs (a hallucinated, non-existent user preference). Additionally, this case is specifically designed for the iPhone 5 and 5G models, which you have shown interest in through your history of viewing electronic items (does not combine target-aware information).”
DRE: “Based on your purchased history and other users' reviews, I recommend the Kobwa (TM) Bumper Case for Apple iPhone 5 to you. ...... While there may not be a direct relation between the Kobwa (TM) Bumper Case and the XS Earbuds in your viewing history, both items are accessories for Apple devices. Additionally, the BlueAnt Supertooth Light Bluetooth Speakerphone (Black) in your viewed history is an electronic accessory with a black color option, similar to the Kobwa (TM) Bumper Case. Both items have also garnered positive feedback from users. ...... the Kobwa (TM) Bumper Case for Apple iPhone 5 would be a suitable recommendation for you.”]
Figure 3 shows an example of recommendation explanations generated by ChatGPT and DRE based on information about the user-purchased items and the recommended item. The bold text in the explanations indicates the recommended item and the user-purchased items. The text in red indicates the shortcomings of the explanation.
The text in green shows target-aware information, and the text in blue represents consistent reviews from user $U$ for the user-purchased items and the recommended item. From this case, we find that ChatGPT fails to establish convincing and reasonable relationships between the recommended item and the user's preferences, whereas DRE provides target-aware information that is persuasive and aligned with the user's preferences. This observation demonstrates that our proposed target-aware user preference distillation can effectively filter target-aware information from reviews and descriptions."
},
{
"url": "http://arxiv.org/abs/2403.08261v1",
"title": "CoroNetGAN: Controlled Pruning of GANs via Hypernetworks",
"abstract": "Generative Adversarial Networks (GANs) have proven to exhibit remarkable\nperformance and are widely used across many generative computer vision\napplications. However, the unprecedented demand for the deployment of GANs on\nresource-constrained edge devices still poses a challenge due to huge number of\nparameters involved in the generation process. This has led to focused\nattention on the area of compressing GANs. Most of the existing works use\nknowledge distillation with the overhead of teacher dependency. Moreover, there\nis no ability to control the degree of compression in these methods. Hence, we\npropose CoroNet-GAN for compressing GAN using the combined strength of\ndifferentiable pruning method via hypernetworks. The proposed method provides\nthe advantage of performing controllable compression while training along with\nreducing training time by a substantial factor. Experiments have been done on\nvarious conditional GAN architectures (Pix2Pix and CycleGAN) to signify the\neffectiveness of our approach on multiple benchmark datasets such as\nEdges-to-Shoes, Horse-to-Zebra and Summer-to-Winter. The results obtained\nillustrate that our approach succeeds to outperform the baselines on\nZebra-to-Horse and Summer-to-Winter achieving the best FID score of 32.3 and\n72.3 respectively, yielding high-fidelity images across all the datasets.\nAdditionally, our approach also outperforms the state-of-the-art methods in\nachieving better inference time on various smart-phone chipsets and data-types\nmaking it a feasible solution for deployment on edge devices.",
"authors": "Aman Kumar, Khushboo Anand, Shubham Mandloi, Ashutosh Mishra, Avinash Thakur, Neeraj Kasera, Prathosh A P",
"published": "2024-03-13",
"updated": "2024-03-13",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"eess.IV"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Generative Adversarial Networks (GANs) have proven to exhibit remarkable\nperformance and are widely used across many generative computer vision\napplications. However, the unprecedented demand for the deployment of GANs on\nresource-constrained edge devices still poses a challenge due to huge number of\nparameters involved in the generation process. This has led to focused\nattention on the area of compressing GANs. Most of the existing works use\nknowledge distillation with the overhead of teacher dependency. Moreover, there\nis no ability to control the degree of compression in these methods. Hence, we\npropose CoroNet-GAN for compressing GAN using the combined strength of\ndifferentiable pruning method via hypernetworks. The proposed method provides\nthe advantage of performing controllable compression while training along with\nreducing training time by a substantial factor. Experiments have been done on\nvarious conditional GAN architectures (Pix2Pix and CycleGAN) to signify the\neffectiveness of our approach on multiple benchmark datasets such as\nEdges-to-Shoes, Horse-to-Zebra and Summer-to-Winter. The results obtained\nillustrate that our approach succeeds to outperform the baselines on\nZebra-to-Horse and Summer-to-Winter achieving the best FID score of 32.3 and\n72.3 respectively, yielding high-fidelity images across all the datasets.\nAdditionally, our approach also outperforms the state-of-the-art methods in\nachieving better inference time on various smart-phone chipsets and data-types\nmaking it a feasible solution for deployment on edge devices.",
"main_content": "Introduction Computer vision applications such as image-to-image translation, image synthesis, image generation, super res*E-mail:{aman.kumar1, khushboo.anand, shubham.mandloi, ashutosh.mishra1, avinash.thakur, neeraj.kasera}@oppo.com \u2020E-mail:{prathoshap@gmail.com} olution etc. have seen tremendous progress yielding highfidelity images with the advent of GANs[1]. The development of image-based GAN applications have in-turn accelerated the demand for deployment of such models on edge devices for the usage of the end consumers. However, the complexity of training such parameter heavy models to generate visually pleasing images result in high computational and memory overhead which acts as a bottleneck in deployment of GANs on mobile devices. For instance, the popular CycleGAN requires over 56.8G MACs (MultiplyAccumulate Operations) for generating a single image of resolution 256 \u00d7 256 pixels. On the other hand, Pix2Pix requires 18.6G MACs which is 4X compared to traditional Res-Net50 [2] architecture. This huge number of operations is not desirable for the deployment on edge devices. Hence, there is a need for compressing these networks by removing the redundant parameters and reducing the memory and computational consumption. Discriminative approaches such as image classification, object detection and semantic segmentation have been at the receiving end of undivided focus since these networks have surpassed human imagination but still, for the learning to saturate, these networks take huge amount of training time. For instance, the popular image classification model, Alexnet [3] has 60 million parameters and requires about 240 MB of memory while VGG16 [4] has 130 million parameters and has takes around 500 MB of memory. The research community has given unmitigated attention on the application of model compression techniques to accelerate deployment of image classification and object detection networks using techniques like weight quantization [5, 6], pruning [7, 8] and knowledge distillation [9, 10]. However, these methods are not directly applicable to generative models such as GANs. A lot of the recently proposed methods have tried to compress generative adversarial networks using the combined techniques of knowledge distillation [11, 12] and channel pruning[13, 14]. However, these approaches don\u2019t allow controllable compression to happen neither using a single technique nor through the combinaarXiv:2403.08261v1 [cs.CV] 13 Mar 2024 \ftion of multiple techniques. To address the above-mentioned issues, we propose a novel method for compressing the GAN using differentiable pruning method using the concept of hypernetwork. The compression is performed during the training regime. The proposed hypernetwork takes latent vector as an input and dynamically produces weights of a given layer of the generator network. This input latent vector decides the pruning rate for different layers in the network. Sparsification of latent vector is achieved via proximal gradient. Post sparsification, the latent vectors are passed through the hypernetwork that in turn generates the weight of the generator network. Since the latent vector and the weights of the generator network are covariant with each other, the sparsification of latent vectors leads to the pruning of the weights of the concerned network. The proposed method also helps in reducing the training time and inference time as compared to that of conventional GAN training method. 
Through the experiments on different conditional generative models on various datasets, the potential of the proposed method is revealed. The main contributions of the paper can be summarized as follows: 1. We propose CoroNetGAN, an approach based on differentiable pruning via hypernetworks for GAN compression. To the best of our knowledge, this is the first work that achieves model compression using controllable pruning via hypernetwork for conditional GANs. Our proposed approach compresses the GAN network in a controlled way by providing the compression rate as an input to the algorithm. 2. Compression is achieved simultaneously alongside training unlike the distillation based methods that involve teacher dependency [15]. CoroNetGAN outperforms state-of-the-art compression technique [15] on training time on all the datasets validating the effectiveness of our technique both on training latency and visual appearance of the generated images. This will be of great advantage in reducing the training time while maintaining the accuracy when training GANs on bigger datasets containing billion of images. 3. Our proposed approach, CoroNetGAN outperforms state-of-the-art conditional GAN compression methods on widely used Zebra \u2192Horse and Summer \u2192 Winter datasets. CoroNetGAN obtains reasonable qualitative and quantitative results on other datasets. CoroNetGAN also outperforms state-of-the-art compression techniques [15] on inference time. 2. Related Work 2.1. Generative Adversarial Networks GANs [1] have proven to generate realistic results on a variety of tasks. For instance, Isola et al. [16] propose Pix2Pix for paired image-to-image translation trained via the combination of adversarial loss and pixel-wise regression loss in order to ensure the visual quality of generated images. Later, [17] is proposed that helps to increase the resolution of translated images with multi-scale neural networks and edge maps. GANs have also been proposed to perform image deblurring [18], style transfer [19, 20], image super resolution [21] along with text-to-image generation [22]. Zhu et al. [23] propose CycleGAN for unpaired image-to-image translation. The algorithm trains generators on different domains of data through a weakly supervised setting using cycle consistency loss. The final objective is to convert the data from one domain to other without using any label information. 2.2. GAN Compression The tremendous resource consumption by GANs has garnered recent attention towards GAN compression. Wang et al.[24] proposes a novel quantization method and multiprecision quantization algorithm considering different sensitivities of discriminator and generator. Aguinaldo et al. [11] introduces the idea of knowledge distillation in GANs between large over-parameterized network and small few parameter networks optimized using joint and mean squared error loss functions. However, the only focus here is to compress the generator keeping the discriminator intact. Most usage of GANs in mobile devices is based on the application of image-to-image translation task. [12] distills the student discriminator to assist training of the student generator and also focused on image translation problem using Pix2Pix framework. Chang et al. in [25] focuses to mimic the functionality of BigGAN with a smaller compressed network and fewer parameters. Different devices with varied computing power require generators of different sizes. 
In order to accommodate this trade-off, SlimmableGAN[26] proposes flexible switching between the multiwidth configurations. Further, Ren et al.[15] overcomes the complex multi-stage compression process and proposes a single-stage GAN online distillation strategy to obtain the compressed model. However, these approaches use images from the teacher directly to distill knowledge. Zhang et al.[27] proposes the idea of investigating GAN compression from frequency perspective and introduces the idea of wavelet analysis. They decompose the image into frequency bands and perform distillation only on bands with higher frequency unlike naive methods that do not prioritize the high frequency. [28] aims to find crucial regions in the image using attention module. Considering the attention value \fimportant to the region, features are distilled from teacher to student. Recent works such as [29] introduce an Inceptionbased Residual block replacing the original residual blocks in CycleGAN and search for student generator from teacher generator via pruning followed by Similarity based Knowledge Distillation. Further, approaches integrating various compression techniques are also proposed. [13] combines model distillation, channel pruning and quantization and generate a unified optimization form which achieves superior trade-off compared with standalone compression techniques. Liu at al.[14] combines the idea of channel pruning and knowledge distillation and mainly expands the focus on accelerating unconditional GANs. 2.3. HyperNetworks Hypernetworks are a group of smaller networks that generates the weights for a larger network. These smaller neural networks have been used historically for vision[30], functional representation[31] and bayesian inference tasks[32]. Albeit the word hypernetwork has been coined recently, the concept of using dynamic parameter generation has been used by researchers for a long time [33]. Von der Malsberg et al.[34] indicates a possibility of dynamic modelling between a slow classical weight and a dynamic decaying connection. The technique to model short term memory by computing weight changes of another network was initiated by Schmidhuber et al.[35]. Parameter prediction through co-relation between different parameters of the neural network was extensively studied in [36]. A weight matrix is produced using a learnable lower dimensional matrix using a linear operation.[37] uses weight matrices as a factored representation and feed forward one-shot learners reducing the dimensionality of the hypernetwork.[38] proposes an approach to calculate the parameters for image transformation using a weight generating network. [39] proposes an approach for generating weights for visual question answering task. The parameter prediction network takes input the questions post which the network predicts weights of the main network. In addition, they also use hashing of parameters to reduce the size of the final matrix of parameters. The concept of dynamic filters has been used for image superresolution [30]. These filters are computed based on input using a similar concept to hypernetwork. 3. Methodology GAN consist of a generator and a discriminator network employed in a min-max game. The proposed method allows compression of the generator network while training. Compression is achieved using differentiable meta pruning which is based on the idea of hypernetwork. Hypernetwork is responsible for generating the weights of the generator network for each of its layer. 
The input to the hypernetwork is a latent vector, and the output is a weight matrix of the generator network. During the forward pass, the latent vector is given as input to the hypernetwork to generate the weights of the generator. During back-propagation, the gradients flow into the hypernetwork instead of the main network. The hypernetwork is designed such that its output is covariant with the input latent vector, and the proximal gradient prunes output channels of the generator network by eliminating redundant parameters automatically.

3.1. HyperNetwork Design
The hypernetwork consists of three layers. The latent layer takes the latent vectors as input and computes a latent matrix. The embedding layer projects the elements of the latent matrix into an embedding space. The final layer converts the embedded vectors into the final output. The design follows [40]. As an example, consider the generator to be an L-layer convolutional neural network. Each layer of the network has its own latent vector, which is responsible for generating the weights of that layer; the size of the latent vector equals the number of output channels of the layer. For instance, consider the l-th convolutional layer with $n \cdot c \cdot w \cdot h$ parameters, where $n$ and $c$ are the numbers of output and input channels and $w \times h$ is the kernel size. The latent vector of this l-th layer is then $v^l \in \mathbb{R}^n$, and the previous layer has latent vector $v^{l-1} \in \mathbb{R}^c$ (the previous layer's output channels are the current layer's input channels). The hypernetwork takes the latent vectors of the current layer ($v^l$) and the previous layer ($v^{l-1}$) as input and outputs the weight matrix of the l-th layer of the generator network. First, the latent layer of the hypernetwork computes a latent matrix from the two latent vectors:

$V^l = v^l \cdot (v^{l-1})^T + B_0, \quad V^l, B_0 \in \mathbb{R}^{n \times c}$  (1)

where $(\cdot)^T$ denotes the transpose and $\cdot$ denotes matrix multiplication. Subsequently, the second layer of the hypernetwork projects every element of the latent matrix into an m-dimensional embedding space:

$S^l_{ij} = V^l_{ij} w^l_1 + b^l_1, \quad i = 1..n, \; j = 1..c, \quad S^l_{ij}, w^l_1, b^l_1 \in \mathbb{R}^m$  (2)

Here, $w^l_1$ and $b^l_1$ are distinct for different elements of the matrix; the subscript $(i, j)$ is omitted on them for readability. Over all elements of the matrix, the vectors $w^l_1$, $b^l_1$, and $S^l_{ij}$ together form 3D tensors, i.e., $W^l_1, B^l_1, S^l_1 \in \mathbb{R}^{n \times c \times m}$.

[Figure 1: Illustration of the proposed algorithm for compressing GANs via controllable differentiable pruning. A latent vector is attached to each convolution layer of the generator and generates that layer's weights via the hypernetwork; sparsification of the latent vector prunes the corresponding generator weights. The design makes the latent vector and its corresponding weight matrix covariant with each other, and the generator produces visual results using the weight matrix computed by the hypernetwork.]

After the second step, the final layer of the hypernetwork converts the embedding vectors into the output $F^l_{ij}$, which serves as the weight matrix of the generator network. This is done by multiplying the embedded vectors $S^l_{ij}$ by an explicit matrix:

$F^l_{ij} = w^l_2 \cdot S^l_{ij} + b^l_2, \quad i = 1..n, \; j = 1..c, \quad F^l_{ij}, b^l_2 \in \mathbb{R}^{wh}, \; w^l_2 \in \mathbb{R}^{wh \times m}$  (3)

Again, $w^l_2$ and $b^l_2$ are unique per element, and the subscript $(i, j)$ is omitted on them for readability. Over all elements, these form high-dimensional tensors, i.e., $W^l_2 \in \mathbb{R}^{n \times c \times wh \times m}$ and $B^l_2, F^l \in \mathbb{R}^{n \times c \times wh}$. Combining (1), (2), and (3), the functionality of the proposed approach can be written collectively as:

$F^l = h(v^l, v^{l-1}; W^l, B^l)$  (4)

where $h(\cdot)$ denotes the architecture above. The final output $F^l$ is used as the weight parameter of the l-th layer. The hypernetwork is designed such that the generator's weight matrix is covariant with its input latent vector: pruning an element of the latent vector automatically removes the corresponding slice of the final weight matrix $F^l$. Figure 1 depicts the overall workflow of the proposed CoroNetGAN. The design also accommodates residual connections: for a residual or skip connection, the input latent vector is formed by concatenating the latent vector of the previous layer with that of the layer from which the skip connection originates. The resulting input latent vector, together with the latent vector of the current layer, is used to create the latent matrix, followed by steps (2) and (3) to generate the weight matrix of the convolution layer.
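A minimal numpy sketch of Eqs. (1)-(4), producing an n×c×w×h convolution weight from the two latent vectors; shapes follow the text, and the random initialization is purely illustrative:

import numpy as np

n, c, w, h, m = 8, 4, 3, 3, 16   # output/input channels, kernel size, embedding dim
v_l, v_prev = np.random.randn(n), np.random.randn(c)

B0 = np.random.randn(n, c)
W1, B1 = np.random.randn(n, c, m), np.random.randn(n, c, m)
W2, B2 = np.random.randn(n, c, w * h, m), np.random.randn(n, c, w * h)

V = np.outer(v_l, v_prev) + B0               # Eq. (1): latent matrix, (n, c)
S = V[..., None] * W1 + B1                   # Eq. (2): per-element embedding, (n, c, m)
F = np.einsum("ncqm,ncm->ncq", W2, S) + B2   # Eq. (3): per-element kernel slice, (n, c, w*h)

conv_weight = F.reshape(n, c, w, h)          # Eq. (4): weight of the l-th conv layer
# Entry j of v_l generates row j of V and hence slice conv_weight[j]; masking
# that entry away removes the whole output-channel slice, which is the
# covariance between latent vector and weight matrix described above.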
This is done by multiplying the embedded vectors $S^l_{ij}$ by an explicit matrix as follows:

$F^l_{ij} = w^l_2 \cdot S^l_{ij} + b^l_2, \quad i = 1, \dots, n, \; j = 1, \dots, c, \quad (3)$

where $F^l_{ij}, b^l_2 \in \mathbb{R}^{wh}$ and $w^l_2 \in \mathbb{R}^{wh \times m}$. Again, $w^l_2$ and $b^l_2$ are unique for each element, and the subscript $(i, j)$ is omitted for easier interpretation. The vectors $w^l_2$, $b^l_2$ and $F^l_{ij}$ for all the elements together form higher-dimensional tensors, i.e., $W^l_2 \in \mathbb{R}^{n \times c \times wh \times m}$ and $B^l_2, F^l \in \mathbb{R}^{n \times c \times wh}$. Combining (1), (2) and (3), the functionality of the proposed approach can be collectively written as:

$F^l = h(v^l, v^{l-1}; W^l, B^l), \quad (4)$

where $h(\cdot)$ denotes the functionality of the above architecture. The final output $F^l$ is used as the weight parameter of the l-th layer. The hypernetwork is designed in such a way that the weight matrix of the generator is covariant with its corresponding input latent vector: pruning an element in the latent vector automatically removes the corresponding slice in the final weight matrix ($F^l$). Figure 1 depicts the overall workflow of the proposed CoroNetGAN.

While designing the hypernetwork, we also handle residual connections in the network. In the case of residual or skip connections, we take the input latent vector to be the combination of the latent vector of the previous layer and that of the layer from which the skip connection originates: we concatenate both input latent vectors into one single input latent vector. The resulting input latent vector, along with the latent vector of the current layer, is used to create the latent matrix, followed by the steps in Eqs. (2) and (3) for generating the weight matrix of the convolution layer.

3.2. Vector Sparsity using Proximal Gradient

The differentiable property of the algorithm comes from the use of the proximal gradient. The proximal gradient helps sparsify the latent vector by searching for the potential candidates. Since the latent vector is covariant with the weight matrix of the generator network, this leads to compression of the generator network. During training, the parameters of the hypernetwork are updated using the Stochastic Gradient Descent (SGD) optimization algorithm. During back-propagation, gradients flow from the generator network into the hypernetwork. The latent vectors are updated using the proximal gradient [40], which yields sparsified input latent vectors:

$v[k+1] = \mathrm{prox}_{\lambda\mu R}\left(v[k] - \lambda\mu \nabla \mathcal{L}(v[k])\right) \quad (5)$

The proximal gradient algorithm forces the potential elements of the latent vectors to approach zero more quickly than the others, without any human effort or interference. Because the proximal operator has a closed-form solution and SGD is used, the whole solution is recognized as approximately differentiable.

3.3. Network Pruning

Our proposed method allows the weight matrix to be covariant with its corresponding latent vector. Hence, sparsification of the latent vector leads to pruning of the corresponding weights of the CNN layer in the generator network. Our training regime consists of two stages, namely a searching stage and a converging stage. During the searching stage, the proximal gradient helps identify the potential candidates of the latent vector. Therefore, after the searching stage, we obtain the sparsified latent vector ($\hat{v}^l$), whose elements are either zero or approaching zero.
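As a sketch of the update in Eq. (5), assuming the regulariser $R$ is the L1 norm (whose proximal operator is the closed-form soft-threshold), the latent vector can be sparsified as follows; the step sizes are placeholder assumptions.

```python
import torch

def soft_threshold(v, thresh):
    # Closed-form proximal operator of the L1 norm: prox_{t||.||_1}(v).
    return torch.sign(v) * torch.clamp(v.abs() - thresh, min=0.0)

def proximal_step(v, grad, lr=0.01, mu=0.5):
    # Eq. (5): gradient step on the loss, then the proximal operator, which
    # pushes weak latent elements toward exactly zero (sparsification).
    return soft_threshold(v - lr * mu * grad, lr * mu)

v = torch.randn(32)
grad = torch.randn(32)          # dL/dv, obtained via back-propagation in practice
v = proximal_step(v, grad)      # some entries become exactly 0
```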
We use a mask ($m^l$) on the sparsified latent vector with a predefined threshold ($\tau$). The masking operation compares every element of the latent vector with the threshold: if the element is greater than the threshold, the returned value is one, otherwise zero. The sparsified latent vector $\hat{v}^l$ is then pruned with the help of the computed mask ($m^l$), as sketched after Algorithm 1 below. Once the target compression ratio is achieved, the algorithm shifts from the searching to the converging stage. In the converging stage, the hypernetwork is discarded, and the training of the generator follows the conventional GAN training procedure. Through extensive experimentation, we observe that the number of epochs in the searching stage is much smaller than in the converging stage. The pseudo-code of the proposed algorithm is given in Algorithm 1.

Algorithm 1: CoroNetGAN Pseudo Code
    total_epochs ← total number of epochs
    target_flops ← target compression ratio
    latent vectors (v1, v2, ..., vi)
    converging ← False
    epochs ← 0
    // Compression via differentiable pruning
    while converging ≠ True do
        Sample m images {z1, z2, ..., zi} from the given dataset
        Sample m ground-truths {x1, x2, ..., xi}
        Update hypernetwork using SGD: ∇_{θh} (1/m) Σ_{i=1}^{m} log(1 − D(G(zi)))
        Update discriminator using SGD: ∇_{θd} (1/m) Σ_{i=1}^{m} {log D(xi) + log(1 − D(G(zi)))}
        Compress latent vectors using the proximal gradient: v[k+1] = prox_{λμR}(v[k] − λμ∇L(v[k]))
        epochs ← epochs + 1
        if flops − target_flops ≤ threshold then
            converging ← True
        end if
        if epochs ≥ total_epochs then
            break
        end if
    end while
    // Finetuning
    while epochs ≤ total_epochs do
        Sample m images {z1, z2, ..., zi} from the given dataset
        Sample m ground-truths {x1, x2, ..., xi}
        Update generator using SGD: ∇_{θg} (1/m) Σ_{i=1}^{m} log(1 − D(G(zi)))
        Update discriminator using SGD: ∇_{θd} (1/m) Σ_{i=1}^{m} {log D(xi) + log(1 − D(G(zi)))}
        epochs ← epochs + 1
    end while
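Below is a hedged sketch of the masking step described above: the sparsified latent vector is binarised against the threshold τ and, because the weight matrix is covariant with the latent vector, the surviving indices select which output-channel slices are kept. Names and the threshold value are assumptions.

```python
import torch

# Sketch of the masking step (names assumed): binarise the sparsified latent
# vector against threshold tau, then prune the covariant weight slices.
def prune_layer(v_hat, weight, tau=1e-3):
    mask = (v_hat.abs() > tau)                 # m^l: 1 if element > tau, else 0
    kept = mask.nonzero(as_tuple=True)[0]
    v_pruned = v_hat[kept]                     # pruned latent vector
    w_pruned = weight[kept]                    # drop the matching output channels
    return v_pruned, w_pruned

v_hat = torch.tensor([0.8, 0.0, 1e-5, -0.6])
weight = torch.randn(4, 3, 3, 3)               # (out_ch, in_ch, k, k)
v_p, w_p = prune_layer(v_hat, weight)          # 2 of 4 output channels survive
```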
4. Experiments

4.1. Experiment Setting

4.1.1 Models and Datasets

We evaluate our approach on the following models to demonstrate the effectiveness of the proposed method:
1. Pix2Pix [16] for paired image-to-image translation with the original U-Net generator architecture.
2. CycleGAN [23] for unpaired image-to-image translation, using a ResNet architecture to transform an image from a source domain into the desired target domain.
3. Deep Convolutional Generative Adversarial Network (DCGAN) [41], which uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively.

For quantitative and qualitative evaluation, four datasets are utilised: Edges→Shoes, Horse↔Zebra, Summer→Winter and CIFAR-10.
1. Edges→Shoes [16] is a paired image-to-image translation dataset in which edge images of shoes are mapped to their corresponding complete shoe images. The dataset consists of 49,825 images.
2. Horse↔Zebra [23] contains images originally from ImageNet [42]. It is an unpaired image-to-image translation dataset used for translating horse images to zebra and vice versa. In our experiments, the training set includes 1,067 horse images and 1,334 zebra images.
3. Summer→Winter [23] is also an unpaired image-to-image translation dataset, translating summer images to winter. We use 1,231 summer images for training.
4. CIFAR-10 [43] consists of 50,000 training images and 10,000 test images across 10 different classes.

[Figure 2. Graphical representation of training time (in minutes) and FID for Pix2Pix (left) on Edges→Shoes and CycleGAN (middle, right) on Horse→Zebra and Summer→Winter, respectively. The graphs show that the total training time of our proposed approach is significantly lower than that of OMGD [15]. For CycleGAN on Summer→Winter, our algorithm outperforms OMGD [15] on both training time and FID.]

[Figure 3. Samples generated from our approach. The first row contains translated images from Zebra→Horse; the second row contains translated images from Horse→Zebra.]

CoroNetGAN with the Pix2Pix architecture is benchmarked on the Edges→Shoes dataset, while CoroNetGAN with CycleGAN is benchmarked on the Horse↔Zebra and Summer→Winter datasets. Although our proposed approach focuses on compression for conditional GANs, we also made initial attempts to apply it to an unconditional GAN (specifically DCGAN).

[Figure 4. Qualitative comparison of CoroNetGAN with the CycleGAN architecture on the Summer→Winter dataset against the original CycleGAN [23], GAN Compression [44] and OMGD [15]. Our approach generates visually realistic images and outperforms all the other algorithms on the FID metric.]

4.1.2 Implementation Details

We train our proposed approach on a single NVIDIA Tesla V100 GPU using the PyTorch deep learning framework. For the algorithm to compress the network, a target compression ratio needs to be selected. When the difference between the actual and target compression ratio falls below 2%, pruning stops and the model moves from the compression state to the fine-tuning state. The number of parameters in the hypernetwork is proportional to the size of the embedding space; for our experiments, the embedding space is set to 8 throughout. The learning rate is set to 0.0002. The batch size is set to 4 for Pix2Pix and 1 for CycleGAN in all experiments across the different datasets. The sparsity regularization factor for the proximal gradient is set to 0.5 across all experiments.

[Figure 5. Qualitative comparison of CoroNetGAN with the Pix2Pix architecture on the Edges→Shoes dataset against the original Pix2Pix [16], GAN Compression [44] and OMGD [15]. Our approach generates visually plausible images compared to state-of-the-art methods.]

4.1.3 Evaluation Setting

For quantitative performance comparison, we adopt the Frechet Inception Distance (FID) [45] as the common evaluation metric. FID was developed specifically for assessing the performance of GANs: it evaluates the quality of images produced by generative models by comparing the distributions of features, extracted with an InceptionV3 [46] network, of real and generated images. A lower FID score indicates higher similarity between the two distributions and thus better quality of the generated images. We evaluate FID for different architectures with our approach on multiple datasets and compare it against existing methodologies.
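As a sketch of the FID protocol just described, one common implementation is the torchmetrics FrechetInceptionDistance, which accumulates InceptionV3 feature statistics for real and generated images. The paper does not state which FID implementation it uses, so this library choice and the toy batch sizes are illustrative assumptions.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Hedged FID sketch; in practice thousands of images are needed for a
# reliable estimate, not these toy-sized batches.
fid = FrechetInceptionDistance(feature=2048)  # InceptionV3 pooled features

real = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)

fid.update(real, real=True)   # accumulate statistics of real images
fid.update(fake, real=False)  # accumulate statistics of generated images
score = fid.compute()         # lower = distributions are more similar
```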
4.2. Experimental Results

4.2.1 Quantitative Results

We evaluate our approach on different models and datasets using the evaluation setting described in the previous section and report quantitative results against the corresponding state-of-the-art methods. The results can be summarized as follows:

Pix2Pix: We incorporate Pix2Pix with its original U-Net architecture into our proposed approach and report the experimental results in Table 1. Our approach achieves the second-best FID score on the Edges→Shoes dataset, outperforming [28, 27, 47]. Although our FID score is higher than that of [15], our approach outperforms it in terms of training time: as Figure 2 illustrates, our approach significantly reduces training time at the cost of a higher FID score compared to [15].

CycleGAN: Similar to previous works, we include the ResNet-style CycleGAN in our method and report the results in Table 1. On the Zebra→Horse dataset, our results outperform all state-of-the-art approaches, achieving the best FID score of 32.3 at 75% compression. Additionally, our approach improves over all existing baselines on the Summer→Winter dataset with an FID score of 72.3; as illustrated in Figure 2, we also outperform [15] there on both training time and FID. Furthermore, our approach with CycleGAN beats the results of [28, 27, 47, 44, 48] on the Horse→Zebra dataset. Even though we obtain a higher FID score than [15, 29], CoroNetGAN outperforms [15] on training time, as illustrated in Figure 2. Note that we report CoroNetGAN results at compression rates of 75% and 85% for the CycleGAN architecture, unlike the 95% used for Pix2Pix, since it is difficult to compress the CycleGAN architecture beyond 85% due to its high model complexity.

DCGAN: We demonstrate the applicability of our approach to unconditional GANs. For this evaluation, we adopt DCGAN [50] on the CIFAR-10 dataset [43], whose original FID score is 45.8. Our approach achieves an FID score of 56.3 at a 50% compression ratio, outperforming random pruning, which achieves an FID score of 68.8.

Inference Time Comparisons: Additionally, we conduct a comparative analysis of the inference time between the 95%-compressed model obtained from our approach and OMGD [15]. Both models are trained on Edges→Shoes and evaluated on diverse smartphone chipsets and data types. The inference results, presented in Table 3, demonstrate that our proposed model achieves superior inference time compared to OMGD.

4.2.2 Qualitative Results

We further show visualization results of our proposed method in comparison with state-of-the-art methodologies in Figures 3, 4 and 5, demonstrating the effectiveness of our approach. As illustrated, our method generates high-fidelity images comparable to other state-of-the-art approaches across multiple datasets. We believe our approach generates realistic images because the compression state of our algorithm forces the generator to produce visually plausible images while competing in the min-max game.

4.2.3 Ablation Studies

Our proposed method for GAN compression shows promising results and outperforms state-of-the-art methods on some conditional GANs.
We perform extensive ablation studies to further demonstrate the effectiveness of hypernetworks for GAN compression on the U-Net-based architecture for Pix2Pix. We tried an exhaustive hyperparameter search and fine-tuning of the learning rate; all modifications resulted in a negligible change in the overall FID score. We also enabled a learning-rate scheduler to check for improvements in model performance, but quantitatively no major change was observed. We further tried enlarging the layer structure of the hypernetwork by increasing the dimension of the embedding space; however, this increased the training time with only a small change in the overall FID score.

Table 1. Performance comparison of CoroNetGAN with state-of-the-art algorithms on Pix2Pix and CycleGAN architectures. Our approach achieves the best FID with CycleGAN on the Zebra→Horse and Summer→Winter datasets. Our results also outperform [28, 27, 47, 11, 48] and achieve competitive FID on the Horse→Zebra dataset. We also achieve the second-best FID score on the Edges→Shoes dataset, beating the results of [28, 27, 47].

| Model | Dataset | Paper | Params (M) | FLOPs (G) | MACs (G) | FID |
|---|---|---|---|---|---|---|
| Pix2Pix | Edges→Shoes | Original [16] | 54.4 | – | 18.6 | 34.31 |
| | | Region-Aware [28] | 13.61 (4.00×) | 1.56 | – | 77.69±3.14 |
| | | Wavelet KD [27] | 13.61 (4.00×) | 1.56 | – | 80.13±2.18 |
| | | DMAD [47] | 2.13 (25.5×) | – | 2.99 (6.2×) | 46.95 |
| | | OMGD [15] | 3.404 (16.0×) | – | 1.219 (15.3×) | 25 |
| | | CoroNetGAN (75%) | 13.225 | 4.8879 | – | 39.1 |
| | | CoroNetGAN (95%) | 4.721 | 1.2551 | – | 54.3 |
| CycleGAN | Horse→Zebra | Original [23] | 11.3 | – | 56.8 | 61.53 |
| | | Region-Aware [28] | 1.61 (7.08×) | 7.29 | – | 60.01±5.22 |
| | | Wavelet KD [27] | 1.61 (7.08×) | 7.29 | – | 61.65±4.73 |
| | | DMAD [47] | 0.42 (26.9×) | – | 3.97 (14.3×) | 62.41 |
| | | Teachers Do More Than Teach [29] | – | – | 2.56 | 53.48 |
| | | GAN Compression [44] | 0.34 (33.3×) | – | 2.67 (21.2×) | 64.95 |
| | | Revisiting Discriminator in GAN Compression [48] | – | – | 2.40 | 59.31 |
| | | OMGD [15] | 0.137 (82.5×) | – | 1.408 (40.3×) | 51.92 |
| | | CoroNetGAN (75%) | 2.685 | 0.217 | – | 57.7 |
| | | CoroNetGAN (85%) | 1.670 | 0.1347 | – | 60.9 |
| | Zebra→Horse | Original [23] | 11.3 | 49.64 | – | 138.07±4.01 |
| | | Region-Aware [28] | 1.61 (7.08×) | 7.29 (6.80×) | – | 137.03±3.03 |
| | | Wavelet KD [27] | 1.61 (7.08×) | 7.29 (6.80×) | – | 138.84±1.47 |
| | | DMAD [47] | 0.30 (37.7×) | – | 3.50 | 139.3 |
| | | CoroNetGAN (75%) | 2.685 | 0.217 | – | 32.3 |
| | Summer→Winter | Original [23] | 11.3 | – | 56.8 | 79.12 |
| | | DMAD [47] | 0.24 (47.1×) | – | 3.18 (17.9×) | 78.24 |
| | | OMGD [15] | 0.137 (82.5×) | – | 1.408 (40.3×) | 73.79 |
| | | Auto-GAN [49] | – | 4.34 | – | 78.33 |
| | | CoroNetGAN (75%) | 2.685 | 0.217 | – | 72.3 |
| | | CoroNetGAN (85%) | 1.670 | 0.1347 | – | 74.7 |

Table 2. Generator and discriminator compression of CoroNetGAN with the Pix2Pix architecture on the Edges→Shoes dataset. Compressing both the generator and the discriminator helps improve the FID score.

| Model | Dataset | Method | Params (M) | FLOPs (G) | FID |
|---|---|---|---|---|---|
| Pix2Pix | Edges→Shoes | Original [16] | 54.41 | – | 34.31 |
| | | CoroNetGAN (75%) | 13.225 (24.31%) | 4.8879 (26.94%) | 39.1 |
| | | CoroNetGAN (G+D) (75%) | Generator 13.321 (24.48%), Discriminator 0.725 (26.23%) | Generator 4.8993 (27%), Discriminator 0.4767 (26.74%) | 38.6 |
Table 3. Inference time comparison between the model compressed to 95% by our methodology and the model compressed by OMGD, on different processors. Both models are trained on the Edges→Shoes dataset. The inference time is computed for an input resolution of 256×256. The 8-bit quantization of the compressed model results in increased processing time due to the presence of a higher number of quantization and de-quantization blocks compared to the other data types.

| Chipset | d-type | Model | GPU Inference Time (CL) (ms) |
|---|---|---|---|
| Qualcomm Snapdragon SM8450 | 32-bit | Ours | 12.5419 |
| | | OMGD [15] | 15.3378 |
| | 16-bit | Ours | 11.794 |
| | | OMGD [15] | 15.283 |
| | 8-bit | Ours | 12.244 |
| | | OMGD [15] | 16.0191 |
| Dimensity 1200-Max Octa | 32-bit | Ours | 20.7268 |
| | | OMGD [15] | 21.1919 |
| | 16-bit | Ours | 20.1961 |
| | | OMGD [15] | 20.9635 |
| | 8-bit | Ours | 20.9972 |
| | | OMGD [15] | 21.541 |

Compression of both generator and discriminator: To evaluate the significance of our approach, we design a variant, CoroNetGAN (G+D compression), which compresses both the generator and the discriminator to 75% during training. As shown in Table 2, this ablation improves the FID score from 39.1 to 38.6, as the generator produces better results when the discriminator is compressed alongside it.

Generator finetuning through HyperNetwork: We also tried fine-tuning the weights of the generator produced in the compression state through the hypernetwork itself, but we did not find any significant improvement in the evaluation metric.
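For context on the Params (M) and compression-ratio figures reported in Tables 1-3, a simple utility along the following lines suffices; the stand-in models below are hypothetical, not the actual generators.

```python
import torch.nn as nn

# Hedged utility sketch: parameter count and the resulting compression ratio,
# as reported in the Params(M) columns above.
def params_millions(model: nn.Module) -> float:
    return sum(p.numel() for p in model.parameters()) / 1e6

def compression_ratio(original: nn.Module, pruned: nn.Module) -> float:
    return 1.0 - params_millions(pruned) / params_millions(original)

# Example with stand-in models (not the actual generators):
big = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 64, 3))
small = nn.Sequential(nn.Conv2d(3, 16, 3), nn.Conv2d(16, 16, 3))
print(f"{params_millions(big):.3f}M -> {params_millions(small):.3f}M "
      f"({compression_ratio(big, small):.0%} pruned)")
```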
},
{
"url": "http://arxiv.org/abs/2403.03890v1",
"title": "Hierarchical Diffusion Policy for Kinematics-Aware Multi-Task Robotic Manipulation",
"abstract": "This paper introduces Hierarchical Diffusion Policy (HDP), a hierarchical\nagent for multi-task robotic manipulation. HDP factorises a manipulation policy\ninto a hierarchical structure: a high-level task-planning agent which predicts\na distant next-best end-effector pose (NBP), and a low-level goal-conditioned\ndiffusion policy which generates optimal motion trajectories. The factorised\npolicy representation allows HDP to tackle both long-horizon task planning\nwhile generating fine-grained low-level actions. To generate context-aware\nmotion trajectories while satisfying robot kinematics constraints, we present a\nnovel kinematics-aware goal-conditioned control agent, Robot Kinematics\nDiffuser (RK-Diffuser). Specifically, RK-Diffuser learns to generate both the\nend-effector pose and joint position trajectories, and distill the accurate but\nkinematics-unaware end-effector pose diffuser to the kinematics-aware but less\naccurate joint position diffuser via differentiable kinematics. Empirically, we\nshow that HDP achieves a significantly higher success rate than the\nstate-of-the-art methods in both simulation and real-world.",
"authors": "Xiao Ma, Sumit Patidar, Iain Haughton, Stephen James",
"published": "2024-03-06",
"updated": "2024-03-06",
"primary_cat": "cs.RO",
"cats": [
"cs.RO",
"cs.AI",
"cs.CV",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "This paper introduces Hierarchical Diffusion Policy (HDP), a hierarchical\nagent for multi-task robotic manipulation. HDP factorises a manipulation policy\ninto a hierarchical structure: a high-level task-planning agent which predicts\na distant next-best end-effector pose (NBP), and a low-level goal-conditioned\ndiffusion policy which generates optimal motion trajectories. The factorised\npolicy representation allows HDP to tackle both long-horizon task planning\nwhile generating fine-grained low-level actions. To generate context-aware\nmotion trajectories while satisfying robot kinematics constraints, we present a\nnovel kinematics-aware goal-conditioned control agent, Robot Kinematics\nDiffuser (RK-Diffuser). Specifically, RK-Diffuser learns to generate both the\nend-effector pose and joint position trajectories, and distill the accurate but\nkinematics-unaware end-effector pose diffuser to the kinematics-aware but less\naccurate joint position diffuser via differentiable kinematics. Empirically, we\nshow that HDP achieves a significantly higher success rate than the\nstate-of-the-art methods in both simulation and real-world.",
"main_content": "Introduction Learning efficient visual manipulation strategies in robotics is challenging due to diverse environments, objects, and robot trajectories. The choice of policy representation strongly influences agent performance. One way of parameterising the policy is to directly map visual observations to robot commands, e.g., joint position or velocity actions [18, 22, 27, 39]. These approaches make the least assumptions of the task and environment and retain the flexible control of the over-actuated, but they often suffer from low sample efficiency and poor generalisation ability, especially for long-horizon tasks [20, 34]. Recent advances in learning next-best-pose (NBP) agents [6, 7, 15\u201317, 20, 34, 43] have significantly improved 1Code and videos are available in our project page. High-Level Agent Denoising next-best pose Figure 1. We introduce HDP, a hierarchical agent for robotic manipulation. At the high-level, HDP learns to predict the next-best end-effector pose. Conditioned on the current and the predicted pose (red), a diffusion model generates an action trajectory for the robot to follow (blue). In contrast, the trajectories generated by classic planners (yellow) cannot be executed due to violating environment constraints, e.g., the hinge of the box. the sample efficiency and performance for robotic manipulation. Instead of learning continuous actions, NBP agents directly predict a distant \u201ckeyframe\u201d [17], a next-best endeffector pose, and use a predefined motion planner to compute a trajectory for the agent to follow. However, as the motion planner is unaware of the task context, it will fail to perform tasks that require understanding the environment context, e.g., dynamics. For example, in Fig. 1 to open the box, the agent has to understand the unknown physics properties of the hinge, e.g., the resistance force, and only a specific curved trajectory can be successfully executed. In this work, we introduce Hierarchical Diffusion Policy (HDP), a hierarchical multi-task agent that combines the best of both worlds. HDP factorises a manipulation policy by chaining a high-level NBP agent with a low-level learned controller. At the high level, HDP takes the 3D visual observations and language instructions as the inputs, and predicts a 6-DoF next-best end-effector pose. At the high level, HDP entails the capability of understanding the visual environment and language instructions and performing long-horizon task-level decision-making. At the low level, given the high-level 6-DoF end-effector pose action as a goal, HDP casts the control task as a context-aware 6-DoF pose-reaching task. We introduce a novel kinematics-aware low-level agent, Robot Kinematics Diffuser (RK-Diffuser), a diffusion-based policy [5] that directly generates the mo1 arXiv:2403.03890v1 [cs.RO] 6 Mar 2024 \fFigure 2. We focus on learning multi-task language-guided agent for robotic manipulation. Unlike a standard motion planner that only samples an arbitrary trajectory to the end pose. tion trajectory via conditional sampling and trajectory inpainting. Specifically, instead of generating the end-effector pose trajectories as in Chi et al. [5], Xian et al. [40] and solving the robot inverse kinematics, RK-Diffuser learns both end-effector pose and robot joint position diffusion, and distill the accurate but kinematics-unaware end-effector pose trajectory into the joint position trajectory via differentiable robot kinematics. 
RK-Diffuser achieves accurate trajectory generation and maximum control flexibility while avoiding violations of the robot kinematic constraints, which are a common issue with inverse kinematics solvers. In our experiments, we empirically analyse HDP on a wide range of challenging manipulation tasks in RLBench [19]. We show that (1) RK-Diffuser generally achieves a higher success rate on goal-conditioned motion generation; (2) the proposed hierarchical agent, HDP, outperforms the flat baseline agents and other hierarchical variants; and (3) HDP can be directly trained on a real robot with only 20 demonstrations on a challenging oven-opening task, with a high success rate.

2. Related Works

2.1. End-to-End Visual Manipulation Agents

End-to-end manipulation approaches [18, 22, 27, 39] make the fewest assumptions about objects and tasks and learn a direct mapping from RGB images to a robot action, but tend to be sample-inefficient. Two recent directions have emerged to combat this sample inefficiency: (1) the next-best pose (NBP) action mode, which learns to directly predict a distant “keyframe” [17]; and (2) 3D action-value maps [20], which align the 3D task space and the action space by learning 3D voxel-based action-value maps as policies and extracting actions by taking the coordinates of the voxel with the highest value. Such a structured action space significantly reduces the amount of data needed and improves the generalisation of the learned policy. In particular, built on Transformer backbones, Shridhar et al. [34] and Gervet et al. [6] are able to take language tokens as input and develop language-conditioned policies. In this work, without loss of generality, we choose PerAct [34] as our high-level language-conditioned agent for various tasks. Taking the predicted 6-DoF NBP as input, RK-Diffuser naturally works as a low-level policy for PerAct. Similar to our work, James and Abbeel [15] combine a high-level C2F-ARM [20] with a low-level agent that learns to rank a set of sampled trajectories by human heuristics. This approach has been shown to work on a series of challenging manipulation tasks, but it is computationally heavy and not scalable, being conditioned on predefined motion generators. We show that HDP achieves strong multi-task manipulation capabilities with both kinematics awareness and high accuracy.

2.2. Diffusion Models

The diffusion model is a powerful class of generative models that learn to approximate the data distribution by iterative denoising processes. They have shown impressive results on both conditional and unconditional image, video, and 3D object generation [10, 11, 26, 32, 35, 38]. In the field of decision making, diffusion models have recently been adopted as a powerful policy class [1, 5, 21, 23, 37]. Specifically, Diffusion Policies [5] learn to generate diverse multi-modal trajectories for robot manipulation by conditional generation with imitation learning. Concurrent with our work, Xian et al. [40] propose ChainedDiffuser as a hierarchical agent. As we show in our experiments, the gripper-pose diffusion policy in ChainedDiffuser relies on inverse kinematics solvers to generate robot joint actions, which is susceptible to prediction errors and might violate the kinematic constraints of the robot. On the contrary, the proposed RK-Diffuser learns both the end-effector pose and joint position trajectories and refines the joint position trajectories by distilling the end-effector poses.
[Figure 3. Overview of Hierarchical Diffusion Policy (HDP). HDP is a multi-task hierarchical agent for kinematics-aware robotic manipulation. HDP consists of two levels: a high-level language-guided agent and a low-level goal-conditioned diffusion policy. From left to right, the high-level agent takes in 3D environment observations and language instructions, then predicts the next-best end-effector pose. This pose guides the low-level RK-Diffuser, which subsequently generates a continuous joint-position trajectory by conditional sampling and trajectory inpainting given the next-best pose and environment observations. To generate kinematics-aware trajectories, RK-Diffuser distills the accurate but less flexible end-effector pose trajectories into joint position space via differentiable robot kinematics.]

2.3. Differentiable Physics for Decision Making

Differentiable physics simulation constructs each simulation step as a differentiable computational graph, such that the environment steps are fully differentiable with respect to network parameters [4, 12, 13, 42]. Learning decision-making policies via differentiable physics has been shown to be more efficient and generalisable than learning in standard non-differentiable environments [3, 41], with the physics priors serving as an inductive bias on the gradients. Similar to differentiable physics, we make use of differentiable robot kinematics models [45] to distill the accurate but less reliable end-effector pose trajectory into the joint position space.

3. Preliminaries

3.1. Diffusion Models

Diffusion models are a powerful family of generative models that consist of forward and backward Markov-chain diffusion processes. Consider a real data distribution $q(x)$ and a sample $x_0 \sim q(x)$ drawn from it. The forward diffusion process adds Gaussian noise to $x_0$ in $K$ steps, which gives a sequence of noisy samples $\{x_i\}_{i=1}^{K}$. In DDPM [10], the noise is controlled by a variance scheduler:

$q(x_k \mid x_{k-1}) = \mathcal{N}(x_k; \sqrt{1-\beta_k}\, x_{k-1}, \beta_k I), \quad (1)$

$q(x_{1:K} \mid x_0) = \prod_{k=1}^{K} q(x_k \mid x_{k-1}), \quad (2)$

where $\beta_1, \dots, \beta_K$ are scheduler parameters. Theoretically, $x_\infty$ is distributed as an isotropic Gaussian. To reconstruct the distribution $q(x)$, diffusion models learn a conditional distribution $p_\theta(x_{k-1} \mid x_k)$ and generate new samples by

$p_\theta(x_{0:K}) = p(x_K) \prod_{k=1}^{K} p_\theta(x_{k-1} \mid x_k), \quad (3)$

$p_\theta(x_{k-1} \mid x_k) = \mathcal{N}(x_{k-1}; \mu_\theta(x_k, k), \Sigma_\theta(x_k, k)), \quad (4)$

where $p(x_K) = \mathcal{N}(0, I)$ under the condition that $\prod_{k=1}^{K}(1 - \beta_k) \approx 0$. The model can be trained by maximising the evidence lower bound (ELBO):

$\mathbb{E}_{x_0}[\log p_\theta(x_0)] \geq \mathbb{E}_q\left[\log \frac{p_\theta(x_{0:K})}{q(x_{1:K} \mid x_0)}\right] \quad (5)$

In the context of decision making, diffusion policies consider a trajectory of actions $a_{1:T} = \{a(t)\}_{t=1}^{T}$ and learn a conditional distribution $p_\theta(a^{k-1}_{1:T} \mid a^k_{1:T}, \{c_i\}_{i=1}^{N})$, where $\{c_i\}_{i=1}^{N}$ are $N$ additional conditions for policy learning, e.g., RGB observations, point cloud, robot states, etc. For simplicity, we abuse the notation and denote $a^k_{1:T}$ as $a^k$.
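As a concrete reference for Eqs. (1)-(2), here is a minimal sketch of the DDPM forward process using the standard closed form of $q(x_k \mid x_0)$; the scheduler values are assumptions.

```python
import torch

# Minimal DDPM forward-process sketch (Eqs. 1-2); scheduler values assumed.
K = 1000
betas = torch.linspace(1e-4, 0.02, K)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # prod_k (1 - beta_k)

def q_sample(x0, k, noise):
    # Standard closed form of q(x_k | x_0): sqrt(abar_k) x0 + sqrt(1 - abar_k) eps
    ab = alphas_bar[k].view(-1, *([1] * (x0.dim() - 1)))
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

x0 = torch.randn(4, 64)                      # e.g. a flattened action trajectory
k = torch.randint(0, K, (4,))
eps = torch.randn_like(x0)
xk = q_sample(x0, k, eps)                    # noisy sample fed to the denoiser
# A denoiser would be trained to recover eps (or x0) from (xk, k, conditions).
```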
3.2. Differentiable Kinematics

Differentiable simulation aims to encode the physics simulation steps as a differentiable computational graph. Take a simple point-mass system as an example:

$y_t = y_{t-1} + \Delta t \cdot v_t, \quad v_{t+1} = v_t + \Delta t \cdot \frac{F}{m}, \quad (6)$

where the force $F$ is the input to the system, $m$ is the mass, $v$ is the speed, and $y$ is the position of the point. Importantly, such a system is differentiable, and we can optimise the input force $F$ using the gradients from the positions $y$. Similarly, in the context of robotics, given a predefined URDF model of a robot, the end-effector pose $s_p$ can be obtained by a differentiable forward kinematics function $f_K$ as $s_p = f_K(s_j)$, where $s_j$ denotes the joint angles. Thus, given a loss function $\mathcal{L}(s_p)$ over the gripper poses, the joint positions can be directly updated by the gradients $\frac{\partial \mathcal{L}(s_p)}{\partial s_j}$.

4. Hierarchical Diffusion Policy

The overall pipeline of HDP is shown in Fig. 3.

Problem Definition. We aim to learn an HDP policy $\pi(a \mid o, l)$, which processes the RGB-D observation $o$ and the language instruction $l$ specifying the task to predict a hybrid action $a$. Here, $a$ consists of a trajectory $a_{\text{joint}} = \{a(0), a(1), \dots, a(T)\}$ and a gripper opening / closing action $a_{\text{grip}}$, where $T$ is the trajectory length and $a(i) \in \mathbb{R}^N$, with $N$ denoting the number of robot joints. For brevity, we write actions $a$ without the temporal index within the episode.

Factorised Hierarchical Policy. To tackle long-horizon context-aware manipulation tasks, we factorise the policy $\pi(a \mid o, l)$ into a hierarchical structure. Specifically, $\pi(a \mid o, l) = \pi_{\text{high}}(a_{\text{high}} \mid o, l) \circ \pi_{\text{low}}(a \mid o, a_{\text{high}})$. Here, the high-level action $a_{\text{high}} = (a_{\text{pose}}, a_{\text{grip}}) \sim \pi_{\text{high}}$ consists of (1) the end-effector pose action $a_{\text{pose}} = (a_{\text{trans}}, a_{\text{rot}})$, with translation action $a_{\text{trans}} \in \mathbb{R}^3$ and quaternion rotation action $a_{\text{rot}} \in \mathbb{R}^4$; and (2) a binary gripper action $a_{\text{grip}} \in \mathbb{R}$. Conditioned on the high-level action $a_{\text{high}}$, we parameterise the low-level policy $\pi_{\text{low}}(a \mid a_{\text{high}}, o)$ with RK-Diffuser and learn to generate accurate joint position trajectories $a_{\text{low}} = a_{\text{joint}}$. Such a factorisation offloads the complex and expensive task-level understanding from language instructions to the high-level agent, leaving only control to be learned by a simple, goal-conditioned low-level agent. During inference, HDP works in a sequential manner and we take $a = \{a_{\text{joint}}, a_{\text{grip}}\}$ as the output.

4.1. Dataset Preparation

We assume access to a multi-task dataset $D = \{\xi_i\}_{i=1}^{N_D}$ containing a total of $N_D$ expert demonstrations paired with language descriptions $D_l = \{l_i\}_{i=1}^{N_D}$. Note that a single task might have multiple variations, each with a different description, e.g., “open the middle drawer” or “open the bottom drawer”. Each demonstration $\xi = \{a_{\text{demo}}, o_{\text{demo}}\}$ consists of an expert trajectory $a_{\text{demo}}$ and the resulting observation $o_{\text{demo}}$. To enable the training of both the high-level policy $\pi_{\text{high}}$ and the low-level RK-Diffuser $\pi_{\text{low}}$, the action $a_{\text{demo}}$ includes: (1) end-effector poses $a_{\text{pose}}$; (2) the gripper opening / closing action $a_{\text{grip}}$; and (3) joint positions $a_{\text{joint}}$. The observation $o_{\text{demo}}$ includes multi-view calibrated RGB-D camera observations and robot states.

Keyframe Discovery. Referring to prior works [17, 20], training the high-level agent on all trajectory points is inefficient; instead, we apply the keyframe discovery method introduced in James and Davison [17]. Scanning through each trajectory $\xi$, we extract a set of $K_\xi$ keyframe indices $\{k_i\}_{i=1}^{K_\xi}$ that capture the principal bottleneck end-effector poses. Specifically, a frame is considered a keyframe if (1) the joint velocity is close to 0; and (2) the gripper open / close state remains unchanged. Unlike prior works, which keep only the keyframes for training, we maintain the keyframe indices and extract different segments of data to train both the high-level and low-level agents. The details are discussed in the following sections.
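The keyframe-discovery heuristic above can be sketched as follows; the velocity threshold and array layout are assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of keyframe discovery: a frame qualifies when the joints are
# near-stationary and the gripper state is unchanged from the previous frame.
def discover_keyframes(joint_vel, gripper_open, vel_eps=1e-2):
    # joint_vel: (T, N) joint velocities; gripper_open: (T,) binary states
    keyframes = []
    for t in range(1, len(gripper_open)):
        near_stationary = np.abs(joint_vel[t]).max() < vel_eps
        gripper_unchanged = gripper_open[t] == gripper_open[t - 1]
        if near_stationary and gripper_unchanged:
            keyframes.append(t)
    return keyframes

T, N = 100, 7
ks = discover_keyframes(np.random.randn(T, N) * 0.1, np.ones(T))
```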
4.2. High-Level Next-Best Pose Agent

For the high-level policy $a_{\text{high}} = (a_{\text{pose}}, a_{\text{grip}}) \sim \pi_{\text{high}}(a \mid o, l)$, we utilise a next-best pose agent [17] with structured action representations. In this work, to parameterise $\pi_{\text{high}}$ and fulfil this objective, we employ Perceiver-Actor (PerAct) [34]. PerAct is a language-conditioned Behaviour Cloning (BC) agent with Transformer [36] backbones. PerAct achieves its high sample efficiency, generalisability and accuracy through the use of high-resolution voxel scene representations to predict 3D voxel-based action-value maps. To tackle the large number of visual and language tokens, PerAct adopts PerceiverIO [14], which encodes the inputs with a small set of latent vectors and reduces the computational complexity.

Action Spaces. PerAct uses discrete action spaces for all action heads, including (1) a discrete policy head over the voxels for $a_{\text{trans}}$ and (2) a pair of discrete policies for $a_{\text{rot}}$ and $a_{\text{grip}}$. Continuous actions are reconstructed by converting the discrete indices according to the action space ranges.

Model Training. For the high-level agent, we use only the keyframes for training. In addition, following Shridhar et al. [34], we use demo augmentation and translation augmentation to generate more samples. The network is optimised by behaviour cloning losses, i.e., cross-entropy losses in the discrete action space:

$\mathcal{L}_{\text{high}} = -\mathbb{E}_{k \sim \xi, \xi \sim D}\left[\log \pi_{\text{high}}(a_{\text{demo}}(k) \mid o, l)\right] \quad (7)$

where $a_{\text{demo}}(k)$ is the expert action at keyframe $k$.

4.3. Low-Level RK-Diffuser

Given the predicted high-level action $a_{\text{high}}$, we perform conditional trajectory generation with RK-Diffuser through denoising diffusion processes. A standard diffusion policy for robotic manipulation considers end-effector pose diffusion:

$p_\theta(a^{k-1}_{\text{pose}} \mid a^k_{\text{pose}}, C_{\text{pose}}) = \mathcal{N}(a^{k-1}_{\text{pose}}; \mu_\theta(a^k_{\text{pose}}, C_{\text{pose}}, k), \Sigma_\theta(a^k_{\text{pose}}, C_{\text{pose}}, k)) \quad (8)$

where $C_{\text{pose}}$ consists of the conditional variables, including the known start pose $a^0_{\text{pose}}(0)$, the next-best pose $\hat{a}^0_{\text{pose}}(T)$ predicted by the high-level agent, the low-dimensional state $s$ of the robot, the end-effector pose, the gripper open amount, and the point cloud of the environment $v$. Besides using the start and next-best pose as conditional variables of the networks, we inpaint the trajectory with the start pose and the predicted next-best pose at each denoising step. This allows the inpainting operation to act as a hard constraint on the diffusion process, guaranteeing that the last step of the trajectory is always aligned with the output of the high-level agent. Prior to execution, the end-effector pose trajectory must be processed by an inverse kinematics (IK) solver to determine the corresponding joint positions. However, the predicted end-effector pose trajectory lacks kinematics awareness, and there is a high likelihood of it violating the kinematic constraints.
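To illustrate the inpainting-as-hard-constraint idea just described, here is a hedged sketch of a reverse-diffusion sampling loop that overwrites the first and last trajectory steps at every denoising iteration; `denoise_step` is a hypothetical stand-in for one step of the learned pose model.

```python
import torch

# Hedged sketch of trajectory inpainting: the first and last poses are
# re-imposed around every denoising step, acting as a hard constraint.
def sample_with_inpainting(denoise_step, start_pose, goal_pose, T=64, K=100):
    traj = torch.randn(T, 7)                        # 3D translation + quaternion
    for k in reversed(range(K)):
        traj[0], traj[-1] = start_pose, goal_pose   # inpaint before denoising
        traj = denoise_step(traj, k)
        traj[0], traj[-1] = start_pose, goal_pose   # re-impose after the step
    return traj

# Toy denoiser stand-in (not a trained model): slightly shrinks the noise.
fake_step = lambda traj, k: 0.99 * traj
traj = sample_with_inpainting(fake_step, torch.zeros(7), torch.ones(7))
```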
Consider, for example, that each step of the predicted trajectory has a probability $p$ of violating the IK constraints. For a trajectory of length $T$, the probability that the trajectory violates the constraint is $p_{\text{error}} = 1 - (1 - p)^T$, and $\lim_{T \to \infty} p_{\text{error}} = 1$. As we show in our experiments, IK errors contribute to most of the failure cases of end-effector pose trajectory diffusion.

Kinematics-Aware Diffusion. As an alternative to using IK solvers, the robot could be operated through joint position control. This approach provides direct and complete control of the robot. However, learning a trajectory diffusion model in the joint position space is challenging. In the case of end-effector pose diffusion models, we can impose accurate and strong constraints with the predicted next-best pose $\hat{a}^0_{\text{pose}}(T)$. However, for an over-actuated 7-DoF robot arm, a 6-DoF end-effector pose $\hat{a}^0_{\text{pose}}(T)$ might have an infinite number of corresponding joint positions $\hat{a}^0_{\text{joint}}(T)$, which makes it difficult to perform inpainting for joint position diffusion. As we show in our experiments, a naive joint position diffusion model tends to be less accurate for goal-conditioned control, especially for the end poses. To tackle this issue, we introduce Robot Kinematics Diffuser (RK-Diffuser). Similar to Xian et al. [40], RK-Diffuser learns an end-effector pose diffusion model $p_\theta(a^{k-1}_{\text{pose}} \mid a^k_{\text{pose}}, C_{\text{pose}})$ which generates accurate but less reliable end-effector pose trajectories. RK-Diffuser further learns an additional joint position diffusion model:

$p_\phi(a^{k-1}_{\text{joint}} \mid a^k_{\text{joint}}, C_{\text{pose}}) = \mathcal{N}(a^{k-1}_{\text{joint}}; \mu_\phi(a^k_{\text{joint}}, C_{\text{pose}}, k), \Sigma_\phi(a^k_{\text{joint}}, C_{\text{pose}}, k)) \quad (9)$

where we use the same set of conditional variables $C_{\text{pose}}$ for conditional generation; for inpainting, however, we only fix the initial joint action $a^0_{\text{joint}}(0)$. For action trajectories sampled from each learned policy, $a^0_{\text{pose}} \sim p_\theta(a^0_{\text{pose}} \mid a^1_{\text{pose}}, C_{\text{pose}})$ and $a^0_{\text{joint}} \sim p_\phi(a^0_{\text{joint}} \mid a^1_{\text{joint}}, C_{\text{pose}})$, we can build a mapping between the two by treating the differentiable robot kinematics model $f_K$ as a function $\hat{a}^0_{\text{pose}} = f_K(a^0_{\text{joint}})$. During inference, initialised with a near-optimal solution $a^0_{\text{joint}}$, we can optimise the joint positions so that the predicted end-effector poses $\hat{a}^0_{\text{pose}}$ are close to $a^0_{\text{pose}}$ using gradients:

$a^0_{\text{joint}} \leftarrow a^0_{\text{joint}} - \alpha \frac{\partial \lVert a^0_{\text{pose}} - \hat{a}^0_{\text{pose}} \rVert}{\partial a^0_{\text{joint}}}, \quad (10)$

where $\alpha$ is the learning rate. This gives a trajectory $a^{0*}_{\text{joint}}$ that does not violate the kinematics constraints of the robot while achieving high accuracy on manipulation tasks.
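A minimal sketch of the gradient-based refinement in Eq. (10) follows. A real implementation would use a differentiable forward-kinematics model built from the robot's URDF (e.g., via a library such as pytorch-kinematics); here a toy linear map stands in for $f_K$ so the example is self-contained, relying on the same autograd mechanism as the point-mass system in Eq. (6).

```python
import torch

# `fk` is a stand-in for a differentiable forward-kinematics model; a toy
# linear map is used so the sketch runs without a URDF.
W = torch.randn(6, 7)
def fk(q):                      # joints (T, 7) -> end-effector poses (T, 6)
    return q @ W.T

a_joint = torch.randn(64, 7, requires_grad=True)   # joint trajectory (near-optimal init)
a_pose = torch.randn(64, 6)                        # target poses from the pose diffuser

alpha = 0.05
for _ in range(100):
    loss = (a_pose - fk(a_joint)).norm()           # || a_pose - fk(a_joint) ||
    grad, = torch.autograd.grad(loss, a_joint)
    with torch.no_grad():
        a_joint -= alpha * grad                    # Eq. (10) update
```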
Networks. The low-level RK-Diffuser takes as input the start pose, the end pose, the RGB-D image of the first-step observation, a vector of the robot's low-dimensional states, and the trajectory rank. For the RGB-D image, we first convert it to a point cloud in the world frame and extract features with PointNet++ [29]; for the other vector features, we use a 4-layer MLP. For the temporal encoding network, we found that the temporal Conv1D U-Net used in Janner et al. [21] performs well, with no clear performance gap compared with the commonly adopted Transformer backbones.

Model Training. When training the diffusion models, we aim to maximise the ELBO on the dataset (Eqn. 5). However, taking the predicted next-best poses from the high-level policy $\pi_{\text{high}}$ is inefficient, as the prediction might be sub-optimal and slow. To alleviate this issue, for each demonstration $\xi$ we construct sub-trajectories $\{\xi(i)\}_{i=1}^{K}$ by chunking the trajectory $\xi$ at the detected keyframe indices $\{k_i\}_{i=1}^{K_\xi}$. Next, we relabel each keyframe as a sub-goal of the training trajectory. This aligns with the training of the high-level agent $\pi_{\text{high}}$, and in practice $\pi_{\text{high}}$ and $\pi_{\text{low}}$ can be optimised at the same time. The relabeling idea also resembles Hindsight Experience Replay [2], which has been shown to be effective in hierarchical policy learning [24, 28]. Specifically, we have

$\mathcal{L}_{\text{low}} = -\beta_1 \mathcal{L}_{\text{pose}} - \beta_2 \mathcal{L}_{\text{joint}} - \beta_3 \mathcal{L}_{\text{joint} \to \text{pose}}$

$\mathcal{L}_{\text{pose}} = \mathbb{E}_{q, \xi(i) \sim D}\left[\log \frac{p_\theta(a^{0:K}_{\text{pose}} \mid \xi(i))}{q(a^{1:K}_{\text{pose}} \mid a^0_{\text{pose}}, \xi(i))}\right]$

$\mathcal{L}_{\text{joint}} = \mathbb{E}_{q, \xi(i) \sim D}\left[\log \frac{p_\phi(a^{0:K}_{\text{joint}} \mid \xi(i))}{q(a^{1:K}_{\text{joint}} \mid a^0_{\text{joint}}, \xi(i))}\right]$

$\mathcal{L}_{\text{joint} \to \text{pose}} = \mathbb{E}_{q, \xi(i) \sim D}\left[\log \frac{p_\phi(a^{0:K}_{\text{pose}} \mid \xi(i))}{q(a^{1:K}_{\text{pose}} \mid a^0_{\text{pose}}, \xi(i))}\right], \quad (11)$

where $\beta_1$, $\beta_2$, and $\beta_3$ are weighting parameters and $\xi(i)$ is a sub-trajectory sampled from the dataset with its start and end relabeled to two nearby keyframes. In particular, $\mathcal{L}_{\text{joint} \to \text{pose}}$ is made possible by predicting the end-effector poses from the joint positions via differentiable kinematics, $\hat{a}^{0:K}_{\text{pose}} = f_K(a^{0:K}_{\text{joint}})$. This allows us to train a joint-position trajectory that better regularises the joint positions, with the kinematics as an inductive bias.

Trajectory Ranking. During training, most manipulation algorithms use sampling-based motion planners, whose trajectories might be sub-optimal. In RK-Diffuser, we propose to add an additional conditional variable for each sub-trajectory, a trajectory rank $r_\xi = \frac{d_{\text{Euclidean}}}{d_{\text{travel}}}$, where $d_{\text{Euclidean}}$ is the Euclidean distance between the start and end pose and $d_{\text{travel}}$ is the distance travelled between the start and end pose. Intuitively, an optimal trajectory, ignoring the kinematics constraints of the robot, should have $r_\xi = 1$. To encourage RK-Diffuser to generate near-optimal trajectories, we set $r_\xi = 1$ during inference. An analysis of the influence of trajectory ranking is in the appendix.
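The trajectory rank can be computed directly from the end-effector translations, as in this sketch (the input layout is an assumption):

```python
import numpy as np

# Sketch of the trajectory rank r_xi = d_Euclidean / d_travel; a straight-line
# path gets rank 1, detours get rank < 1.
def trajectory_rank(positions):
    # positions: (T, 3) end-effector translations along the trajectory
    d_euclidean = np.linalg.norm(positions[-1] - positions[0])
    d_travel = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    return d_euclidean / max(d_travel, 1e-8)

straight = np.stack([np.linspace(0, 1, 10)] * 3, axis=1)
print(trajectory_rank(straight))   # ~1.0; r=1 is used as the condition at inference
```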
4.4. Practical Implementation Choices

For the high-level agent $\pi_{\text{high}}$, unlike past work [15, 34], we ignore $a_{\text{collision}}$, a binary variable used to indicate whether the motion planner should perform collision avoidance, because the low-level RK-Diffuser is trained to generate collision-aware optimal trajectories. For the low-level agent, unlike most diffusion models, which learn a noise prediction model and reconstruct the noise during the denoising steps, we follow Ramesh et al. [31] and observe that directly predicting the original actions $a^0_{\text{pose}}$ and $a^0_{\text{joint}}$ empirically gives better performance. Besides, when truncated by the keyframe indices, the sub-trajectories might have different lengths; to tackle this issue, we resample each trajectory to a length of 64 for batched training. More implementation details and discussions are in the appendix.

5. Experiments

In our experiments we show the following: (1) HDP outperforms the state-of-the-art methods across all RLBench tasks; (2) in general, hierarchical agents outperform simple low-level continuous control policies; and (3) task-aware planning is important for many manipulation tasks, in particular those involving articulated objects. In addition to this, we perform a series of ablation studies and show: (1) IK errors contribute to the majority of the failure cases of the end-effector pose diffusion policy; (2) joint position diffusion is less accurate without access to last-joint-position inpainting; and (3) 3D information and the corresponding feature extraction module are critical to the performance of RK-Diffuser. Finally, we show that HDP is capable of solving challenging real-world tasks efficiently and effectively, on an oven-opening task with only 20 demonstrations. For all simulation experiments, we use 100 demonstrations from RLBench [19] for each task and train for 100K iterations. On a real robot, we show that HDP can learn efficiently and effectively with only 20 demonstrations.

[Figure 4. Trajectory visualisations of the open box task: (a) RRT, (b) Joint Position, (c) RK-Diffuser.]

5.1. Trajectory Visualisations

Firstly, we aim to understand why learning a low-level controller is necessary. In Fig. 4, we visualise the trajectory of an open box task in RLBench. RRT produces a trajectory that correctly reaches the goal pose. Nevertheless, without understanding the task context, the trajectory generated by RRT causes the lid of the box to fall from the gripper. To visualise the joint position trajectories of both the vanilla joint position diffusion policy and RK-Diffuser, we further predict the end-effector poses from the joint positions. Although the joint position diffusion policy understands the task context, without direct inpainting of the next-best joint position the trajectory is less accurate. RK-Diffuser distills the accurate end-effector poses into the joint positions via differentiable kinematics, which achieves both high prediction accuracy and kinematics awareness.

5.2. Simulation Experiments

We compare HDP against (1) the state-of-the-art low-level control behaviour cloning agents, including ACT [44] and the vanilla Diffusion Policy [5]; and (2) the high-level next-best-pose agent with a fixed local planner, PerAct. In addition, we aim to demonstrate the benefit of the proposed RK-Diffuser against alternatives, including: (1) Planner: a hybrid planner of fixed linear paths and standard RRT, which is the default setup used in RLBench; (2) Planner + Bezier: in which an additional head is added to the PerAct backbone with a discrete output trained to choose the most appropriate trajectory generation method at each episode step, akin to Learned Path Ranking (LPR) [15] in the behaviour cloning setting; and (3) Diffuser: the vanilla Diffuser [21] framed as a goal-conditioned joint-position diffusion model. More details of the baseline algorithms are available in the appendix. We choose 11 RLBench tasks ranging from simple context-unaware grasping tasks to challenging tasks that require interacting with articulated objects. We present the results in Tab. 1 and make the following observations.

HDP outperforms the state-of-the-art methods across RLBench tasks. As shown in Tab. 1, HDP achieves an overall 80.2% success rate across 11 RLBench tasks.

Table 1. Success Rates (%) on RLBench Tasks. For red tasks, we expect no improvement of HDP over baselines; with blue tasks, we expect HDP to outperform many of the baselines.

| Method | reach target | take lid off saucepan | pick up cup | toilet seat up | open box | open door | open drawer | open grill | open microwave | open oven | knife on board | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ACT | 50 | 45 | 46 | 6 | 12 | 5 | 26 | 1 | 11 | 0 | 0 | 18.36 |
| Diffusion Policy | 43 | 25 | 24 | 5 | 4 | 22 | 28 | 9 | 7 | 0 | 0 | 15.18 |
| PerAct + Planner | 100 | 100 | 86 | 0 | 0 | 64 | 68 | 54 | 32 | 0 | 76 | 57.72 |
| PerAct + Planner + Bezier | 96 | 100 | 72 | 80 | 8 | 48 | 84 | 76 | 20 | 4 | 36 | 56.73 |
| PerAct + Diffuser | 100 | 94 | 84 | 80 | 82 | 88 | 84 | 82 | 20 | 18 | 52 | 71.27 |
| HDP | 100 | 96 | 82 | 86 | 90 | 94 | 90 | 88 | 26 | 58 | 72 | 80.18 |
In particular, we observe that on the simple tasks (red), which require no accurate trajectory control, most of the baselines achieve competent performance. However, on the more challenging tasks (blue), HDP maintains its performance while the baselines mostly fail, due to either a lack of understanding of the task context or inaccurate motion trajectory generation.

Hierarchical agents outperform simple low-level continuous control policies. Comparing ACT and the vanilla Diffusion Policy with the hierarchical agents, we observe that hierarchical agents consistently outperform the former. Empirically, both ACT and the Diffusion Policy fail to accurately detect intermediate keyframes, such as the handle of a drawer or an oven. This error is amplified by distribution shift, a common issue for behaviour cloning agents in long-horizon tasks. In contrast, the hierarchical agent, with PerAct at the high level, achieves better generalisation and simplifies the optimisation task of the low-level agent. When trained in a multi-task setting, both ACT and Diffusion Policy fail to manage different skills and generalise to unseen test examples. However, all algorithms achieve low performance on the open microwave task. We observe that this task has a highly diverse final end-effector pose distribution, which causes the high-level policy to have high variance and generate inaccurate next-best poses. This error is then propagated to the low-level agents. Further exploration of this issue is left for future study.

Learned low-level agents achieve better performance than motion planners. In particular, we note that even with accurate predictions of the next-best pose, the planner's lack of task understanding often leads to trajectories that deviate from the desired optimal trajectory. For instance, while PerAct + Planner achieves a 0% success rate on the open box task, it regularly succeeds in grasping the box lid. The predicted trajectory consistently exceeds the turning radius of the lid hinge, leading to the failure. This issue is exacerbated by strict kinematic limitations. For example, on the same task, PerAct + Planner + Bezier performs poorly because, unlike in the lift toilet seat task, the smooth opening curves prompted by the additional head of PerAct are kinematically infeasible. On the contrary, the learned trajectories capture the task context demonstrated in the data and result in superior performance on a greater number of tasks.

5.3. Ablation Studies

We perform ablation studies on the selected RLBench tasks to further understand the proposed low-level agent, RK-Diffuser. Since the high-level agent has been well studied in prior works [34], we swap it for an expert and focus only on the performance of the low-level agents. We present the results in Tab. 2.

Sampling-based motion planners might fail without understanding the task context. As a sampling-based planner, RRT achieves strong performance on simple tasks that only require goal information.
However, for tasks that require a fine-grained trajectory, e.g., toilet seat up, RRT fails completely. As shown in Sect. 5.1, trajectories generated by RRT can easily violate the task constraints. (RLBench uses a hybrid motion planner of RRT and predefined linear paths by default; to reproduce the RRT result, we disable the linear path trajectories manually.) One could handcraft task-specific constraints, but that is not generalisable across tasks.

IK errors contribute to most of the failure cases of the end-effector pose diffusion policy. Pose Diffusion denotes learning a diffusion policy directly over the end-effector pose trajectories and generating robot controls by solving the inverse kinematics. We observe that although Pose Diffusion achieves strong performance on several tasks, e.g., open microwave, it suffers from an overall 24.55% IK error rate. Specifically, most of the IK errors are caused by invalid quaternions and contribute to 75% of its failure cases. In particular, the IK error rate increases as the control difficulty increases. This explains the importance of learning joint position trajectories instead of end-effector poses.

Joint position diffusion is less accurate without access to last-joint-position inpainting. As in Sect. 4.3, an end-effector pose can have multiple corresponding joint positions, and hence it is infeasible for a joint position diffusion model to perform last-step inpainting. In our ablations, we show that it achieves worse performance than RK-Diffuser, especially on challenging tasks, e.g., open oven.

Table 2. Ablation Study: Success Rates (%) / IK Error Rates (%) of low-level agents with the ground-truth next-best poses. For red tasks, we expect no improvement of HDP over baselines; with blue tasks, we expect HDP to outperform many of the baselines.

| Method | reach target | take lid off saucepan | pick up cup | toilet seat up | open box | open door | open drawer | open grill | open microwave | open oven | knife on board | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RRT | 100 / 0 | 100 / 0 | 95 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 26.82 / 0 |
| Pose Diffusion | 100 / 0 | 85 / 6 | 93 / 0 | 93 / 4 | 88 / 8 | 24 / 68 | 3 / 88 | 64 / 22 | 98 / 0 | 9 / 62 | 82 / 12 | 67.18 / 24.55 |
| Joint Diffusion | 100 / 0 | 100 / 0 | 91 / 0 | 95 / 0 | 100 / 0 | 74 / 0 | 15 / 0 | 62 / 0 | 75 / 0 | 13 / 0 | 85 / 0 | 73.64 / 0 |
| RKD-RGB | 100 / 0 | 96 / 0 | 78 / 0 | 40 / 0 | 98 / 0 | 94 / 0 | 78 / 0 | 36 / 0 | 80 / 0 | 0 / 0 | 94 / 0 | 72.18 / 0 |
| RKD-ResNet | 100 / 0 | 100 / 0 | 95 / 0 | 92 / 0 | 100 / 0 | 93 / 0 | 100 / 0 | 86 / 0 | 21 / 0 | 43 / 0 | 88 / 0 | 83.45 / 0 |
| RK-Diffuser | 100 / 0 | 100 / 0 | 98 / 0 | 100 / 0 | 100 / 0 | 95 / 0 | 100 / 0 | 90 / 0 | 88 / 0 | 75 / 0 | 94 / 0 | 94.55 / 0 |

[Figure 5. Real-robot execution sequences: (a) Open Oven, (b) Sort Objects into Drawer. For both tasks, the robot needs to accurately predict trajectories that respect the task context, conditioned on language. As the appliances have high resistance forces, a slight deviation from the expected trajectory would cause the robot to fail by exceeding the joint torque limit.]

3D information and the corresponding feature extraction module are critical to the performance of RK-Diffuser. As mentioned in Sect. 4.3, RK-Diffuser uses PointNet++ for point cloud feature extraction. For RKD-RGB, we discard the depth information and use a pretrained ResNet-50 to extract the image features; for RKD-ResNet, we ablate using a ResNet to extract features from the RGB-D image.
We observe that both achieve worse performance compared to the original RK-Diffuser, which indicates that understanding the 3D environment is necessary for generalisable and accurate control. We believe there are alternative representations and leave them for future study.

5.4. Real Robot Experiment

We also conducted real-world experiments on an oven-opening task and a sorting-objects-into-drawer task with a Franka Panda 7-DoF arm. We use 2 RealSense D415 cameras to capture the scene. For each sub-task we collect 10 demonstrations. Both tasks require the robot to accurately locate the target and control all of its joints, especially the orientation of the wrist, at every time step; otherwise, given the high resistance force of the oven, the arm will halt due to exceeding the joint torque limit. In summary, HDP achieves a 100% success rate on the oven-opening task and a 94% success rate on the sorting-objects-into-drawer task. Due to the nature of demo collection, we observe high variance in the demonstrated trajectories for the task. Intuitively, this leads to sub-optimal and highly diverse next-best pose predictions from the high-level agent, PerAct, some of which are out of distribution for RK-Diffuser. Interestingly, however, there appears to be minimal impact on RK-Diffuser, and the method is still capable of generalising to these unseen poses and generating accurate trajectories. Detailed results are in the appendix and are best viewed via the supplementary video."
},
{
"url": "http://arxiv.org/abs/2403.06700v2",
"title": "Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression",
"abstract": "Deep neural network-based image compression (NIC) has achieved excellent\nperformance, but NIC method models have been shown to be susceptible to\nbackdoor attacks. Adversarial training has been validated in image compression\nmodels as a common method to enhance model robustness. However, the improvement\neffect of adversarial training on model robustness is limited. In this paper,\nwe propose a prior knowledge-guided adversarial training framework for image\ncompression models. Specifically, first, we propose a gradient regularization\nconstraint for training robust teacher models. Subsequently, we design a\nknowledge distillation based strategy to generate a priori knowledge from the\nteacher model to the student model for guiding adversarial training.\nExperimental results show that our method improves the reconstruction quality\nby about 9dB when the Kodak dataset is elected as the backdoor attack object\nfor psnr attack. Compared with Ma2023, our method has a 5dB higher PSNR output\nat high bitrate points.",
"authors": "Zhi Cao, Youneng Bao, Fanyang Meng, Chao Li, Wen Tan, Genhong Wang, Yongsheng Liang",
"published": "2024-03-11",
"updated": "2024-03-16",
"primary_cat": "eess.IV",
"cats": [
"eess.IV"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Deep neural network-based image compression (NIC) has achieved excellent\nperformance, but NIC method models have been shown to be susceptible to\nbackdoor attacks. Adversarial training has been validated in image compression\nmodels as a common method to enhance model robustness. However, the improvement\neffect of adversarial training on model robustness is limited. In this paper,\nwe propose a prior knowledge-guided adversarial training framework for image\ncompression models. Specifically, first, we propose a gradient regularization\nconstraint for training robust teacher models. Subsequently, we design a\nknowledge distillation based strategy to generate a priori knowledge from the\nteacher model to the student model for guiding adversarial training.\nExperimental results show that our method improves the reconstruction quality\nby about 9dB when the Kodak dataset is elected as the backdoor attack object\nfor psnr attack. Compared with Ma2023, our method has a 5dB higher PSNR output\nat high bitrate points.",
"main_content": "INTRODUCTION Deep learning has undergone extensive research in the field of image compression[2, 3, 4, 5, 6, 7]. In terms of ratedistortion performance, the performance of deep learning models has surpassed that of traditional methods such as JPEG[8], JPEG2000[9], BPG[10], and even the recent Versatile Video Coding (VVC)[11]. However, as deep learningbased image compression models are applied in practice, considering the sensitivity of deep neural networks to minor input perturbations, the robustness of these models against adversarial perturbations has become an essential and unavoidable concern. Since the image compression models used in practice are mostly publicly accessible, in this paper, we assume that attackers can access all parameters of the neural network compression model. Therefore, the attacks mentioned in this paper are all considered white-box attacks. \u2217These authors contributed equally and are regarded as co-first authors. B Corresponding author, email: liangys@hit.edu.cn Ground Truth Hyper(PSNR Attack) 1.22bpp/23.75dB Ma2023(PSNR Attack) 1.19bpp/25.21dB Ours(PSNR Attack) 0.69bpp/28.62dB Fig. 1. Performance of the original model, Ma2023[1], and our model under attack. Our model outperforms both Hyper model and Ma2023[1] model in terms of both bpp and psnr performance. The concept of adversarial samples was first introduced by Szegedy[12], which unveiled the sensitivity of deep neural networks to input perturbations. Following this, Goodfellow introduced an efficient attack method called the Fast Gradient Sign Method (FGSM) by utilizing gradient signs[13]. Kurakin enhances the attack capability by breaking down the larger step in the FGSM iteration into many smaller steps[14]. Madry proposed the Projected Gradient Descent (PGD) method, which increases attack success rates through multiple iterations[15]. In the context of image compression tasks, Ma introduced a trainable noise as an input perturbation[1], using this noise during training to update the model\u2019s input and generate highly potent adversarial samples. Liu leveraged the PGD method to efficiently attack the model\u2019s bpp (bits per pixel) parameter[16]. We have enhanced the techniques proposed in [1], introducing two attack methods tailored for image compression: psnr attack and bpp attack. Adversarial Training[17, 18], serves as the most effective method to enhance a model\u2019s resilience against adversarial attacks. The core idea of this technique is to introduce adversarial samples and use them as supplementary data alongside the original dataset for model training. The training process can be framed as a min-max problem, where the maximization problem involves generating adversarial samples with strong attack capabilities and the minimization problem focuses on training the model to minimize the output loss for these adarXiv:2403.06700v2 [eess.IV] 16 Mar 2024 \fRobust prior knowledge bpp_tc Stage1 :Teacher Network Teacher Teacher Finetune Finetune Stage2 :Finetune Network \ud835\udc3f1(\ud835\udc65\ud835\udc65\u2032, \ud835\udc65\ud835\udc65\u2032 \u0de1)+ \ud835\udefc\u22c5\ud835\udc3f1(bpp_tc, \ud835\udc4f\ud835\udc5d\ud835\udc5d_\ud835\udc53\ud835\udc61)+\ud835\udefd\u22c5 Smooth loss \ud835\udc4f\ud835\udc5d\ud835\udc5d_\ud835\udc53\ud835\udc61 PKDT Noise \ud835\udc65\ud835\udc65 \u0ddc \ud835\udc65\ud835\udc65 \ud835\udc65\ud835\udc65\u2032 \u0de1 \ud835\udc65\ud835\udc65\u2032 Fig. 2. Prior knowledge-guided adversarial training framework for image compression models. 
However, applying adversarial training directly to image compression tasks yields relatively minor improvements in model robustness. To further enhance robustness, we introduce prior knowledge into the adversarial training process through a distillation framework [19]. The contributions of this paper are summarized as follows:
• We propose a smooth loss to train the teacher model and utilize this teacher model to generate robustness-oriented prior knowledge.
• We propose a distillation framework to transfer prior knowledge and use this knowledge to guide the model's adversarial training.
• Extensive experiments demonstrate that our proposed method significantly enhances the model's robustness. On the Kodak dataset, our method can achieve a maximum robustness improvement of up to 23% against the psnr attack.

2. METHOD
Our method effectively improves the model's robustness by introducing a distillation framework for transferring prior knowledge. The framework is illustrated in Fig. 2 and consists of two components: a teacher network and a finetune network. The teacher network provides the prior knowledge required for training the finetune network; a gradient regularization term is used to train the teacher network, enabling it to generate prior knowledge carrying robustness-related information. The finetune network then uses the prior knowledge generated by the teacher model to guide adversarial training. Since the psnr error caused by attacks is relatively large, using psnr as the prior knowledge would make the adversarial training process difficult to converge; the error in bpp, by contrast, is very small, so we choose bpp as the prior knowledge. The full procedure is given in Algorithm 1.

Algorithm 1: Adversarial Training with Prior Knowledge
Input: pre-trained model M_pre, dataset D, training epochs N, psnr attack iterations Np, bpp attack iterations Nb
Output: finetune model M_ft
Stage I: train a robust teacher model M_tc
    M_tc ← M_pre
    for i < N do
        x ← Random_Sample(D, batchsize = 16)
        smooth_loss ← ∂bpp_main/∂x + ∂y/∂x
        loss ← RD_loss + α · smooth_loss
        update the parameters of M_tc with loss
    end for
    return teacher model M_tc
Stage II: finetune network, adversarial training with prior knowledge
    M_ft ← M_pre
    for i < N do
        x_ori = [x1, x2, x3] ← Random_Sample(D, batchsize = 3)
        generate adversarial sample x'1 via the psnr attack on x1 with Np = 200; generate x'2 via the bpp attack on x2 with Nb = 100
        x_new ← [x'1, x'2, x3]
        feed x_ori into M_tc to obtain bpp_tc; feed x_new into M_ft to obtain bpp_ft
        D_loss ← ‖x_new − M_ft(x_new)‖₁
        R_loss ← ‖bpp_tc − bpp_ft‖₁
        smooth_loss ← ∂bpp_main/∂x + ∂y/∂x
        loss ← D_loss + α · R_loss + β · smooth_loss
        update the parameters of M_ft with loss
    end for
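Read as code, Algorithm 1 reduces to two ordinary training loops. Below is a minimal PyTorch-style sketch; the model output fields (.x_hat, .bpp, .bpp_main, .y), the loader that yields three-image batches in Stage II, and the attack helpers are assumptions made for illustration, not the authors' implementation:

import copy
import torch

def smooth_loss(model, x):
    # Gradient regularization (Eq. 1): penalise the sensitivity of bpp_main
    # and of the latent y with respect to the input x.
    x = x.clone().requires_grad_(True)
    out = model(x)  # assumed output fields: x_hat, bpp, bpp_main, y
    g_bpp, = torch.autograd.grad(out.bpp_main.sum(), x, create_graph=True)
    g_y, = torch.autograd.grad(out.y.sum(), x, create_graph=True)
    return g_bpp.abs().mean() + g_y.abs().mean()

def train_with_prior_knowledge(M_pre, loader, rd_loss, psnr_attack, bpp_attack,
                               epochs=100, alpha=1.0, beta=1.0, lr=5e-5):
    # Stage I: train a robust teacher with the gradient constraint.
    M_tc = copy.deepcopy(M_pre)
    opt = torch.optim.Adam(M_tc.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:
            loss = rd_loss(M_tc, x) + alpha * smooth_loss(M_tc, x)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage II: adversarial finetuning guided by the teacher's bpp.
    M_ft = copy.deepcopy(M_pre)
    opt = torch.optim.Adam(M_ft.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:  # x: a batch of three images [x1, x2, x3]
            x_new = torch.stack([psnr_attack(M_ft, x[0]),  # psnr-attacked
                                 bpp_attack(M_ft, x[1]),   # bpp-attacked
                                 x[2]])                    # clean
            with torch.no_grad():
                bpp_tc = M_tc(x).bpp  # prior knowledge from the clean batch
            out = M_ft(x_new)
            d_loss = (x_new - out.x_hat).abs().mean()      # Eq. 3
            r_loss = (bpp_tc - out.bpp).abs().mean()       # Eq. 4
            loss = d_loss + alpha * r_loss + beta * smooth_loss(M_ft, x_new)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return M_ft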
2.1. Stage I: Training a Robust Teacher
Gradient regularization has been demonstrated to be an effective method for improving model robustness [20, 21]. However, because the image compression task has two performance metrics, we design two gradient constraint terms. For the bpp term, we observe that the change in bpp after a sample is attacked is mainly manifested in the bpp of the analysis transform, which we refer to as bpp_main; the corresponding constraint is formulated as $\partial\,\mathrm{bpp\_main}/\partial x$. For the psnr term, we likewise use the gradient of the latent $y$ produced by the analysis transform with respect to $x$ as the other constraint. The gradient smoothness constraint is therefore formulated as:

$$\mathrm{smooth\_loss} = \underbrace{\frac{\partial\,\mathrm{bpp\_main}}{\partial x}}_{\text{bpp constraint}} + \underbrace{\frac{\partial y}{\partial x}}_{\text{psnr constraint}} \qquad (1)$$

The overall training loss of the model is then:

$$\mathrm{loss} = \mathrm{RD\_loss} + \mathrm{smooth\_loss} \qquad (2)$$

where RD_loss is the rate-distortion loss function of the pre-trained model, $\mathrm{RD\_loss} = R + \lambda D$; the distortion term $D$ is measured by the peak signal-to-noise ratio (PSNR) between original and reconstructed images, $R$ is the bit rate, and $\lambda$ is a hyperparameter controlling the trade-off between rate and distortion. Through this training we obtain the teacher model, denoted the gradient model.

2.2. Stage II: Transferring Prior Knowledge
We transfer prior knowledge from the teacher network to the finetune network through a distillation framework. The prior knowledge is specifically the bpp obtained by feeding the input samples into the teacher model, denoted bpp_tc. Let $x$ be an input sample, $\hat{x}$ the reconstruction produced by the finetune model, and bpp_ft the corresponding bit rate. The loss for training the finetune network consists of three components. First, the reconstruction error:

$$\mathrm{D\_loss} = \|x - \hat{x}\|_1 \qquad (3)$$

Second, the term introducing the prior knowledge:

$$\mathrm{R\_loss} = \|\mathrm{bpp\_tc} - \mathrm{bpp\_ft}\|_1 \qquad (4)$$

Last, the smooth loss of Equation (1). The overall training loss of the model is then:

$$\mathrm{loss} = \mathrm{D\_loss} + \alpha \cdot \mathrm{R\_loss} + \beta \cdot \mathrm{smooth\_loss} \qquad (5)$$

The entire training process is depicted in Algorithm 1.

2.3. Attack Method
The image compression task involves two evaluation metrics, psnr (peak signal-to-noise ratio) and bpp (bits per pixel), which directly determine the model's compression performance; we therefore attack the two metrics separately. When attacking psnr, we keep bpp almost unchanged, simulating a real-world scenario in which image transmission efficiency remains constant but decompression quality deteriorates significantly. Conversely, when attacking bpp, we keep psnr almost unchanged, simulating a scenario in which decompression quality remains constant but transmission efficiency drops significantly. Let $x$ be the input image, $n$ the noise, and $x^* = x + n$ the adversarial sample. The input image and the adversarial sample are reconstructed to $\hat{x}$ and $\hat{x}^*$, with bit rates bpp_ori and bpp, respectively. We improve the approach outlined in [1] by modifying the loss function to:

$$\arg\min_{n} L_d = \begin{cases} \|n\|_2 + \mathrm{bpp\_error}, & \|n\|_2^2 \ge \epsilon \\ 1 - \|x - \hat{x}^*\|_2^2, & \|n\|_2^2 < \epsilon \end{cases} \qquad (6)$$

$$\arg\min_{n} L_d = \begin{cases} \|n\|_2 + \mathrm{psnr\_error}, & \|n\|_2^2 \ge \epsilon \\ -\mathrm{bpp}, & \|n\|_2^2 < \epsilon \end{cases} \qquad (7)$$

where $\epsilon$ is the $l_2$ threshold of the input noise, $\mathrm{bpp\_error} = \|\mathrm{bpp} - \mathrm{bpp\_ori}\|_1$, and $\mathrm{psnr\_error} = \left\| \|x - \hat{x}^*\|_2^2 - \|x - \hat{x}\|_2^2 \right\|_1$. The psnr attack is then implemented by optimizing Equation (6), and the bpp attack by optimizing Equation (7).
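The psnr-attack objective of Eq. (6) can be sketched as a small optimization loop over the noise; the model output fields and the mean-based norm conventions below are illustrative assumptions, not the authors' exact implementation:

import torch

def psnr_attack(model, x, iters=200, eps=5e-4, lr=1e-3):
    # Eq. (6): outside the noise budget, shrink the noise while keeping bpp
    # close to the clean bpp; inside the budget, maximise reconstruction error.
    bpp_ori = model(x).bpp.detach()   # model(x) assumed to expose .x_hat, .bpp
    n = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([n], lr=lr)
    for _ in range(iters):
        x_adv = (x + n).clamp(0.0, 1.0)
        out = model(x_adv)
        if float(n.detach().pow(2).mean()) >= eps:  # ||n||_2^2 >= eps branch
            loss = n.pow(2).mean().sqrt() + (out.bpp - bpp_ori).abs().mean()
        else:                                       # ||n||_2^2 < eps branch
            loss = 1.0 - ((x - out.x_hat) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + n).clamp(0.0, 1.0).detach()

The bpp attack of Eq. (7) follows by swapping the two terms: psnr_error replaces bpp_error in the first branch, and the second branch minimises −bpp instead of the distortion gap.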
3. EXPERIMENT
A. Experiment Setup
Datasets. The training data comes from CLIC and DIV2K. Two datasets are used for testing: the Kodak dataset, with 24 images of size 768×512, and the DIV2K valid dataset, with 100 high-quality images.
Training details. We train each model for 100 epochs with the Adam optimizer. For the pre-trained model, the learning rate is initially set to 10⁻⁴ and then halved every 25 epochs after the initial 50 epochs. For the gradient model and our finetune model, the learning rate is initially set to 5 × 10⁻⁵ and then halved every 25 epochs after the initial 25 epochs. The pre-trained model is trained on the CLIC dataset; the gradient model and our finetuned model are trained on the DIV2K dataset and, importantly, both are finetuned from the pre-trained model. In our proposed adversarial training process, the number of iterations for the psnr attack is Np = 200, the iteration count for the bpp attack is Nb = 100, and ε = 0.0005.
B. Results and Comparison
We tested two original models, Hyper [3] and Cheng [6], as well as the finetuned models obtained with our method, evaluating them on the various test datasets and under the different attack methods. Fig. 3 shows the performance comparison in different scenarios. When our method is applied to different models, there is a noticeable enhancement in robustness; the improvement is especially pronounced for models with large parameter counts, such as Cheng, compared to the Hyper model. Our method also effectively enhances model robustness on both the low-resolution and the high-resolution test datasets. Fig. 4 illustrates the robustness performance of our method at a different noise threshold, ε = 0.001. The model's robustness remains significantly enhanced even at this larger threshold compared to ε = 0.0005; in particular, for the bpp attack, the robustness improvement can reach up to 23%. Table 1 contains the data points for the low-bitrate scenario of the leftmost graph in Fig. 3. The robustness improvement brought by our full method is again higher than that of the gradient model, which is trained with Stage I only. Adversarial training can potentially reduce the model's generalization ability, resulting in less improvement in robustness against other attack methods.
However, when we compare our approach with the attack method proposed in Ma2023 [1] (Fig. 5), it is evident that even on adversarial samples that were not used in our adversarial training, our method still exhibits better robustness.

Fig. 3. R-D performance of the model under different attack scenarios. The two graphs on the left show the robustness of our method applied to the Hyper and Cheng models on the Kodak test dataset; the first pertains to the psnr attack and the second to the bpp attack, with attack iterations Np = Nb = 1000. The two graphs on the right are based on the DIV2K valid dataset, with attack iterations Np = Nb = 500. Note that in the second graph on the left, the last two data points on the curve Ours_Cheng (bpp attack) have bpp exceeding 9.

Fig. 4. R-D performance of the model under another noise threshold (Hyper model, ε = 0.001, Kodak).

Table 1. R-D performance of the models (Kodak, low-bitrate point):

model              clean            psnr attack       bpp attack
                   bpp     psnr     bpp     psnr      bpp     psnr
Hyper              0.4978  33       0.498   22.3502   0.9161  33.7702
Ours w/o Stage II  0.4322  32.2592  0.4309  25.9039   0.7225  33.1432
Ours               0.4323  32.0802  0.4286  28.4039   0.6822  32.9933

Fig. 5. R-D performance compared with Ma2023.

C. Ablation Experiments
Table 2 demonstrates the optimality of our method under the same training parameter conditions. The second row represents the most effective configuration, the one used in Algorithm 1. The third row gives the results of training without the smooth loss term in Stage II. Analogously, the last two rows correspond to removing ∂y/∂x and ∂bpp_main/∂x in Stage II, respectively. The test dataset is Kodak, with iteration counts Np = 400 and Nb = 200 and noise threshold ε = 0.0005. From the results we observe that removing any of the constraint terms deteriorates the model's robustness. We also notice that the impact of removing any single constraint is relatively small, which further validates that, in our proposed method, the most significant improvement in robustness comes from the introduction of prior knowledge.
Table 2. Results of the ablation experiments:

model                   clean            psnr attack       bpp attack
                        bpp     psnr     bpp     psnr      bpp     psnr
best                    0.4323  32.0802  0.4273  28.6556   0.6953  33.0194
best w/o smooth loss    0.4358  32.1936  0.4328  28.4497   0.7214  33.0731
best w/o ∂y/∂x          0.4355  32.1549  0.4301  28.5293   0.7075  33.031
best w/o ∂bpp_main/∂x   0.4332  32.1463  0.4279  28.6224   0.7083  33.093"
}
]
}